For my QA instance of WSO2 API Manager (2.0.0), the log files are growing rapidly. I tried to manage the log file sizes through the config settings (Carbon and Audit). However, the other log files are also filling up fast, especially http_access*.log and wso2-apigw-errors.log. For now I have written a shell script that removes them periodically. I suspect the real cause behind the growing log files is a corrupted metrics H2 DB:
TID: [] [] [2017-11-11 08:45:37,589] ERROR {org.wso2.carbon.metrics.jdbc.reporter.JDBCReporter} - Error when reporting gauges {org.wso2.carbon.metrics.jdbc.reporter.JDBCReporter}
org.h2.jdbc.JdbcSQLException: Sequence "SYSTEM_SEQUENCE_1AC7F1C3_AD26_4518_BBF0_1E63028E0201" not found; SQL statement:
CREATE CACHED TABLE PUBLIC.METRIC_TIMER(
ID BIGINT DEFAULT (NEXT VALUE FOR PUBLIC.SYSTEM_SEQUENCE_1AC7F1C3_AD26_4518_BBF0_1E63028E0201) NOT NULL NULL_TO_DEFAULT SEQUENCE PUBLIC.SYSTEM_SEQUENCE_1AC7F1C3_AD26_4518_BBF0_1E63028E0201,
SOURCE VARCHAR(255) NOT NULL,
TIMESTAMP BIGINT NOT NULL,
NAME VARCHAR(255) NOT NULL,
COUNT BIGINT NOT NULL,
MAX DOUBLE NOT NULL,
MEAN DOUBLE NOT NULL,
MIN DOUBLE NOT NULL,
STDDEV DOUBLE NOT NULL,
P50 DOUBLE NOT NULL,
P75 DOUBLE NOT NULL,
P95 DOUBLE NOT NULL,
P98 DOUBLE NOT NULL,
P99 DOUBLE NOT NULL,
P999 DOUBLE NOT NULL,
MEAN_RATE DOUBLE NOT NULL,
M1_RATE DOUBLE NOT NULL,
M5_RATE DOUBLE NOT NULL,
M15_RATE DOUBLE NOT NULL,
RATE_UNIT VARCHAR(50) NOT NULL,
DURATION_UNIT VARCHAR(50) NOT NULL
) [90036-140]
For now, I have disabled metrics (in metrics.xml). How can I reset the metrics H2 DB and start collecting metrics again? Alternatively, how can I point the metrics DB to an RDBMS?
1) To get the embedded H2 metrics DB working again:
Shut down the server.
Delete the WSO2METRICS_DB.h2.db and WSO2METRICS_DB.lock.db files in <APIM_HOME>/repository/database/.
Start the server again with ./wso2server.sh -Dsetup
2) To use an RDBMS instead, update <APIM_HOME>/repository/conf/datasources/metrics-datasources.xml with your RDBMS connection details and restart the server.
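Once metrics collection is re-enabled, you can confirm the reporter is writing rows by querying one of the metrics tables. A minimal sketch against METRIC_TIMER (table and column names are taken from the failed CREATE TABLE in the error above; the ordering and row limit are illustrative):

-- Hedged sketch: check that timer metrics are being recorded again.
-- "TIMESTAMP" and "COUNT" are quoted because they collide with SQL keywords.
SELECT SOURCE, NAME, "TIMESTAMP", "COUNT", MEAN, P99
FROM METRIC_TIMER
ORDER BY "TIMESTAMP" DESC
LIMIT 10;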
Related
As posted on another forum, when upgrading XWiki from v7.0.1 to v13.10.9, a non-critical database table, xwikistatsvisit, which stores user visit statistics, was blocking the post-upgrade migrations. It contained over seven million records and was about 3 GB in size. As a workaround, we had to delete all records in the table, but the SQL command DELETE FROM xwikistatsvisit took over two hours.
I have verified from the ER diagram that the table is stand-alone, with no foreign keys referring to or from other tables. The database is MariaDB v10.9.2, installed on the same host.
The host under test is a medium virtual machine with an SSD, 4 Intel i9 CPUs, and 8 GB of RAM, running MariaDB v10.9.2. Also, the hypervisor needs "PAE/NX" and "Nested VT-x/AMD-V" enabled for acceptable performance; otherwise, the computing task gets stuck forever.
My Questions:
Why did the SQL command take so long? Is there any way to make it faster, e.g., by disabling keys and constraints? I am unfamiliar with this area.
I would highly appreciate any hints or suggestions.
The definition of the table xwikistatsvisit:
--
-- Table structure for table `xwikistatsvisit`
--
DROP TABLE IF EXISTS `xwikistatsvisit`;
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `xwikistatsvisit` (
`XWV_ID` bigint(20) NOT NULL,
`XWV_NUMBER` int(11) DEFAULT NULL,
`XWV_NAME` varchar(255) NOT NULL,
`XWV_CLASSNAME` varchar(255) DEFAULT NULL,
`XWV_IP` varchar(255) NOT NULL,
`XWV_USER_AGENT` longtext NOT NULL,
`XWV_COOKIE` longtext NOT NULL,
`XWV_UNIQUE_ID` varchar(255) NOT NULL,
`XWV_PAGE_VIEWS` int(11) DEFAULT NULL,
`XWV_PAGE_SAVES` int(11) DEFAULT NULL,
`XWV_DOWNLOADS` int(11) DEFAULT NULL,
`XWV_START_DATE` datetime DEFAULT NULL,
`XWV_END_DATE` datetime DEFAULT NULL,
PRIMARY KEY (`XWV_ID`),
KEY `XWVS_END_DATE` (`XWV_END_DATE`),
KEY `XWVS_UNIQUE_ID` (`XWV_UNIQUE_ID`),
KEY `XWVS_PAGE_VIEWS` (`XWV_PAGE_VIEWS`),
KEY `XWVS_START_DATE` (`XWV_START_DATE`),
KEY `XWVS_NAME` (`XWV_NAME`),
KEY `XWVS_PAGE_SAVES` (`XWV_PAGE_SAVES`),
KEY `XWVS_DOWNLOADS` (`XWV_DOWNLOADS`),
KEY `XWVS_IP` (`XWV_IP`),
KEY `xwv_user_agent` (`XWV_USER_AGENT`(255)),
KEY `xwv_classname` (`XWV_CLASSNAME`),
KEY `xwv_number` (`XWV_NUMBER`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;
The DELETE command removes the matching rows one at a time. If you want to delete all the records from such a big table, TRUNCATE should be a lot faster, since it empties the whole table at once.
TRUNCATE TABLE xwikistatsvisit;
See the difference between DELETE and TRUNCATE.
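If you ever need to remove most, but not all, of the rows, a common alternative to a huge DELETE is to copy the rows you want to keep into a fresh table and swap it in. A minimal sketch (table and column names come from the DDL above; the cutoff date is purely illustrative):

-- Hedged sketch: copy the rows to keep, then atomically swap the tables.
CREATE TABLE xwikistatsvisit_new LIKE xwikistatsvisit;

INSERT INTO xwikistatsvisit_new
SELECT * FROM xwikistatsvisit
WHERE XWV_END_DATE >= '2022-01-01';  -- illustrative cutoff

RENAME TABLE xwikistatsvisit TO xwikistatsvisit_old,
             xwikistatsvisit_new TO xwikistatsvisit;

DROP TABLE xwikistatsvisit_old;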
I have a question about changing the Airflow backend connection from SQLite to SQL Server. After setting the correct connection string for sql_alchemy_conn, I run airflow initdb and get the following error:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 13 for SQL Server][SQL Server]A table can only have one timestamp column. Because table 'task_reschedule' already has one, the column 'start_date' cannot be added. (2738) (SQLExecDirectW)") [SQL: '\nCREATE TABLE task_reschedule (\n\tid INTEGER NOT NULL IDENTITY(1,1), \n\ttask_id VARCHAR(250) NOT NULL, \n\tdag_id VARCHAR(250) NOT NULL, \n\texecution_date TIMESTAMP NOT NULL, \n\ttry_number INTEGER NOT NULL, \n\tstart_date TIMESTAMP NOT NULL, \n\tend_date TIMESTAMP NOT NULL, \n\tduration INTEGER NOT NULL, \n\treschedule_date TIMESTAMP NOT NULL, \n\tPRIMARY KEY (id), \n\tCONSTRAINT task_reschedule_dag_task_date_fkey FOREIGN KEY(task_id, dag_id, execution_date) REFERENCES task_instance (task_id, dag_id, execution_date)\n)\n\n'] (Background on this error at: http://sqlalche.me/e/f405)
This works for me:
In the migration file 0a2a5b66e19d_add_task_reschedule_table.py, add this (the import is only needed if the module does not already pull in the MySQL dialect):

from sqlalchemy.dialects import mysql

def mysql_datetime():
    return mysql.DATETIME(timezone=True)

and replace any line that uses timestamp(), such as:
sa.Column('execution_date', timestamp(), nullable=False, server_default=None),
with this:
sa.Column('execution_date', mysql_datetime(), nullable=False, server_default=None),
Once I made this change, the error above disappeared, but I am not sure whether there are any other unintended consequences. If there are, I will update here, or just resort to using a MySQL database.
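For context on why the original DDL fails: in SQL Server, TIMESTAMP is a deprecated synonym for rowversion, not a date/time type, and a table may contain only one such column, which is exactly what the error message says. A hedged sketch of the equivalent T-SQL with a real temporal type (the column list is copied from the error message; DATETIME2 is my substitution):

-- Hedged sketch (T-SQL): use DATETIME2 instead of TIMESTAMP/rowversion.
CREATE TABLE task_reschedule (
    id INTEGER NOT NULL IDENTITY(1,1),
    task_id VARCHAR(250) NOT NULL,
    dag_id VARCHAR(250) NOT NULL,
    execution_date DATETIME2 NOT NULL,
    try_number INTEGER NOT NULL,
    start_date DATETIME2 NOT NULL,
    end_date DATETIME2 NOT NULL,
    duration INTEGER NOT NULL,
    reschedule_date DATETIME2 NOT NULL,
    PRIMARY KEY (id),
    CONSTRAINT task_reschedule_dag_task_date_fkey
        FOREIGN KEY (task_id, dag_id, execution_date)
        REFERENCES task_instance (task_id, dag_id, execution_date)
);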
I'm trying to create a database from the E/R diagram I just created using MySQL Workbench. Can someone please help?
Executing SQL script in server
ERROR: Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near
CONSTRAINT `IDProveedor`
FOREIGN KEY (`IDproveedor`)
REFERENCES `Repu
at line 11
SQL Code:
-- -----------------------------------------------------
-- Table `Repuestos`.`Articulo`
-- -----------------------------------------------------
CREATE TABLE IF NOT EXISTS `Repuestos`.`Articulo` (
`IDArticulo` INT NOT NULL AUTO_INCREMENT,
`Nombre` VARCHAR(45) NOT NULL,
`IDproveedor` INT NOT NULL,
`Articulocol` VARCHAR(45) NOT NULL,
`Valor_Unitario` INT NOT NULL,
PRIMARY KEY (`IDArticulo`),
INDEX `NitProveedor_idx` (`IDproveedor` ASC) VISIBLE,
CONSTRAINT `IDProveedor`
FOREIGN KEY (`IDproveedor`)
REFERENCES `Repuestos`.`Proveedor` (`IDProveedor`)
ON DELETE NO ACTION
ON UPDATE NO ACTION)
ENGINE = InnoDB
SQL script execution finished: statements: 6 succeeded, 1 failed
Fetching back view definitions in final form.
Nothing to fetch.
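One plausible cause, assuming the script is being run against MariaDB rather than MySQL 8.0: the VISIBLE index attribute generated by Workbench is MySQL 8.0 syntax that MariaDB's parser does not recognize, so parsing fails on line 11 and the error reports the following CONSTRAINT clause as the offending text. A sketch of the same statement with that attribute removed:

-- Hedged sketch: identical table, minus the MySQL-8.0-only VISIBLE attribute.
CREATE TABLE IF NOT EXISTS `Repuestos`.`Articulo` (
  `IDArticulo` INT NOT NULL AUTO_INCREMENT,
  `Nombre` VARCHAR(45) NOT NULL,
  `IDproveedor` INT NOT NULL,
  `Articulocol` VARCHAR(45) NOT NULL,
  `Valor_Unitario` INT NOT NULL,
  PRIMARY KEY (`IDArticulo`),
  INDEX `NitProveedor_idx` (`IDproveedor` ASC),
  CONSTRAINT `IDProveedor`
    FOREIGN KEY (`IDproveedor`)
    REFERENCES `Repuestos`.`Proveedor` (`IDProveedor`)
    ON DELETE NO ACTION
    ON UPDATE NO ACTION)
ENGINE = InnoDB;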
I'm trying to create some deployment tools, and I don't want to use BTEQ. I've been trying to work with Teradata.Client.Provider in PowerShell, but I'm getting syntax errors on the creation of a table.
[Teradata Database] [3706] Syntax error: expected something between
';' and the 'IF' keyword.
SELECT * FROM DBC.TablesV WHERE DatabaseName = DATABASE AND TableName = 'MyTable';
IF ACTIVITYCOUNT > 0 THEN GOTO EndStep1;
CREATE MULTISET TABLE MyTable ,
NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
MyColId INTEGER GENERATED ALWAYS AS IDENTITY
(START WITH 1
INCREMENT BY 1
MINVALUE 0
MAXVALUE 2147483647
NO CYCLE)
NOT NULL,
MyColType VARCHAR(50) NULL,
MyColTarget VARCHAR(128) NULL,
MyColScriptName VARCHAR(256) NULL,
MyColOutput VARCHAR(64000) NULL,
isMyColException BYTEINT(1) NULL,
ExceptionOutput VARCHAR(64000) NULL,
MyColBuild VARCHAR(128) NULL,
MyColDate TIMESTAMP NOT NULL
)
PRIMARY INDEX PI_MyTable_MyColLogId(MyColLogId);
LABEL EndStep1;
I would rather not use BTEQ, as I've found it hasn't worked well in other deployment tools we have created and requires a number of hacks. Is there anything I can use that would avoid that tool?
What parse error? The CREATE will fail due to the double INTEGER in MyColId and the VARCHAR(max) in ExceptionOutput, which is an unknown data type in Teradata.
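Beyond the data types, the reported error itself comes from the flow-control lines: ACTIVITYCOUNT, GOTO, and LABEL are BTEQ commands, not SQL, so a plain SQL session rejects them; the existence check against DBC.TablesV has to be evaluated client-side (e.g., in PowerShell) before conditionally issuing the CREATE. A hedged sketch of the DDL alone, with the visible issues fixed (BYTEINT takes no length, and the PRIMARY INDEX must name a defined column; MyColId is assumed to be the intended one, since MyColLogId is not defined):

-- Hedged sketch: plain-SQL CREATE with the BTEQ flow control removed.
CREATE MULTISET TABLE MyTable,
  NO FALLBACK,
  NO BEFORE JOURNAL,
  NO AFTER JOURNAL,
  CHECKSUM = DEFAULT,
  DEFAULT MERGEBLOCKRATIO
(
  MyColId INTEGER GENERATED ALWAYS AS IDENTITY
    (START WITH 1 INCREMENT BY 1 MINVALUE 0 MAXVALUE 2147483647 NO CYCLE)
    NOT NULL,
  MyColType VARCHAR(50),
  MyColTarget VARCHAR(128),
  MyColScriptName VARCHAR(256),
  MyColOutput VARCHAR(64000),
  isMyColException BYTEINT,      -- BYTEINT(1) is invalid; BYTEINT takes no length
  ExceptionOutput VARCHAR(64000),
  MyColBuild VARCHAR(128),
  MyColDate TIMESTAMP NOT NULL
)
PRIMARY INDEX PI_MyTable_MyColId (MyColId);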
I've just performed a clean operation. I added a new schema to the properties file, then ran clean again to remove that schema from the DB. When I then try to do a migrate, it fails with the following:
Creating Metadata table: [xx].[schema_version]
Error executing statement at line 17: CREATE TABLE [xx].[schema_version] (
[version_rank] INT NOT NULL,
[installed_rank] INT NOT NULL,
[version] NVARCHAR(50) NOT NULL,
[description] NVARCHAR(200),
[type] NVARCHAR(20) NOT NULL,
[script] NVARCHAR(1000) NOT NULL,
[checksum] INT,
[installed_by] NVARCHAR(30) NOT NULL,
[installed_on] DATETIME NOT NULL DEFAULT GETDATE(),
[execution_time] INT NOT NULL,
[success] BIT NOT NULL
);
CREATE INDEX [schema_version_vr_idx] ON [xx].[schema_version] ([version_rank]);
CREATE INDEX [schema_version_ir_idx] ON [xx].[schema_version] ([installed_rank]);
CREATE INDEX [schema_version_s_idx] ON [xx].[schema_version] ([success]);
Also, when I try to initialize it using init, I get the following:
Creating Metadata table: [xx].[schema_version]
Error executing statement at line 17: CREATE TABLE [xx].[schema_version] (
[version_rank] INT NOT NULL,
[installed_rank] INT NOT NULL,
[version] NVARCHAR(50) NOT NULL,
[description] NVARCHAR(200),
[type] NVARCHAR(20) NOT NULL,
[script] NVARCHAR(1000) NOT NULL,
[checksum] INT,
[installed_by] NVARCHAR(30) NOT NULL,
[installed_on] DATETIME NOT NULL DEFAULT GETDATE(),
[execution_time] INT NOT NULL,
[success] BIT NOT NULL
);
CREATE INDEX [schema_version_vr_idx] ON [xx].[schema_version] ([version_rank]);
CREATE INDEX [schema_version_ir_idx] ON [xx].[schema_version] ([installed_rank]);
CREATE INDEX [schema_version_s_idx] ON [xx].[schema_version] ([success]);
ERROR: Occured in com.googlecode.flyway.core.dbsupport.SqlScript.execute() at line 91
ERROR: Caused by com.microsoft.sqlserver.jdbc.SQLServerException: The specified schema name "xx" either does not exist or you do not have permission to use it.
ERROR: Occured in com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError() at line 197
Please note that I've dropped all the objects, and I can confirm they do not exist in my DB instance.
How can I overcome this? I am concerned that when using the tool in dev and prod environments we can't just delete the DB instance and start again. At this point I can't use the tool to do the migration, and I don't want to delete the DB to work around this issue.
Flyway's schema creation is currently an all-or-nothing deal: either all schemas are missing, in which case all of them will be created, or at least one schema is present, in which case none will be created.
For you this means you must either create xx yourself or drop the other schemas first.
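A minimal T-SQL sketch of the first option (xx is the placeholder schema name from the logs above; AUTHORIZATION dbo is an assumption, and note that CREATE SCHEMA must be the only statement in its batch):

-- Hedged sketch: create the missing schema manually, then rerun migrate.
CREATE SCHEMA [xx] AUTHORIZATION [dbo];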