proenv>proserve dbname -S 2098 -H hostname -B 10000
OpenEdge Release 11.6 as of Fri Oct 16 19:02:26 EDT 2015
11:00:35 BROKER This broker will terminate when session ends. (5405)
11:00:35 BROKER The startup of this database requires 46Mb of shared memory. Maximum segment size is 1024Mb.
11:00:35 BROKER 0: dbname is a void multi-volume database. (613)
11:00:35 BROKER : Removed shared memory with segment_id: 39714816 (16869)
11:00:35 BROKER ** This process terminated with exit code 1. (8619)
I am getting the above error when I try to start the Progress database.
This is the problem:
11:00:35 BROKER 0: dbname is a void multi-volume database. (613)
My guess is you have just created the DB using prostrct create. You need to procopy an empty DB into your DB so that it has the schema tables.
procopy empty yourdbname
See: http://knowledgebase.progress.com/articles/Article/P7713
The database is void, which means it does not have any metaschema.
First create your database using the .st file (use prostrct create), then
copy the metaschema tables using emptyn.
Ex: procopy emptyn yourdbname
Then try to start your database.
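Putting the two answers together, a minimal sketch of the full sequence (assuming a dbname.st structure file already exists; names are illustrative):
proenv>prostrct create dbname dbname.st
proenv>procopy empty dbname
proenv>proserve dbname -S 2098 -H hostname -B 10000
prostrct create alone builds only the void structure; the procopy step copies the metaschema in, after which the broker no longer exits with error (613). (The emptyn in the second answer refers to the block-size-specific empty databases, e.g. empty2, empty4, empty8, shipped with OpenEdge.)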
I have a HANDLER command which works in MariaDB 10.5.15 but fails on 10.5.16.
HANDLER INDEX_NAME READ INDEX_NAME = (....,'......') LIMIT 1;
The failure happens in 'mysql_store_result'.
Is there any change in 10.5.16 that causes the failure?
From 10.6.x onward, it works again.
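For anyone trying to reproduce this, the general shape of a HANDLER read is sketched below with hypothetical table, index, and key values (the identifiers in the real statement above are redacted):
HANDLER my_table OPEN;
HANDLER my_table READ my_index = (1, 'abc') LIMIT 1;
HANDLER my_table CLOSE;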
I'm trying to run Airflow with an Azure SQL database as the backend, using an mssql+pyodbc connection string (all relevant drivers have been installed).
While Airflow is able to connect to the DB and create tables, i.e. airflow initdb runs successfully, I'm facing issues while running airflow scheduler; as a result, the triggered tasks are always stuck in the "running" state.
This is the error I get while running airflow scheduler:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near '1'. (102) (SQLExecDirectW)")
[SQL: SELECT dag.dag_id AS dag_dag_id
FROM dag
WHERE dag.is_paused IS 1 AND dag.dag_id IN (?)]
[parameters: ('example_http_operator',)]
(Background on this error at: http://sqlalche.me/e/13/f405)
I'm using apache-airflow==1.10.11.
If you were able to run Airflow + Azure SQL DB with any configuration, please feel free to jump in.
I found a document that talks about the configuration for running Airflow + Azure SQL DB. Maybe it's helpful for you.
Ref: Setting up Airflow on Azure & connecting to MS SQL Server
This post also gives some configurations for it: Apache Airflow - Connection issue to MS SQL Server using pymssql + SQLAlchemy
For MSSQL as the backend DB, there is a workaround in Airflow#10713. I'm using apache-airflow==1.10.15 and it solved the same error as yours.
The suggested command is attached, but I edited the file with vi instead of running sed.
RUN sed -i 's/import copy/import copy,sqlalchemy/g' /usr/local/lib/python3.6/site-packages/airflow/models/dag.py \
 && sed -i 's/DagModel.is_paused.is_(True)/DagModel.is_paused == sqlalchemy.sql.expression.true()/g' /usr/local/lib/python3.6/site-packages/airflow/models/dag.py
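For reference, the change those sed commands make inside airflow/models/dag.py amounts to the following (the first sed also adds sqlalchemy to the existing import copy line; only the rewritten expression is shown here, and its exact placement varies by Airflow version):
# before: SQLAlchemy renders this as "dag.is_paused IS 1",
# which SQL Server rejects ("Incorrect syntax near '1'")
DagModel.is_paused.is_(True)
# after: an ordinary equality comparison that MSSQL accepts
DagModel.is_paused == sqlalchemy.sql.expression.true()
The root cause is that SQL Server has no boolean literal, so the IS-style test that is_(True) produces is invalid syntax there, which matches the error above.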
On my virtual server (Win Server 2012 R2) I run some Tcl programs with a sqlite3 database.
All parts (Tcl, SQLite, the database (~25,000 items), and the code) are located in one shared folder, C:\data, which I can also access from my local PC (Win10) as drive S:. Several more programs work in the same manner; they are scheduled to run at night.
The code, now complete:
console show
# make the bundled library directory visible, then load sqlite3
set auto_path [file join [file dirname [info script]] "../lib"]
package require sqlite3
set datasource "./compdata.db3"
# hide the empty toplevel window of wish
wm withdraw .
sqlite3 db $datasource
# print every row of master_data
db eval {SELECT * FROM master_data;} {
    puts "$x1, $x2, $x3, $x4" ; update
}
db close
puts stderr ">>>Ok<<<"
exit
It runs perfectly, both locally on the server and when started from the PC.
Start command on server:
call c:\data\tcl858\bin\wish85.exe databasetest.t85
Start command on local PC:
call s:\tcl858\bin\wish85.exe databasetest.t85
If I add ORDER BY, it runs only when I start it from the PC.
db eval {SELECT * FROM master_data ORDER BY x1;} {
    puts "$x1, $x2, $x3, $x4" ; update
}
Started on the server, it fails with this error message:
Unable to open database file while executing "db eval {SELECT * FROM
master_data ORDER BY x1;} { puts "$x1, $x2, $x3, $x4" ; update }".
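One plausible mechanism, for context: an ORDER BY with no usable index makes SQLite sort into a temporary B-tree, which may spill to a temp file, and temp-file creation depends on the working/temp directory, which differs between the two launch contexts. A minimal sketch that keeps the sort in memory, assuming the ~25,000 rows fit comfortably in RAM:
db eval {PRAGMA temp_store = MEMORY;}
db eval {SELECT * FROM master_data ORDER BY x1;} {
    puts "$x1, $x2, $x3, $x4" ; update
}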
We are using MySQL in an Aurora cluster.
We have 2 instances: a master and a slave.
We are working with Spring transactions on top of a c3p0 connection pool.
We are using the MariaDB JDBC driver (version 2.2.3).
Our URL looks like this:
jdbc:mysql:aurora:myclaster-cluster.cluster-xxxxxx.us-east-1.rds.amazonaws.com:3306/db?rewriteBatchedStatements=true
When testing failover, every few failovers we get into a state of using a read-only connection:
Caused by: org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback; uncategorized SQLException for SQL [INSERT INTO a (a1, a2, a3, a4) VALUES (?, ?, ?, ?) on duplicate key update ]; SQL state [HY000]; error code [1290]; (conn=7) The MySQL server is running with the --read-only option so it cannot execute this statement; nested exception is java.sql.SQLException: (conn=7) The MySQL server is running with the --read-only option so it cannot execute this statement
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:645)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:866)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:927)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:937)
at com.persistence.impl.MyDao.insert(MyDao.java:52)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:75)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
... 1 common frames omitted
Caused by: java.sql.SQLException: (conn=7) The MySQL server is running with the --read-only option so it cannot execute this statement
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:198)
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:110)
at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:228)
at org.mariadb.jdbc.MariaDbPreparedStatementClient.executeInternal(MariaDbPreparedStatementClient.java:216)
at org.mariadb.jdbc.MariaDbPreparedStatementClient.execute(MariaDbPreparedStatementClient.java:150)
at org.mariadb.jdbc.MariaDbPreparedStatementClient.executeUpdate(MariaDbPreparedStatementClient.java:183)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:410)
at org.springframework.jdbc.core.JdbcTemplate$2.doInPreparedStatement(JdbcTemplate.java:873)
at org.springframework.jdbc.core.JdbcTemplate$2.doInPreparedStatement(JdbcTemplate.java:866)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:629)
... 9 common frames omitted
How can we force the driver to return connections only to the master instance? Is there a way to force Aurora to close all open connections upon failover?
Thanks
We solved the problem by implementing an ExceptionInterceptor, in which we closed the connection, forcing the pool to create a new one.
This workaround is relevant for mysql-connector-java 5.1.47:
@Override
public SQLException interceptException(SQLException sqlEx, Connection conn) {
    if (sqlEx.getErrorCode() == READ_ONLY_ERROR_CODE) {
        log.warn("Got read-only exception, closing the connection {}", sqlEx.getMessage());
        closeConnection(conn);
    }
    return sqlEx;
}
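For completeness, a minimal sketch of what such an interceptor can look like end to end in Connector/J 5.1.x; the class name, the error-code constant, and the close-quietly handling are illustrative, not the exact code from our project:
import java.sql.SQLException;
import java.util.Properties;
import com.mysql.jdbc.Connection;
import com.mysql.jdbc.ExceptionInterceptor;

public class ReadOnlyExceptionInterceptor implements ExceptionInterceptor {

    // MySQL error 1290: "The MySQL server is running with the --read-only option..."
    private static final int READ_ONLY_ERROR_CODE = 1290;

    @Override
    public void init(Connection conn, Properties props) { }

    @Override
    public void destroy() { }

    @Override
    public SQLException interceptException(SQLException sqlEx, Connection conn) {
        if (sqlEx.getErrorCode() == READ_ONLY_ERROR_CODE) {
            try {
                // closing the stale (now read-only) connection forces the
                // pool to check out a fresh one, which resolves to the new master
                conn.close();
            } catch (SQLException ignored) {
                // nothing useful to do if the close itself fails
            }
        }
        return sqlEx;
    }
}
The interceptor is registered through the Connector/J URL property exceptionInterceptors, e.g. jdbc:mysql://...?exceptionInterceptors=com.example.ReadOnlyExceptionInterceptor.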
While installing Oracle 11g XE on Docker I am getting an error.
Following is the output:
/etc/init.d/oracle-xe configure
Oracle Database 11g Express Edition Configuration
This will configure on-boot properties of Oracle Database 11g Express
Edition. The following questions will determine whether the database should
be starting upon system boot, the ports it will use, and the passwords that
will be used for database accounts. Press <Enter> to accept the defaults.
Ctrl-C will abort.
Specify the HTTP port that will be used for Oracle Application Express [8080]:8080
Specify a port that will be used for the database listener [1521]:1521
Specify a password to be used for database accounts. Note that the same
password will be used for SYS and SYSTEM. Oracle recommends the use of
different passwords for each database account. This can be done after
initial configuration:
Confirm the password:
Do you want Oracle Database 11g Express Edition to be started on boot (y/n) [y]:y
Starting Oracle Net Listener...Done
Configuring database...
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details
[root@b7c63c4e1da8 Disk1]# cd /u01/app/oracle/product/11.2.0/xe/config/log
[root@b7c63c4e1da8 log]# ls
CloneRmanRestore.log cloneDBCreation.log postDBCreation.log postScripts.log
[root@b7c63c4e1da8 log]# cat CloneRmanRestore.log
ORA-00845: MEMORY_TARGET not supported on this system
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
declare
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
One of the possible solutions I found was to remount the temporary filesystem to provide extra space, since it only contains approximately 6 GB in the Docker container. But I am unable to mount the memory in Docker.
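For context, ORA-00845 means the in-memory filesystem backing MEMORY_TARGET (/dev/shm on Linux) is too small for the requested target. If enlarging it is the route you want, Docker has a --shm-size flag for exactly this; a sketch with an illustrative image name:
docker run --shm-size=2g --name oracle-xe local/oracle-xe:11.2.0
The solution below takes the other route and stops relying on MEMORY_TARGET entirely.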
I got the solution for this:
We have to modify the files init.ora and initXETemp.ora at the path /u01/app/oracle/product/11.2.0/xe/config/scripts
with the values below, which comment out memory_target and instead size the PGA and SGA explicitly, so the instance no longer depends on MEMORY_TARGET:
###########################################
# Miscellaneous
###########################################
compatible=11.2.0.0.0
diagnostic_dest=/u01/app/oracle
#memory_target=1073741824
pga_aggregate_target=200540160
sga_target=601620480
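After editing both files, the configuration step from the top of the question can simply be rerun:
/etc/init.d/oracle-xe configure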
You may encounter
ORA-00845: MEMORY_TARGET not supported on this system
when starting Oracle DB in an unprivileged container. Try running the container with the --privileged flag, e.g.
docker run --name oracle12 --hostname oracledb --privileged local/oracle12:12.1.0.2