While installing Oracle 11g XE on Docker I am getting an error.
The following is the output:
/etc/init.d/oracle-xe configure
Oracle Database 11g Express Edition Configuration
This will configure on-boot properties of Oracle Database 11g Express
Edition. The following questions will determine whether the database should
be starting upon system boot, the ports it will use, and the passwords that
will be used for database accounts. Press <Enter> to accept the defaults.
Ctrl-C will abort.
Specify the HTTP port that will be used for Oracle Application Express [8080]:8080
Specify a port that will be used for the database listener [1521]:1521
Specify a password to be used for database accounts. Note that the same
password will be used for SYS and SYSTEM. Oracle recommends the use of
different passwords for each database account. This can be done after
initial configuration:
Confirm the password:
Do you want Oracle Database 11g Express Edition to be started on boot (y/n) [y]:y
Starting Oracle Net Listener...Done
Configuring database...
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details
[root@b7c63c4e1da8 Disk1]# cd /u01/app/oracle/product/11.2.0/xe/config/log
[root@b7c63c4e1da8 log]# ls
CloneRmanRestore.log cloneDBCreation.log postDBCreation.log postScripts.log
[root@b7c63c4e1da8 log]# cat CloneRmanRestore.log
ORA-00845: MEMORY_TARGET not supported on this system
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
declare
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
One possible solution I found was to remount the shared-memory filesystem (/dev/shm) to give it more space, since it only has about 6 GB inside the Docker container, but I am unable to remount it in Docker.
I found the solution for this:
we have to modify the files init.ora and initXETemp.ora at the path /u01/app/oracle/product/11.2.0/xe/config/scripts
with the following values:
###########################################
# Miscellaneous
###########################################
compatible=11.2.0.0.0
diagnostic_dest=/u01/app/oracle
#memory_target=1073741824
pga_aggregate_target=200540160
sga_target=601620480
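For reference, ORA-00845 usually means MEMORY_TARGET was set larger than the shared-memory filesystem (/dev/shm) available to the instance; inside the container you can check how much is available with:
df -h /dev/shm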
You may encounter
ORA-00845: MEMORY_TARGET not supported on this system
when starting Oracle DB in an unprivileged container. Try running the container with the --privileged flag, e.g.
docker run --name oracle12 --hostname oracledb --privileged local/oracle12:12.1.0.2
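As an alternative to --privileged, it may be enough to simply give the container a larger /dev/shm via the --shm-size flag (the size below is only an example):
docker run --shm-size=2g --name oracle12 --hostname oracledb local/oracle12:12.1.0.2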
My goal is to access Azure Synapse Analytics from Azure Databricks. The first thing that came to mind is to use the Spark driver com.databricks.spark.sqldw. But for that, the database user needs to be db_owner in the database, which is not suitable, since users could mess around with Synapse. I just want users to read data from Synapse using their own Active Directory accounts.
My second option, then, is to try using an ODBC driver (or JDBC) to access Synapse as we normally do in local Python scripts. The problem is that our Databricks clusters have no internet access, so we can't just run apt-get-like commands (in order to install the ODBC drivers).
So, any of these questions may help me to solve the problem:
How do I copy a file from an Azure Storage Account Gen2 to the local Databricks cluster file system? I put ODBC Driver 17 for SQL Server (msodbcsql17_17.10.1.1-1_amd64.deb) in a container in the Storage Account and I can see it using dbutils. But can I copy that file to the Databricks cluster filesystem?
Is it possible to use the default spark driver com.databricks.spark.sqldw for accessing Azure Synapse Analytics with SELECT permission only?
For anyone wondering why I'm not using the Synapse Apache Spark pool: it's because I can't run queries like SELECT * FROM A INNER JOIN B.... And of course, the Databricks UI is much better ; )
Thanks for any help.
Generically answering my own question of "how do I access a file from Azure Storage Account Gen2 using the Databricks cluster filesystem": it is quite simple. I just had to mount it using dbutils, and the mount automatically appears under /dbfs/mnt. For instance (Python script):
dbutils.fs.mount(
    source = "wasbs://<container>@<storage>.blob.core.windows.net"
    , mount_point = "/mnt/my_adlsgen2"
    , extra_configs = {"fs.azure.account.key.<storage>.blob.core.windows.net": "account key"})
and the mount will be at:
/dbfs/mnt/my_adlsgen2
Finally, you can go to the above folder and list files or do whatever you want with it.
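For example, you can list the mounted files from a shell cell (the path comes from the mount point above):
%sh
ls -l /dbfs/mnt/my_adlsgen2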
In order to install the ODBC Driver that is in the ADLS Gen2, I had to open a new cell and run:
%sh
echo msodbcsql17 msodbcsql/ACCEPT_EULA boolean true | sudo debconf-set-selections
sudo dpkg -i /dbfs/mnt/my_adlsgen2/msodbcsql17_17.10.1.1-1_amd64.deb
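If dpkg reports missing dependencies, they will not be fetched automatically (the cluster has no internet access), so you may want to inspect the package's dependency list first, e.g.:
%sh
dpkg -I /dbfs/mnt/my_adlsgen2/msodbcsql17_17.10.1.1-1_amd64.deb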
And checking if it installed ok:
%sh
odbcinst -j
unixODBC 2.3.6
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /root/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
and lastly:
%sh cat /etc/odbcinst.ini
[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.1.1
UsageCount=1
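You can also query the registered driver by name (the name is taken from the odbcinst.ini entry above):
%sh
odbcinst -q -d -n "ODBC Driver 17 for SQL Server"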
I'm trying to run Airflow with an Azure SQL database as the backend, using an mssql+pyodbc connection string (all relevant drivers have been installed).
While Airflow is able to connect to the DB and create tables, i.e. airflow initdb runs successfully, I'm facing issues while running airflow scheduler; as a result, the triggered tasks are always stuck in the "running" state.
This is the error I get while running airflow scheduler:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near '1'. (102) (SQLExecDirectW)")
[SQL: SELECT dag.dag_id AS dag_dag_id
FROM dag
WHERE dag.is_paused IS 1 AND dag.dag_id IN (?)]
[parameters: ('example_http_operator',)]
(Background on this error at: http://sqlalche.me/e/13/f405)
I'm using apache-airflow==1.10.11.
If you were able to run Airflow + Azure SQL DB with any configuration, please feel free to jump in.
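For context, the backend is configured in airflow.cfg with a SQLAlchemy URL along these lines (server, database, and credentials are placeholders, not values from the question):
sql_alchemy_conn = mssql+pyodbc://<user>:<password>@<server>.database.windows.net:1433/<database>?driver=ODBC+Driver+17+for+SQL+Server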
I found a document and a talk about configuring Airflow + Azure SQL DB. Maybe it's helpful for you.
Ref: Setting up Airflow on Azure & connecting to MS SQL Server
This post also gives some configuration details: Apache Airflow - Connection issue to MS SQL Server using pymssql + SQLAlchemy
For MSSQL as the backend DB, there is a workaround in Airflow#10713. I am using apache-airflow==1.10.15 and it solved the same error as yours.
The suggested command is attached below, but I edited the file with vi instead of running the sed commands.
RUN sed -i 's/import copy/import copy,sqlalchemy/g' /usr/local/lib/python3.6/site-packages/airflow/models/dag.py \
    && sed -i 's/DagModel.is_paused.is_(True)/DagModel.is_paused == sqlalchemy.sql.expression.true()/g' /usr/local/lib/python3.6/site-packages/airflow/models/dag.py
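After patching, you can confirm the replacement took effect (same path as in the command above):
grep -n "sqlalchemy.sql.expression.true()" /usr/local/lib/python3.6/site-packages/airflow/models/dag.py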
I have a MariaDB (10.4.14) Master-Slave configuration and I want to use Percona pt-table-checksum. I have installed Percona Toolkit in the Master host, which includes pt-table-checksum 3.2.1.
I have created a user to run pt-table-checksum and granted it the necessary privileges:
GRANT REPLICATION SLAVE, PROCESS, SUPER, SELECT ON *.* TO `checksum_user`@'%' IDENTIFIED BY 'checksum_password';
GRANT ALL PRIVILEGES ON percona.* TO `checksum_user`@'%';
However, when I try to run the tool, I always get the following error:
pt-table-checksum --replicate=percona.checksums --ignore-databases mysql --no-check-binlog-format h=localhost, u=checksum_user, p=checksum_password
Usage: pt-table-checksum [OPTIONS] [DSN]
Errors in command-line arguments:
* More than one host specified; only one allowed
Instead of using the DSN, I have also tried the options --host, --user and --password, but the results are the same.
What am I doing wrong?
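A likely cause, offered as a sketch rather than a verified fix: pt-table-checksum treats every space-separated argument as its own DSN, so the spaces after the commas make u=checksum_user and p=checksum_password look like additional hosts. Writing the DSN as a single token should avoid the "More than one host specified" error:
pt-table-checksum --replicate=percona.checksums --ignore-databases mysql --no-check-binlog-format h=localhost,u=checksum_user,p=checksum_password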
I was trying to set up an ODBC connection for Hive. I followed the steps below but it didn't work.
User DSN --> Add --> Hortonworks Hive ODBC Driver --> and I gave the details below:
Host: IP of the primary name node of the cluster
Port:10001
Server Type : Hive Server 2
Authentication Mechanism : User Name --> hadoop
While testing the connection, it throws the following error
Error:
Driver Version: V1.2.13.1018
Running connectivity tests...
Attempting connection
Failed to establish connection
SQLSTATE: HY000[Hortonworks][Hardy] (34) Error from Hive: connect() failed: errno = 10061.
TESTS COMPLETED WITH ERROR.
Could you please tell me if the port I use is correct? If not, what port should I try? Port 10000 doesn't work either.
I am using HDP 2.0 on Windows Server 2012 R2 (single-node cluster). I installed the Hive ODBC Driver from the Microsoft site. I gave my host name, port 10001, and the user as hive. When I installed HDP 2.0 on Windows Server 2012 R2, I gave the Hive user name as hive. I am able to connect successfully.
The answer to your problem: first of all, check on your virtual machine that port 10000 is added, because it is not added by default.
If the port is there, check whether the Hive server is running on your virtual machine.
I hope it will help.
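As a quick check on the Windows host where HiveServer2 runs, you can verify whether anything is actually listening on the usual Hive ports (10000/10001):
netstat -an | findstr "10000 10001"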
Under "Authentication Mechanism", change it to User Name only.
Using impala-shell, I can see the Hive metastore, use any database created by Hive, and query any table created by Hive. When I try to create a table in impala-shell or do an "invalidate metadata", I get
"ERROR: Couldn't open transport for localhost:26000(connect() failed: Connection refused)"
I have the following configuration. This is a multi-node cluster configuration, built by hand, i.e. without using Cloudera Manager:
CentOS 6
CDH4.5
Impala 1.2.1
Hive MySQL Metastore
impalad is running on multiple nodes alongside the data nodes
statestored and catalogd are running on a single node that is NOT an impalad node
In /etc/default/impala I have changed IMPALA_STATE_STORE_HOST to point to the IP of the statestored machine
From /var/log/impala/catalogd.INFO, it seems port 26000 is used by the catalog service, as there is a line in this file: "--catalog_service_port=26000"
Just as /etc/default/impala has to tell impalad where the statestore is (using IMPALA_STATE_STORE_HOST), I am wondering if for 1.2.1 (where catalogd was introduced) there has to be an additional entry for the catalogd location as well - just a guess ....
Any help is appreciated.
Thanks,
You have to start impalad with the option -catalog_service_host=fqdn_to_your_catalog_host.
Unfortunately this is not yet in the default configuration, so you have to add it yourself.
Change /etc/default/impala:
CATALOG_SERVICE_HOST=fqdn_to_your_catalog_host
IMPALA_SERVER_ARGS: add -catalog_service_host=${CATALOG_SERVICE_HOST}
Restart impalad and it should work now :-)
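Putting it together, a minimal sketch of the relevant part of /etc/default/impala (the other flags are just the usual defaults and may differ on your install):
CATALOG_SERVICE_HOST=fqdn_to_your_catalog_host
IMPALA_SERVER_ARGS=" \
    -log_dir=${IMPALA_LOG_DIR} \
    -state_store_host=${IMPALA_STATE_STORE_HOST} \
    -catalog_service_host=${CATALOG_SERVICE_HOST}"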