Hard override entry in tnsnames.ora - oracle11g

I have a set of shell scripts and sqlplus commands.
These connect to Oracle DB_ONE and DB_TWO.
I am upgrading DB_ONE.
For my testing I override the DB_ONE entry in a local tnsnames.ora.
There exists a global tnsnames.ora with all connections in it.
export TNS_ADMIN=/path/to/local/tnsnames:/path/to/global/tnsnames
This way, I am able to connect to DB_ONE on my_new.server and DB_TWO on some.other.server as expected.
However, if I break my_new.server, sqlplus automatically connects to DB_ONE on original.server. So it fails silently and fails over to the connection in the global tnsnames file. I would like this connection to fail completely.
Is there a way to have a 'hard' override such that sqlplus will only try a DB_ONE connection from the local tnsnames.ora, whilst being free to try DB_TWO connections from all tnsnames.ora files?
My local tnsnames.ora
DB_ONE=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=
(PROTOCOL=TCP)
(PORT=1524)
(HOST=my_new.server)
)
)
(CONNECT_DATA=
(SERVICE_NAME=DB_ONE)
)
)
Global tnsnames.ora which I cannot change
DB_ONE=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=
(PROTOCOL=TCP)
(PORT=1524)
(HOST=original.server)
)
)
(CONNECT_DATA=
(SERVICE_NAME=DB_ONE)
)
)
DB_TWO=
(DESCRIPTION=
(ADDRESS_LIST=
(ADDRESS=
(PROTOCOL=TCP)
(PORT=1524)
(HOST=some.other.server)
)
)
(CONNECT_DATA=
(SERVICE_NAME=DB_TWO)
)
)

This is not valid:
export TNS_ADMIN=/path/to/local/tnsnames:/path/to/global/tnsnames
TNS_ADMIN is a single directory path, not a searchable list like $PATH or $LD_LIBRARY_PATH. The documentation mentions that:
If the TNS_ADMIN environment variable is not set, then Oracle Net will check the ORACLE_HOME/network/admin directory.
It doesn't say so, but it also falls back to checking the network/admin directory if the TNS_ADMIN variable does not point to a valid directory, and as your colon-separated list isn't a valid directory path, it will use the tnsnames.ora under $ORACLE_HOME/network/admin.
That means your local 'override' file is never being used, and you were accessing whichever instance DB_ONE points to in the global file. It's not that the TNS entry from the second file is used if the first fails - that mechanism simply doesn't exist. (You can have failover within a file, but that's different.)
Assuming you have connection strings using a TNS alias like user/pwd@DB_ONE and you can't change those for your testing, your only real option is to make a complete copy of the global file and just edit the entry for DB_ONE:
cp /path/to/global/tnsnames/tnsnames.ora /path/to/local/tnsnames/
edit /path/to/local/tnsnames/tnsnames.ora
export TNS_ADMIN=/path/to/local/tnsnames
Or as @ibre5041 mentioned in a comment, as you're on Linux you could skip the TNS_ADMIN environment variable and use ~/.tnsnames.ora for your local copy.
As you mention that won't reflect any changes made to the global file, but presumably once you've finished your testing you can trash your local file or revert to the global TNS_ADMIN anyway.
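The copy-and-edit step is easy to script. A minimal sketch using a scratch directory so it is self-contained - substitute your real local/global paths, and note that the one-line entry and the sed edit are illustrative (GNU sed syntax):

```shell
# Build a local tnsnames.ora whose DB_ONE entry points at the new host.
workdir=$(mktemp -d)
cat > "$workdir/tnsnames.ora" <<'EOF'
DB_ONE=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(PORT=1524)(HOST=original.server)))(CONNECT_DATA=(SERVICE_NAME=DB_ONE)))
EOF
# Repoint DB_ONE at the new host in the local copy only
sed -i 's/HOST=original\.server/HOST=my_new.server/' "$workdir/tnsnames.ora"
# Make sqlplus use only this copy
export TNS_ADMIN="$workdir"
```

Because TNS_ADMIN now names a single valid directory, sqlplus resolves every alias (DB_ONE and DB_TWO alike) from that one file, and a broken my_new.server fails hard instead of silently reaching original.server.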

Related

MariaDB default charset

Documentation for MariaDB says clearly how to set up server defaults like charset and collation. So I opened my MariaDB console and ran:
SET character_set_server = 'utf8mb4';
SET collation_server = 'utf8mb4_slovak_ci';
The console replied OK.
Then I restarted the server, but when I try to create a new database it still gets the same latin2 charset and Swedish collation.
I do it automatically via a Symfony console command:
php bin/console doctrine:database:create
What is wrong with that? I did it like documentation says.
SET character_set_server changes the character set for the current connection.
SET global character_set_server changes the character set globally for each new connection.
However, if you restart your server, the default character sets are read from the configuration file. If the configuration file doesn't specify a character set, the built-in defaults are used. So to make your settings permanent, you have to set the character sets in your configuration file (https://mariadb.com/kb/en/configuring-mariadb-with-option-files/).
First, run SHOW VARIABLES LIKE '%collation%'; to see the current settings.
To change the collation_server setting you have to use the GLOBAL keyword, so your statement becomes SET GLOBAL collation_server = 'utf8mb4_slovak_ci';
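To make the defaults survive a restart, set them in the server option file. The option names are from the MariaDB documentation; the file path varies by distribution (e.g. /etc/mysql/mariadb.conf.d/50-server.cnf on Debian/Ubuntu), so treat it as an example:

```ini
[mysqld]
character-set-server = utf8mb4
collation-server     = utf8mb4_slovak_ci
```

After editing, restart the server and verify with SHOW VARIABLES LIKE '%collation%'; - newly created databases will then default to these values.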

Encrypt sql_alchemy_conn in airflow config file (ansible)

Is there a way to encrypt the airflow config file sql_alchemy_conn string? The password shown in the example is plaintext. What options are there to secure it? Also, if the password has special chars, how must it be escaped in the config file?
Trying to install airflow using airflow role.
# See: https://www.sqlalchemy.org/
sql_alchemy_conn:
value: "postgresql+psycopg2://pgclusteradm@servername:PLAINTEXTPASSWORD@server.postgres.database.azure.com/airflow2"
Way to encrypt password, couldn't find how to encrypt this.
You can provide the database URI through environment variables instead of the config file. This doesn't encrypt it or necessarily make it more secure, but it at least isn't plainly sitting in a permanent file.
In your airflow.cfg you can put a placeholder:
[core]
...
sql_alchemy_conn = override_me
...
Then set AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://... in an environment variable when you bring up Airflow components. This way of setting and overriding configuration options through environment variables is detailed in the docs, but the basic format is AIRFLOW__{SECTION}__{KEY}=<value>.
There are 2 ways of securing this as mentioned in docs:
1) Environment Variable:
You can override the setting in airflow.cfg by setting the following environment variable:
AIRFLOW__CORE__SQL_ALCHEMY_CONN=my_conn_string
This way you can keep the setting in airflow.cfg as empty so no one can view the password.
2) Get string by running command:
You can also derive the connection string at run time by appending _cmd to the key like this:
[core]
sql_alchemy_conn_cmd = bash_command_to_run
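On the special-characters part of the question: the connection string is a URL, so characters like @, :, and / in the password must be percent-encoded. A sketch using Python's urllib for the encoding - the password 'p@ss:w/rd', the user, and the host are all made up for illustration:

```shell
# Percent-encode the password, then build the URI from the encoded form.
ENCODED_PW=$(python3 -c 'import urllib.parse, sys; print(urllib.parse.quote_plus(sys.argv[1]))' 'p@ss:w/rd')
export AIRFLOW__CORE__SQL_ALCHEMY_CONN="postgresql+psycopg2://airflow:${ENCODED_PW}@dbhost/airflow2"
echo "$ENCODED_PW"    # p%40ss%3Aw%2Frd
```

Without the encoding, the extra @ and : would be parsed as URL delimiters and the connection string would be misinterpreted.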

How to point to the airflow unittest.cfg?

Airflow creates a unittest.cfg file in the AIRFLOW_HOME environment variable path.
My question is: how can I point to unittest.cfg in the same way that I point to airflow.cfg via the environment variable AIRFLOW_CONFIG?
The reason why I want to do this is because I don't want to have any config files in the AIRFLOW_HOME directory.
Also, if anyone knows better, could you please explain what the unittest.cfg is for, as there is no documentation I could find on it.
The unittest.cfg test configuration file is the default configuration file used when Airflow is running in test mode.
Test mode can be activated by setting the unit_test_mode option in airflow.cfg, or the AIRFLOW__CORE__UNIT_TEST_MODE environment variable, to True.
The values in the test configuration file overwrite those in airflow.cfg at runtime when test mode is activated.
# Source: https://github.com/apache/airflow/blob/1.10.5/airflow/configuration.py#L558,L561
def get_airflow_test_config(airflow_home):
    if 'AIRFLOW_TEST_CONFIG' not in os.environ:
        return os.path.join(airflow_home, 'unittests.cfg')
    return expand_env_var(os.environ['AIRFLOW_TEST_CONFIG'])
The AIRFLOW_TEST_CONFIG environment variable can be set to the path of your test configuration file.
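So, mirroring how AIRFLOW_CONFIG points at airflow.cfg, you can keep the test configuration out of AIRFLOW_HOME. The path below is made up for the illustration:

```shell
# Point Airflow at a test config outside AIRFLOW_HOME (illustrative path)
export AIRFLOW_TEST_CONFIG=/etc/airflow/unittests.cfg
# Activate test mode so the file is actually consulted
export AIRFLOW__CORE__UNIT_TEST_MODE=True
```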

Symfony 2 pdo exception

I'm using Symfony 2 and have this row in my parameters.ini:
database_driver = pdo_pgsql
When I was creating the database structure with Doctrine everything was good. But if I want to add some Doctrine object to my database (insert a row), I get an exception:
What I have to do with this?
Are you sure you're using pdo_pgsql? Are you running on localhost? It may well be that you are actually using the pdo_mysql driver instead.
However, you should check the following:
php.ini
extension=pdo.so
extension=pdo_mysql.so
or in your case
extension=pdo.so
extension=pdo_pgsql.so
You can check phpinfo() to find out the configured database driver.
In your symfony project you have to check the parameters.ini file in config folder. E.g.
[parameters]
database_driver="pdo_mysql"
database_host="localhost"
Also, watch out for this error:
'stty' is not recognized as an internal or external command,
operable program or batch file.
https://github.com/symfony/symfony/issues/4974
First of all, verify your php.ini file: the extensions php_pdo_pgsql and php_pdo must be enabled. Make sure you apply these changes to the php.ini file that your Symfony project is using; check this on localhost/path_to_your_project/web/config.php. You can find out whether these extensions are enabled by executing the phpinfo() function.
This command is also helpful: php -m. It lists on the console all the PHP modules that are loaded.
Tip: check your Apache error log; there could be something wrong with the loading of your extensions. The location of this file depends on your server configuration.

Path variable not being set with new values

I have the following at the end of my script:
export PATH=/usr/openwin/bin:/opt/plat/AUTOSYS/4.0/autosys/bin:/usr/kerberos/bin::/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/opt/netezzaClient/bin:/xenv/ant/X/1.8.0/bin:/export/opt/jdk/1.6.0_16/bin:$PATH
export JAVA_HOME=/export/opt/jdk/1.6.0_16
echo "END PATH - $PATH"
which prints this.
END PATH - /usr/openwin/bin:/opt/plat/AUTOSYS/4.0/autosys/bin:/usr/kerberos/bin::/usr/local/bin:/bin:/usr/bin:/usr/X11R6/bin:/opt/netezzaClient/bin:/xenv/ant/X/1.8.0/bin:/export/opt/jdk/1.6.0_16/bin:/opt/edtsesn/share/bin:/xenv/cvs/sun4/5.6p4/1.10/bin:/xenv/rationalrose/sun4/5.x/6.0.9242/rose/bin:/opt/netscape/4.70_B2/bin:/opt/SCssh/3.4_C0/bin:/opt/PDolvwm/bin:/usr/kerberos/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin:/usr/ucb:/usr/bin:/usr/local/etc:/bin:/usr/local/bin:/etc:/software/scripts:/usr/5bin:/usr/demo:/usr/openwin/bin:/usr/tran/sparc/lib:/usr/ccs/bin:/opt/sybase/1192/bin:/tmp/wm40824:/opt/edtsdba/bin:/xenv/scripts/bin:/xenv/workshop/sun4/5.8mu4/6.1a/bin:/home/pj03962/1192/bin:/home/pj03962/1192/bin:/xenv/java/X/1.6.0_11/bin:/xenv/cvs/:/xenv/java/X/1.6.0_11/bin:/xenv/cvs/sun4/5.6p4/1.10/bin:/xenv/ant/sun4/5.x/1.6.2/bin:/opt/SCssh/3.7.1_C0/bin:/opt/xemacs/bin:/home/pj03962/125/OCS-12_5/bin:/home/pj03962/125/125/bin:/opt/perforce/bin:/opt/netezzaClient/bin:/opt/netezzaClient/bin
yet
bash-3.00$ env $PATH
env: /usr/kerberos/bin:/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin: No such file or directory
The 'No such file' error comes because a /home/usr/ folder does not exist for my account. But this still does not shed any light on why the other values have not been added to the PATH variable.
I guess you've called your script without sourcing it.
For example, if your script is named "myscript.sh", you may have called "./myscript.sh" or "bash myscript.sh". Your modifications of environment variables inside the script won't leak out of the script; you need to source it (call it with 'source' or '.' first).
eg:
. ./myscript.sh
The changes in myscript.sh will modify your current environment.
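The difference is easy to demonstrate with a throwaway script (DEMO_VAR and the temp file are made up for the illustration):

```shell
# A script that only exports a variable
script=$(mktemp)
echo 'export DEMO_VAR=from_script' > "$script"

bash "$script"                 # runs in a child shell; the export dies with it
echo "${DEMO_VAR:-unset}"      # prints: unset

. "$script"                    # sourced: runs in the *current* shell
echo "$DEMO_VAR"               # prints: from_script
```

Your PATH and JAVA_HOME exports behave the same way: they only survive if the script is sourced.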
For the "env $PATH" part: I think it's a mistake, since you're trying to run a command whose name is the content of the PATH variable, and no such command exists (/usr/kerberos/bin:/bin:... isn't the name of an existing file on your system!).
You should use echo $PATH instead.
