Is there any CLI way to show information about Glassfish JDBC connection pool? - glassfish-3

The only relevant command that I found is:
NAME
    list-jdbc-connection-pools - lists all JDBC connection pools
EXAMPLES
    This example lists the existing JDBC connection pools.
    asadmin> list-jdbc-connection-pools
    sample_derby_pool
    __TimerPool
    Command list-jdbc-connection-pools executed successfully.
What I want is to display information about a particular connection pool, such as:
asadmin desc-jdbc-connection-pool sample_derby_pool
name: sample_derby_pool
databaseName: oracle
portNumber: 1521
serverName: test
user: testUser
...

Try running:
asadmin get * | more
The above command will display all GlassFish attributes. Pipe it to grep to get just the pool properties you are interested in:
asadmin get * | grep TimerPool
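If you already know the pool name, you can also narrow the query to that pool's dotted name instead of grepping the whole dump. A minimal sketch, assuming the usual GlassFish 3 dotted-name layout for JDBC pools:
asadmin get "resources.jdbc-connection-pool.sample_derby_pool.*"
This should print the pool's attributes and its properties (user, serverName, portNumber, and so on) for just that pool.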
Hope this helps.

Related

[DataDirect][ODBC lib] Driver Manager Message file not found. Please check for the value of InstallDir in your odbc.ini in Informatica

I am using Informatica and I am trying to connect to a SingleStore DB.
I am able to log in to the SingleStore DB using the SingleStore ODBC driver, as shown below.
SingleStore version: 8.0.5
SingleStore ODBC driver version: 1.1.1
SingleStore is self-managed.
[abc@rnd-2 ~]$ isql SingleStore-server
+---------------------------------------+
| Connected!                            |
|                                       |
| sql-statement                         |
| help [tablename]                      |
| quit                                  |
|                                       |
+---------------------------------------+
SQL> ^C
However, when I try to connect Informatica to SingleStore using an ODBC connection, I get the following error:
Message Code: WRT_8001
Message: Error connecting to database...
WRT_8001 [Session s_test Username dev DB Error -1
[DataDirect][ODBC lib] Driver Manager Message file not found. Please check for the value of InstallDir in your odbc.ini.
Database driver error...
Function Name : Connect
Database driver error...
Function Name : Connect
Database Error: Failed to connect to database using user [dev] and connection string [SingleStore-server].]
My odbc.ini file is located at /etc/odbc.ini:
odbc.ini
[SingleStore_server]
Description=SingleStore server
Driver=/home/abc/singlestore-connector-odbc-1.1.1-centos7-amd64/libssodbca.so
SERVER=<>
USER=<>
PASSWORD=<>
DATABASE=<>
PORT=<>
I added the path in .bash_profile, but I am still getting the same error:
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export ODBCINI=/etc/odbc.ini
Please let me know how to resolve this error.
Ref link: https://knowledge.informatica.com/s/article/577839?language=en_US
https://knowledge.informatica.com/s/article/Error-connecting-to-database-DataDirect-ODBC-lib-Driver-Manager-Message-file-not-found-Please-check-for-the-value-of-InstallDir-in-your-odbc-ini-while?language=en_US
https://docs.singlestore.com/managed-service/en/developer-resources/connect-with-application-development-tools/connect-with-odbc/the-singlestore-odbc-driver.html
Regarding export ODBCINI=/etc/odbc.ini: I have seen that Informatica always uses its own ODBC drivers. Can you please check whether you have SingleStore drivers available in the /<INFA_HOME>/ODBCX.version/odbc.ini file? If yes, I highly recommend using that one.
If yes, please see if you can test the ODBC driver against your DB with the Informatica-provided tool $INFA_HOME/tools/debugtools/ssgodbc -d dsn -u username -p password [-v]. This will ensure you have no issues with the ODBC setup.
You can find more about this tool in the Informatica documentation.
If not, please make sure you have installed the correct version of the SingleStore ODBC drivers (32-bit or 64-bit) and that the Informatica user has RWX permission on them. Then:
Add the driver path to LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$HOME/server_dir:$ODBCHOME/lib
set ODBCINI=/etc/odbc.ini
grant access: chmod 777 /etc/odbc.ini
see if the ssgodbc tool is able to establish a connection (see the sketch below).
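A minimal sketch of that check, assuming $INFA_HOME and $ODBCHOME point at your Informatica and ODBC driver install directories, and reusing the DSN and user from the question (the password is a placeholder):
export ODBCINI=/etc/odbc.ini
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$HOME/server_dir:$ODBCHOME/lib
# -v prints verbose diagnostics if the connection fails
$INFA_HOME/tools/debugtools/ssgodbc -d SingleStore_server -u dev -p '<password>' -v
If this connects, the driver and /etc/odbc.ini are fine and the problem is on the Informatica side; if it fails, the output usually points at the driver library or the DSN entry.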
Please see the following examples of integrating SingleStore data with Informatica:
JDBC - https://www.cdata.com/kb/tech/singlestore-jdbc-informatica-cloud.rst
ODBC - https://www.cdata.com/kb/tech/singlestore-odbc-informatica.rst

Euca 5.0 Ansible Console Task Failing

Background:
I am only able to get past the Ansible console install/config tasks by adding --region localhost in /usr/share/eucalyptus-ansible/roles/cloud-post/tasks/console.yml wherever it calls tools that accept that argument.
Otherwise each sub-task fails like this: ["euca-describe-images: error: connection error (('Connection aborted.', gaierror(-2, 'Name or service not known')))"]
Running the commands from that playbook directly on the euca server being configured gives the same result unless I specify --region localhost.
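For example, a call like the one in the failing task only succeeds when the flag is added by hand (illustrative command, using the tool named in the error above):
euca-describe-images --region localhost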
Problem:
I'm stuck here: [cloud-post : update console route53 system domain for eucalyptus-cloud authentication]
Error: "euform-update-stack: error (ValidationError): No updates are to be performed.", "stderr_lines": ["euform-update-stack: error (ValidationError): No updates are to be performed."]
All services are running except the ImagingBackend, which is Not Ready.
No instances are running according to euca-describe-instances
Images are available:
IMAGE ami-5be483c81cf8bd65c eucalyptus-console-image-5-0-823/eucalyptus-console-image-5-0-823.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-5be483c81cf8bd65c type eucalyptus-console-image
TAG image ami-5be483c81cf8bd65c version 5.0.823
IMAGE ami-f31092ddb73e29af9 eucalyptus-service-image-v5.0.100/eucalyptus-service-image.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-f31092ddb73e29af9 provides imaging,loadbalancing
TAG image ami-f31092ddb73e29af9 type eucalyptus-service-image
TAG image ami-f31092ddb73e29af9 version 5.0.100
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    node:
      hosts:
        exp-enc-[01:02].lan.com:
EDIT:
Solved. Details are in the comments of the marked answer.
The name error most likely means that DNS for the domain cloud.lan.com is not being correctly delegated to your deployment. To test this, check if the nameserver is found:
dig +short NS cloud.lan.com
You should see "ns1.cloud.lan.com", and you should then be able to use that nameserver to resolve services, e.g.
dig +short ec2.cloud.lan.com @ns1.cloud.lan.com
which should be the IP of the host for the compute service.
The second item is a bug in the ansible playbook that occurs when the stack is already present and up to date. To work around it, you can either update your playbook or delete the stack before running the playbook. Depending on how far the playbook progressed you may have a script to do this:
/usr/local/bin/console-manage-stack -a delete
The related playbook change is https://github.com/AppScale/ats-deploy/pull/36

Percona tool pt-table-checksum does not return results

I have a MariaDB (10.4.14) master-slave configuration and I want to use Percona pt-table-checksum. I have installed Percona Toolkit on the master host, which includes pt-table-checksum 3.2.1.
I have created a user to run pt-table-checksum and granted it the following privileges:
GRANT REPLICATION SLAVE, PROCESS, SUPER, SELECT ON *.* TO `checksum_user`@'%' IDENTIFIED BY 'checksum_password';
GRANT ALL PRIVILEGES ON percona.* TO `checksum_user`@'%';
However, when I try to run the tool, I always get the following error:
pt-table-checksum --replicate=percona.checksums --ignore-databases mysql --no-check-binlog-format h=localhost, u=checksum_user, p=checksum_password
Usage: pt-table-checksum [OPTIONS] [DSN]
Errors in command-line arguments:
* More than one host specified; only one allowed
Instead of using the DSN, I have also tried the options --host, --user and --password, but the results are the same.
What am I doing wrong?
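One thing worth double-checking: the Percona Toolkit DSN is a single comma-separated list of key=value pairs with no whitespace, and a space makes the parser treat the next token as an additional DSN, which would produce exactly this "More than one host specified" error. A sketch of the same invocation with the spaces removed (nothing else changed):
pt-table-checksum --replicate=percona.checksums --ignore-databases mysql --no-check-binlog-format h=localhost,u=checksum_user,p=checksum_password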

Oracle 11g XE installation on docker RHEL 7 image

While installing Oracle 11g XE on Docker I am getting an error.
The following is the output:
/etc/init.d/oracle-xe configure
Oracle Database 11g Express Edition Configuration
This will configure on-boot properties of Oracle Database 11g Express
Edition. The following questions will determine whether the database should
be starting upon system boot, the ports it will use, and the passwords that
will be used for database accounts. Press <Enter> to accept the defaults.
Ctrl-C will abort.
Specify the HTTP port that will be used for Oracle Application Express [8080]:8080
Specify a port that will be used for the database listener [1521]:1521
Specify a password to be used for database accounts. Note that the same
password will be used for SYS and SYSTEM. Oracle recommends the use of
different passwords for each database account. This can be done after
initial configuration:
Confirm the password:
Do you want Oracle Database 11g Express Edition to be started on boot (y/n) [y]:y
Starting Oracle Net Listener...Done
Configuring database...
Database Configuration failed. Look into /u01/app/oracle/product/11.2.0/xe/config/log for details
[root@b7c63c4e1da8 Disk1]# cd /u01/app/oracle/product/11.2.0/xe/config/log
[root@b7c63c4e1da8 log]# ls
CloneRmanRestore.log cloneDBCreation.log postDBCreation.log postScripts.log
[root@b7c63c4e1da8 log]# cat CloneRmanRestore.log
ORA-00845: MEMORY_TARGET not supported on this system
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
declare
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
select TO_CHAR(systimestamp,'YYYYMMDD HH:MI:SS') from dual
*
ERROR at line 1:
ORA-01034: ORACLE not available
Process ID: 0
Session ID: 0 Serial number: 0
One possible solution I found was to mount the temp filesystem with extra space, since it only has about 6 GB in the Docker container, but I am unable to remount it inside Docker.
I got the solution for this:
we have to modify the files init.ora and initXETemp.ora at the path /u01/app/oracle/product/11.2.0/xe/config/scripts
with the following values:
###########################################
# Miscellaneous
###########################################
compatible=11.2.0.0.0
diagnostic_dest=/u01/app/oracle
#memory_target=1073741824
pga_aggregate_target=200540160
sga_target=601620480
You may encounter
ORA-00845: MEMORY_TARGET not supported on this system
when starting Oracle DB in an unprivileged container. Try running the container with the --privileged flag, e.g.
docker run --name oracle12 --hostname oracledb --privileged local/oracle12:12.1.0.2
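If running the container privileged is not an option, ORA-00845 usually means /dev/shm inside the container is smaller than MEMORY_TARGET, so another approach worth trying (not from the answers above, just a common Docker workaround) is to enlarge the container's shared memory at start time:
# same image as the example above; the 2g size is a placeholder, pick a value at least as large as MEMORY_TARGET
docker run --name oracle12 --hostname oracledb --shm-size=2g local/oracle12:12.1.0.2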

~/.ssh/id_rsa.pub not found error while installing capistrano as ansible playbook

I am trying to install https://github.com/roots/bedrock-ansible to get a Bedrock deployment (http://roots.io/wordpress-stack/) running.
When I run "vagrant up", after some time I get this error:
TASK: [capistrano-setup | Setup deploy group] *********************************
skipping: [default]
TASK: [capistrano-setup | Setup deploy user] **********************************
skipping: [default]
TASK: [capistrano-setup | Adding public key to server] ************************
fatal: [default] => could not locate file in lookup: ~/.ssh/id_rsa.pub
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/johannes/site.retry
default : ok=46 changed=16 unreachable=1 failed=0
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
I do not have a clue how I can fix this. Do you have an idea?
It seems the role is trying to find your local public key. It should be in the location in the error message '~/.ssh/id_rsa.pub', but it's not. So either you don't have one, or you keep it in another location.
If you're not familiar with generating SSH keys you probably don't have one. I personally like the GitHub help page for this: https://help.github.com/articles/generating-ssh-keys/
(you only have to perform steps 1 and 2).
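A minimal sketch of those two steps, assuming you want a standard RSA key in the default location (the email is only a label for the key):
ssh-keygen -t rsa -C "you@example.com"
# press Enter to accept the default path and choose a passphrase when prompted
Accepting the default path leaves the public key at ~/.ssh/id_rsa.pub, which is exactly where the role is looking.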
If you do have SSH keys, but in a different location, the capistrano-install role in bedrock uses some variables:
deploy_user: deploy
deploy_keys:
- "~/.ssh/id_rsa.pub"
So you can set (multiple) public key files in the deploy_keys list and they will be added to the deploy_user's authorized keys.
All this is needed because Capistrano will use the deploy user to connect to the remote server later. http://blakesmith.me/2010/02/08/understanding-public-key-private-key-concepts.html
