ODBC Data source name not found, and no default driver specified - odbc

(Environment is Linux CentOS 7)
I'm fiddling with the odbc.ini file and testing the connection each time I make a change:
isql -v gdHive
[IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
[ISQL]ERROR: Could not SQLConnect
Within the odbc.ini file I've been trying different variations of the Description name-value pair. This was originally set to:
# Description: DSN Description.
Description=Hortonworks Hive ODBC Driver (64-bit) DSN
I tried changing this to
Description=gdHive
This did not change anything with respect to the error I pasted above.
Here's the whole config; it's my first time setting this up, so I'm not sure what I've missed:
# HS2 service discovery with ZooKeeper (ServiceDiscoveryMode=1).
ZKNamespace=/hive/hiveserver2
# Set to 1 if you are connecting to Hive Server 1. Set to 2 if you are connecting to Hive Server 2.
HiveServerType=2
# The authentication mechanism to use for the connection.
# Set to 0 for No Authentication
# Set to 1 for Kerberos
# Set to 2 for User Name
# Set to 3 for User Name and Password
# Note only No Authentication is supported when connecting to Hive Server 1.
AuthMech=1
# The Thrift transport to use for the connection.
# Set to 0 for Binary
# Set to 1 for SASL
# Set to 2 for HTTP
# Note for Hive Server 1 only Binary can be used.
ThriftTransport=1
# When this option is enabled (1), the driver does not transform the queries emitted by an
# application, so the native query is used.
# When this option is disabled (0), the driver transforms the queries emitted by an application and
# converts them into an equivalent form in HiveQL.
UseNativeQuery=0
# Set the UID with the user name to use to access Hive when using AuthMech 2 to 8.
UID=
# The following settings are used when using Kerberos authentication (AuthMech 1 and 10).
# The fully qualified host name part of the Hive Server 2 Kerberos service principal.
# For example, if the service principal name of your Hive Server 2 is:
# hive/myhs2.mydomain.com@EXAMPLE.COM
# Then set KrbHostFQDN to myhs2.mydomain.com
KrbHostFQDN=hive.hadoop.p3.int.example.com
# The service name part of the Hive Server 2 Kerberos service principal.
# For example, if the service principal name of your Hive Server 2 is:
# hive/myhs2.mydomain.com@EXAMPLE.COM
# Then set KrbServiceName to hive
KrbServiceName=hive
# The realm part of the Hive Server 2 Kerberos service principal.
# For example, if the service principal name of your Hive Server 2 is:
# hive/myhs2.mydomain.com@EXAMPLE.COM
# Then set KrbRealm to EXAMPLE.COM
KrbRealm=HADOOP.PROD.INT.EXAMPLE.COM
# Set to 1 to enable SSL. Set to 0 to disable.
SSL=0
# Set to 1 to enable two-way SSL. Set to 0 to disable. You must enable SSL in order to
# use two-way SSL.
TwoWaySSL=0
# The file containing the client certificate in PEM format. This is required when using two-way SSL.
ClientCert=
# The client private key. This is used for two-way SSL authentication.
ClientPrivateKey=
# The password for the client private key. Password is only required for password protected
# client private key.
ClientPrivateKeyPassword=
The error message only says "Data source name not found, and no default driver specified". I'm not sure how to set this data source name or choose a default driver.

The "Description" keyword's value is entirely cosmetic in an ODBC DSN. It has no functional use. You can set it to any arbitrary string.
ODBC will be very hard to learn by trial-and-error shots in the dark. I suggest you take a look at the official ODBC specification. You might also look at some docs for the Driver Manager in your environment, such as this for iODBC, or this for unixODBC.
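For orientation, here is a sketch (not your exact setup) of how unixODBC resolves things: the name you pass to isql must match a section header in odbc.ini, and that section's Driver= key must name a driver registered in odbcinst.ini (or point at the driver .so directly). The driver name and file path below are illustrative; your existing key/value pairs go under the DSN header:
# /etc/odbcinst.ini -- registers the driver (path is illustrative)
[Hortonworks Hive ODBC Driver 64-bit]
Driver=/usr/lib/hive/lib/native/Linux-amd64-64/libhortonworkshiveodbc64.so
# /etc/odbc.ini -- the DSN; the section header must match the name passed to isql
[gdHive]
Driver=Hortonworks Hive ODBC Driver 64-bit
Description=Hortonworks Hive ODBC Driver (64-bit) DSN
HiveServerType=2
# ... the rest of your key/value pairs ...
A few unixODBC commands help confirm what the Driver Manager actually sees:
odbcinst -j     # show which odbc.ini / odbcinst.ini files are in use
odbcinst -q -d  # list registered drivers
odbcinst -q -s  # list configured DSNs
isql -v gdHive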

Related

RegisterRext.exe errors when trying to configure R with SQL Server 2022

I have been following the steps at this site to configure R 4.2.2 to work with a named instance of SQL Server 2022:
https://learn.microsoft.com/en-us/sql/machine-learning/install/sql-machine-learning-services-windows-install-sql-2022?view=sql-server-ver16&viewFallbackFrom=sql-server-ver16%27
I installed RTools, R, Microsoft R client, and installed the R packages specified on the link above. When I try to configure RegisterRext.exe with this command:
.\RegisterRext.exe /configure /rhome:"%ProgramFiles%\R\R-4.2.2" /instance:"SQL2022"
I get this error:
Wrong or missing command
Error: Wrong or missing command.
Usage: RegisterRExt.exe <command> [/instance:name] [/user:username] [/password:*|password] [/database:databasename].
The commands are :
/install - copies the launcher binaries to the right location for a SQL server instance.
/uninstall - removes the launcher binaries from the SQL server instance.
/installpkgmgmt - installs package management support for given SQL server instance and database also if specified.
/uninstallpkgmgmt - uninstalls package management support from given SQL server instance and database also if specified.
/installrts - installs real time scoring support for given SQL server instance and database.
/uninstallrts - uninstalls real time scoring support for given SQL server instance and database.
Arguments are :
/instance:value is an optional parameter, the <value> specified will be used as the instance name else the commands are performed on the default instance.
/user:username is an optional parameter, the <username> specified will be used to connect via the SQL authentication mode. Note that once this parameter is specified, /password:<value> is a required parameter.
/password:*|<password> is a required parameter once /user parameter is provided. Use /password:* to type the password when asked to by the tool instead of providing in clear as a parameter. Note that providing the password without the user is ignored.
/appcontainer is an optional parameter, if specified indicates that appcontainer feature is enabled. Not specified by default.
/flavor is an optional parameter, if specified indicates what kind of installation flavor it is. By default, its value is sql
Windows Authentication is used by default when no user and password is supplied
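For reference, a sketch of an invocation matching the usage output above (values illustrative; note that /configure is not among the commands this particular RegisterRExt.exe lists):
.\RegisterRext.exe /install /instance:"SQL2022"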
How do you configure the R runtime with SQL Server?

ODBC 18 vs ODBC 17 in Windows Data Source Manager

I have a Microsoft Access front end connecting to a SQL Server database for the backend. I have been using this setup for the last 4 years, and I have recently run into issues with new associates not being able to use the tool because our company is retiring ODBC Driver 17 from our internal systems. I don't understand what difference between ODBC Driver 17 and 18 would cause version 18 to fail.
How the driver is used:
In the ODBC Data Source Administrator, a manual link to our database is created. The associate enters a specific name for the link, "Our_link", and for the driver name it states "ODBC Driver 17 for SQL Server".
Then inside of our access front end we link to that driver like so:
Const ConStrSQL As String = "DRIVER={ODBC Driver 17 for SQL Server};Server=OurServer;Database=Our_DB;UID=User();Trusted_Connection=Yes;"
The issue I am having is that when I try to create the ODBC connection in the Data Source Administrator using ODBC Driver 18, I get an error that states:
"Connection Failed: The certificate chain was issued by an authority that is not trusted"
Not sure if this extra information would help but I also see the following:
SQLState: 08001
SQL Server Error -2146893019
Client unable to establish connection
Is this something I need to reach out to our database admin group and ask if they installed driver 18 on the server side?
I'm guessing it has to do with the changes to encryption behavior in version 18, specifically that encryption is required by default. The recommended fix is to install a trusted certificate on your server [1], but if you don't want to deal with the DB admins you may still be able to connect by specifying No (or Optional) for Encrypt in your connection string. [2]
There is a chance that won't work if the server is set to Force Encryption, but it sounds like the change is all on the client end. Ideally you would want encryption working all the time, so if you are using a self-signed certificate, add the public key from the SQL Server to the trusted certificates on the client machines.
[1]: https://techcommunity.microsoft.com/t5/sql-server-blog/odbc-driver-18-0-for-sql-server-released/ba-p/3169228
[2]: https://learn.microsoft.com/en-us/sql/connect/odbc/dsn-connection-string-attribute?view=sql-server-ver16#encrypt
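As a sketch of the Encrypt approach mentioned above, the keyword from [2] can go directly in the Access connection string (server and database names taken from the question; whether relaxing encryption is acceptable is a trade-off to weigh):
Const ConStrSQL As String = "DRIVER={ODBC Driver 18 for SQL Server};Server=OurServer;Database=Our_DB;Trusted_Connection=Yes;Encrypt=No;"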
The 'fix' that was found with the help of one of my database admins is as follows:
In the data source manager there is an option labeled "Trust Server Certificate".
Once that option is selected I was able to complete the rest of the connection setup in the data source manager. One thing to note: I was receiving the previous error when trying to change the DEFAULT DATABASE option. The "Trust Server Certificate" checkbox is on the screen after that, so I had to skip choosing my default database, check the box, then go back and select my default database for everything to work.
I haven't completed all my testing in Access to make sure everything works 100%, but my quick testing is very promising.
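For completeness, the connection-string equivalent of that checkbox, for connections made without a DSN, is the TrustServerCertificate keyword (a sketch, with server and database names as in the question):
Const ConStrSQL As String = "DRIVER={ODBC Driver 18 for SQL Server};Server=OurServer;Database=Our_DB;Trusted_Connection=Yes;TrustServerCertificate=Yes;"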

Does Spring MVC/Rest have its own default connection pooling?

I have developed a Spring REST application. I have not added code for any specific connection pooling. I was considering HikariCP, BoneCP, or c3p0.
Someone told me that Spring MVC/Rest has its own internal connection pooling so even if I don't implement any of my own, there would not be any issues.
I tried searching the internet but still have some doubts. Please clarify.
No, Spring does not have its own default connection pooling. It relies on the web server's (e.g. Tomcat's) thread-per-request handling and makes heavy use of ThreadLocal. For database operations you can use any pool you wish; Spring Boot configures a pool by default (auto-detected from the classpath), and adding other configuration overrides that choice.
Tomcat settings in Spring Boot:
server.tomcat.accept-count= # Maximum queue length for incoming connection requests when all possible request processing threads are in use.
server.tomcat.accesslog.buffered=true # Buffer output such that it is only flushed periodically.
server.tomcat.accesslog.directory=logs # Directory in which log files are created. Can be relative to the tomcat base dir or absolute.
server.tomcat.accesslog.enabled=false # Enable access log.
server.tomcat.accesslog.file-date-format=.yyyy-MM-dd # Date format to place in log file name.
server.tomcat.accesslog.pattern=common # Format pattern for access logs.
server.tomcat.accesslog.prefix=access_log # Log file name prefix.
server.tomcat.accesslog.rename-on-rotate=false # Defer inclusion of the date stamp in the file name until rotate time.
server.tomcat.accesslog.request-attributes-enabled=false # Set request attributes for IP address, Hostname, protocol and port used for the request.
server.tomcat.accesslog.rotate=true # Enable access log rotation.
server.tomcat.accesslog.suffix=.log # Log file name suffix.
server.tomcat.additional-tld-skip-patterns= # Comma-separated list of additional patterns that match jars to ignore for TLD scanning.
server.tomcat.background-processor-delay=30 # Delay in seconds between the invocation of backgroundProcess methods.
server.tomcat.basedir= # Tomcat base directory. If not specified a temporary directory will be used.
server.tomcat.internal-proxies=10\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|\\
192\\.168\\.\\d{1,3}\\.\\d{1,3}|\\
169\\.254\\.\\d{1,3}\\.\\d{1,3}|\\
127\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|\\
172\\.1[6-9]{1}\\.\\d{1,3}\\.\\d{1,3}|\\
172\\.2[0-9]{1}\\.\\d{1,3}\\.\\d{1,3}|\\
172\\.3[0-1]{1}\\.\\d{1,3}\\.\\d{1,3} # regular expression matching trusted IP addresses.
server.tomcat.max-connections= # Maximum number of connections that the server will accept and process at any given time.
server.tomcat.max-http-post-size=0 # Maximum size in bytes of the HTTP post content.
server.tomcat.max-threads=0 # Maximum amount of worker threads.
server.tomcat.min-spare-threads=0 # Minimum amount of worker threads.
server.tomcat.port-header=X-Forwarded-Port # Name of the HTTP header used to override the original port value.
server.tomcat.protocol-header= # Header that holds the incoming protocol, usually named "X-Forwarded-Proto".
server.tomcat.protocol-header-https-value=https # Value of the protocol header that indicates that the incoming request uses SSL.
server.tomcat.redirect-context-root= # Whether requests to the context root should be redirected by appending a / to the path.
For the DataSource:
# DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.continue-on-error=false # Do not stop if an error occurs while initializing the database.
spring.datasource.data= # Data (DML) script resource references.
spring.datasource.data-username= # User of the database to execute DML scripts (if different).
spring.datasource.data-password= # Password of the database to execute DML scripts (if different).
spring.datasource.dbcp2.*= # Commons DBCP2 specific settings
spring.datasource.driver-class-name= # Fully qualified name of the JDBC driver. Auto-detected based on the URL by default.
spring.datasource.generate-unique-name=false # Generate a random datasource name.
spring.datasource.hikari.*= # Hikari specific settings
spring.datasource.initialize=true # Populate the database using 'data.sql'.
spring.datasource.jmx-enabled=false # Enable JMX support (if provided by the underlying pool).
spring.datasource.jndi-name= # JNDI location of the datasource. Class, url, username & password are ignored when set.
spring.datasource.name=testdb # Name of the datasource.
spring.datasource.password= # Login password of the database.
spring.datasource.platform=all # Platform to use in the schema resource (schema-${platform}.sql).
spring.datasource.schema= # Schema (DDL) script resource references.
spring.datasource.schema-username= # User of the database to execute DDL scripts (if different).
spring.datasource.schema-password= # Password of the database to execute DDL scripts (if different).
spring.datasource.separator=; # Statement separator in SQL initialization scripts.
spring.datasource.sql-script-encoding= # SQL scripts encoding.
spring.datasource.tomcat.*= # Tomcat datasource specific settings
spring.datasource.type= # Fully qualified name of the connection pool implementation to use. By default, it is auto-detected from the classpath.
spring.datasource.url= # JDBC url of the database.
spring.datasource.username= # Login user of the database.
spring.datasource.xa.data-source-class-name= # XA datasource fully qualified name.
spring.datasource.xa.properties= # Properties to pass to the XA data source.
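As a concrete illustration of the DataSource keys above (values are made up; the keys come from the list, including the pool-specific hikari.* settings), explicitly choosing and sizing a pool in application.properties could look like:
spring.datasource.url=jdbc:mysql://localhost:3306/mydb
spring.datasource.username=myuser
spring.datasource.password=secret
# pick the pool implementation instead of relying on classpath auto-detection
spring.datasource.type=com.zaxxer.hikari.HikariDataSource
spring.datasource.hikari.maximum-pool-size=10
spring.datasource.hikari.minimum-idle=2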

Can't connect to Alfresco repo via AOS

You are my last hope... I have been looking for answers for a long time.
I am trying to connect to the Alfresco repo via AOS. I click "Map network drive..." and fill in the text fields. The strange part of my story is that I can connect locally, but I can't remotely. I have turned off my VM server's firewall, just in case, but nothing.
I have edited alfresco-global.properties, to which I have appended the following:
###############################
## Common Alfresco Properties #
###############################
dir.root=C:/ALFRES~1/alf_data
alfresco.context=alfresco
alfresco.host=127.0.0.1
alfresco.port=8080
alfresco.protocol=https
share.context=share
share.host=127.0.0.1
share.port=8080
share.protocol=https
### database connection properties ###
db.driver=org.postgresql.Driver
db.username=alfresco
db.password=root
db.name=alfresco
db.url=jdbc:postgresql://localhost:5432/${db.name}
# Note: your database must also be able to accept at least this many connections. Please see your database documentation for instructions on how to configure this.
db.pool.max=275
db.pool.validate.query=SELECT 1
# The server mode. Set value here
# UNKNOWN | TEST | BACKUP | PRODUCTION
system.serverMode=UNKNOWN
### FTP Server Configuration ###
ftp.port=21
ftp.enabled=true
### RMI registry port for JMX ###
alfresco.rmi.services.port=50500
avm.rmi.service.port=0
avmsync.rmi.service.port=0
attribute.rmi.service.port=0
authentication.rmi.service.port=0
repo.rmi.service.port=0
action.rmi.service.port=0
deployment.rmi.service.port=0
### External executable locations ###
ooo.exe=C:/ALFRES~1/LIBREO~1/App/libreoffice/program/soffice.exe
ooo.enabled=true
ooo.port=8100
img.root=C:\\alfresco-community\\imagemagick
img.coders=${img.root}\\modules\\coders
img.config=${img.root}
img.gslib=${img.root}\\lib
img.exe=${img.root}\\convert.exe
jodconverter.enabled=false
jodconverter.officeHome=C:/ALFRES~1/LIBREO~1/App/libreoffice
jodconverter.portNumbers=8100
### Initial admin password ###
alfresco_user_store.adminpassword=329153f560eb329c0e1deea55e88a1e9
### E-mail site invitation setting ###
notification.email.siteinvite=false
### License location ###
dir.license.external=C:/ALFRES~1
### Solr indexing ###
index.subsystem.name=solr
dir.keystore=${dir.root}/keystore
solr.host=localhost
solr.port.ssl=8443
##solr.port.ssl=443
### Allow extended ResultSet processing
security.anyDenyDenies=false
### Smart Folders Config Properties ###
smart.folders.enabled=false
### Remote JMX (Default: disabled) ###
alfresco.jmx.connector.enabled=false
### CIFS Server Configuration ###
# The tcpipSMB and netBIOSSMB beans have a platforms property that allow their configuration to be targeted to Alfresco servers running on specific platforms. The property is formatted as a comma-separated list of platform IDs. Valid platform IDs are windows,linux,solaris, macosx and aix.
cifs.platform=linux,solaris,macosx,windows
cifs.enabled= true
cifs.serverName=${localname}
cifs.broadcast=192.168.0.255
#An empty value indicates bind to all available network adapter
cifs.bindto=192.168.0.85
cifs.ipv6.enabled=true
cifs.hostannounce=true
cifs.pseudoFiles.enabled=false
#controls whether URL shortcuts or desktop actions are displayed on CIFS.
cifs.pseudoFiles.explorerURL.enabled=false
#Is the URL shortcut for alfresco explorer shown ?
#cifs.pseudoFiles.explorerURL.fileName=__Alfresco.url
#Name of CIFS URL for alfresco explorer
#cifs.pseudoFiles.shareURL.enabled=false
# Enable the use of asynchronous sockets/NIO code
cifs.disableNIO=false
# Disable the use of JNI code. Only currently affects Windows
cifs.disableNativeCode=false
# Session timeout, in seconds. Defaults to 15 minutes i.e. 900 seconds, to match the default Windows client setting. Maximum is 3600.
# If no I/O is received within that time the session is closed by the server
cifs.sessionTimeout=3600
cifs.WINS.autoDetectEnabled=true
cifs.sessionDebug=false
# Can be mapped to non-privileged ports, then use firewall rules to forward requests from the standard ports
cifs.tcpipSMB.port=1445
cifs.netBIOSSMB.sessionPort=1139
cifs.netBIOSSMB.namePort=1137
cifs.netBIOSSMB.datagramPort=1138
### Authentication Chain ###
authentication.chain=ldap1:ldap,alfrescoNtlm1:alfrescoNtlm
ldap.authentication.java.naming.read.timeout=15000
### Sync Settings ###
synchronization.synchronizeChangesOnly=true
synchronization.syncOnStartup=true
synchronization.syncWhenMissingPeopleLogIn=true
synchronization.import.cron=0 0 * * * ?
### SharePoint settings ###
vti.server.port=7070
vti.server.external.host=shp.parkmill.splatcooking.net
vti.server.external.port=443
vti.server.external.protocol=https
vti.alfresco.alfrescoHostWithPort=https://alfresco.parkmill.splatcooking.net:443
vti.share.shareHostWithPort=https://alfresco.parkmill.splatcooking.net:443
vti.share.shareContext=/share
### replication ###
replication.enabled=true
### orphan removal ###
system.content.orphanProtectDays=1
alfresco.authentication.authenticationCIFS=true
Also, I can connect via browser, but no luck with network drive mapping. What is causing this problem and how can I fix it?
Thanks for your precious time.
I am waiting for your answers.
I believe your problem is the alfresco.host setting. You can connect locally, because it's looking at 127.0.0.1 (localhost). Your remote machine is trying to use 127.0.0.1 (which is local to that machine) and failing.
To fix this, update the alfresco.host and share.host properties to the IP or DNS name for the server on which Alfresco is installed.
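For example (host name illustrative; use the server's real DNS name or IP):
alfresco.host=alfresco.mycompany.example.com
share.host=alfresco.mycompany.example.com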

boxfuse dev db not provisioned correctly

I'm just starting with boxfuse and can't seem to find a way to get my dev database to be provisioned.
In my boxfuse.yml I have (for the database section):
database:
  # the name of your JDBC driver
  driverClass: com.mysql.jdbc.Driver
  # the username
  user: root
  # the password
  password: <password>
  # the JDBC URL
  url: jdbc:mysql://10.0.0.84:3306/dmsdb
  # any properties specific to your JDBC driver:
  properties:
    charSet: UTF-8
    hibernate.dialect: org.hibernate.dialect.MySQLInnoDBDialect
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness
  validationQuery: "/* MyApplication Health Check */ SELECT 1"
  # the minimum number of connections to keep open
  minSize: 8
  # the maximum number of connections to keep open
  maxSize: 32
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: false
If I try running it (boxfuse run), my application doesn't work at all.
boxfuse info produces the following:
Boxfuse client v.1.18.7.938
Copyright 2016 Boxfuse GmbH. All rights reserved.
Account: mlr11 (mlr11)
Info about mlr11/dms-service in the dev environment:
App Type : Single Instance with Zero Downtime updates
App URL : http://127.0.0.1:8082
DB Type : MySQL database
DB URL : jdbc:mysql://localhost:3306/boxfuse-dev-db
DB Host : localhost
DB Port : 3306
DB Database : boxfuse-dev-db
DB User : boxfuse-dev-db
DB Password : boxfuse-dev-db
DB Status : available
Which is very different from what I was expecting. The URL, database, user, and password do not match my boxfuse.yml file.
What am I missing? I know it must be something simple. I did all kinds of searches and read the docs a few times, but I can't seem to find what's wrong. Any pointers will be appreciated.
From the config file you posted I am assuming this is a Dropwizard app.
Since your Boxfuse app was configured to use a MySQL database, Boxfuse automatically provisions a database in each environment when you first deploy your application there. In your case you can see the connection info for that database in the dev environment in the output you posted in your question.
Boxfuse exposes these values (db url, user, password, ...) as environment variables (https://cloudcaptain.sh/docs/databases#envvars) and automatically configures your framework (Dropwizard I assume) to use those instead of the ones included in your config file. It will do so by passing -Ddw.database.url=$BOXFUSE_DATABASE_URL -Ddw.database.user=$BOXFUSE_DATABASE_USER -Ddw.database.password=$BOXFUSE_DATABASE_PASSWORD as arguments to the JVM.
Also double-check in the VirtualBox GUI that your VirtualBox installation is fully functional and able to start VMs, and that both the Boxfuse Dev VM and the instance of your application start properly.
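In other words, the effective launch is roughly equivalent to something like the following (a sketch of what Boxfuse does for you; the jar and config file names are illustrative):
java -Ddw.database.url=$BOXFUSE_DATABASE_URL \
     -Ddw.database.user=$BOXFUSE_DATABASE_USER \
     -Ddw.database.password=$BOXFUSE_DATABASE_PASSWORD \
     -jar dms-service.jar server config.yml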
