I am building an R script that needs to connect to MongoDB with authentication and process the data fetched from the database using the rmongodb package. For that I have created a new MongoDB user in version 3.0.4, but authentication fails when connecting to MongoDB from the R script.
The same user authenticates successfully through the mongo shell, and authentication works fine for users created in MongoDB version 2.x.
The following is the code snippet we use in the R script to connect to the Mongo database.
mongo <- mongo.create("127.0.0.1", "", "user", "pass", "db", 0L )
Executing the above snippet produces the following response:
error: Loading required package: rmongodb Authentication failed.
Please suggest an appropriate solution to this authentication failure with the rmongodb package.
rmongodb (as of 1.8.0) uses a legacy MongoDB C driver which doesn't yet have full support for MongoDB 3.0. In particular, it does not support the new SCRAM-SHA-1 default authentication or the optional WiredTiger storage engine.
There's an rmongodb issue on GitHub tracking this: Compatibility with version 3.0 of MongoDB.
Until rmongodb is updated, your options (in order of least to most hassle) include:
use a different driver which does have MongoDB 3.x support (e.g. RMongo 0.1.0 or newer; see the sketch after this list)
use MongoDB 2.6
use MongoDB 3.x but downgrade to the older MONGODB-CR auth (and do not use WiredTiger or any alternative storage engines)
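For the first option, a minimal sketch using the RMongo driver (untested against your setup; the host, database, and credentials are taken from the question, RMongo >= 0.1.0 is assumed, and "myCollection" is a placeholder):
library(RMongo)
# Connect to the "db" database on the local server (default port 27017).
con <- mongoDbConnect("db", "127.0.0.1", 27017)
# Authenticate with the same credentials used in the mongo shell.
dbAuthenticate(con, username = "user", password = "pass")
# Quick sanity check: fetch all documents from a collection, then disconnect.
result <- dbGetQuery(con, "myCollection", "{}")
dbDisconnect(con)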
Having just gone through this myself, I thought I'd add my two cents in case it helps someone.
@Stennie is right on target with the authentication stuff. So if you want to use Mongo 3, the way to get it going is as follows (this is from an Ubuntu install).
1) sudo nano /etc/mongod.conf
2) Comment out the "auth = true" line
3) sudo service mongod restart
4) Log in to the mongo shell (now with no authentication, so everything is open)
5) use admin
6) Execute the following:
var schema = db.system.version.findOne({"_id" : "authSchema"})
schema.currentVersion = 3
db.system.version.save(schema)
(the above 3 commands are from here: https://jira.mongodb.org/browse/SERVER-17459)
7) create your users in the appropriate database (see the sketch at the end of this answer)
8) to make sure the right credentials are set up, type db.system.users.find() and make sure they have the MONGODB-CR credentials
9) quit mongo
10) uncomment the authentication line in /etc/mongod.conf
11) restart mongodb using sudo service mongod restart
should work now! I hope that helps someone...
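For steps 7 and 8, a minimal mongo shell sketch (the database name, user name, password, and role are placeholders for illustration):
use geoLoc
db.createUser({ user: "user", pwd: "pass", roles: [ "readWrite" ] })
// Back in admin, confirm the account was stored with MONGODB-CR credentials:
use admin
db.system.users.find().pretty()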
We need to connect to on-premises Teradata from Azure Databricks. Is that possible at all? If yes, please let me know how.
I was looking for this information as well, and I was recently able to access our Teradata instance from Databricks. Here is how I did it.
Step 1. Check your cloud connectivity.
%sh nc -vz 'jdbcHostname' 'jdbcPort'
- 'jdbcHostName' is your Teradata server.
- 'jdbcPort' is your Teradata server's listening port. By default, Teradata listens on TCP port 1025.
Also check out Databricks' best practices on connecting to another infrastructure.
Step 2. Install Teradata JDBC driver.
Teradata Downloads page provides JDBC drivers by version and archive type. You can also check the Teradata JDBC Driver Supported Platforms page to make sure you pick the right version of the driver.
Databricks offers multiple ways to install a JDBC library JAR for databases whose drivers are not available in Databricks. Please refer to the Databricks Libraries documentation to learn more and pick the one that is right for you.
Once installed, you should see it listed in the Cluster details page under the Libraries tab.
Terajdbc4.jar dbfs:/workspace/libs/terajdbc4.jar
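As a hedged aside, if you have already uploaded the JAR to the driver's local filesystem, one way to stage it on DBFS is from a notebook cell (both paths here are assumptions; adjust to your workspace):
# Copy a locally uploaded driver JAR onto DBFS so the cluster can load it.
dbutils.fs.cp("file:/tmp/terajdbc4.jar", "dbfs:/workspace/libs/terajdbc4.jar")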
Step 3. Connect to Teradata from Databricks.
You can define some variables to programmatically create these connections. Since my instance required LDAP, I added LOGMECH=LDAP to the URL. Without LOGMECH=LDAP it returns a "username or password invalid" error message.
(Replace the italicized placeholders with the values from your environment.)
driver = "com.teradata.jdbc.TeraDriver"
url = "jdbc:teradata://Teradata_database_server/Database=Teradata_database_name,LOGMECH=LDAP"
table = "Teradata_schema.Teradata_tablename_or_viewname"
user = "your_username"
password = "your_password"
Now that the connection variables are specified, you can create a DataFrame. You can also explicitly set a particular schema if you already have one. Please refer to the Spark SQL Guide for more information.
Now, let’s create a DataFrame in Python.
My_remote_table = spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("dbtable", table) \
    .option("user", user) \
    .option("password", password) \
    .load()
Now that the DataFrame is created, it can be queried. For instance, you can select particular columns to display within Databricks.
display(My_remote_table.select("EXAMPLE_COLUMN"))
Step 4. Create a temporary view or a permanent table.
My_remote_table.createOrReplaceTempView("YOUR_TEMP_VIEW_NAME")
or
My_remote_table.write.format("parquet").saveAsTable("MY_PERMANENT_TABLE_NAME")
Steps 3 and 4 can also be combined, as sketched below, if the intention is simply to create a table in Databricks from Teradata. Check out the Databricks documentation SQL Databases Using JDBC for other options.
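A minimal sketch of the combined read-and-save, reusing the connection variables defined above (MY_PERMANENT_TABLE_NAME is a placeholder):
# Read straight from Teradata and persist it as a managed table in one chain.
spark.read.format("jdbc") \
    .option("driver", driver) \
    .option("url", url) \
    .option("dbtable", table) \
    .option("user", user) \
    .option("password", password) \
    .load() \
    .write.format("parquet") \
    .saveAsTable("MY_PERMANENT_TABLE_NAME")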
Here is a link to the write-up I published on this topic.
Accessing Teradata from Databricks for Rapid Experimentation in Data Science and Analytics Projects
If you create a virtual network that can connect to on-prem, then you can deploy your Databricks instance into that VNet. See https://docs.azuredatabricks.net/administration-guide/cloud-configurations/azure/vnet-inject.html.
I assume that there is a Spark connector for Teradata. I haven't used it myself, but I'm sure one exists.
You can't. If you run Azure Databricks, all the data needs to be stored in Azure. But you can fetch the data from Teradata via a REST API and then save it in Azure.
I'm trying to log in using rmongodb and it does not authenticate. Here's my connection string:
myMongoConnection <- mongo.create(host = "<myip>",db = "geoLoc", username = "<myusername>", password = "<mypassword>")
However, if I open a mongo shell on my computer and type:
mongo <myip>/geoLoc -u '<myusername>' -p '<mypassword>'
it connects just fine.
Moreover, if I log into the server and disable authentication by commenting out:
auth = true, and then try:
myMongoConnection <- mongo.create(host = "<myip>", db = "geoLoc")
it also works fine. So this is something to do with the username and password, though I have no idea what, as I know they are "correct" because I can log in with them!
You are likely running a MongoDB server version of 3.0 or above (the 3.x series is current as of writing), which has an updated default authentication mechanism (SCRAM-SHA-1, replacing MONGODB-CR) that is not compatible with older driver versions that do not support it.
The current rmongodb package release (version 1.8.0 as of writing) is based on the legacy C driver implementation, which is not compatible with the new authentication method. As noted in the issues on that repository, the author acknowledges this driver dependency and states that the package would require a rewrite to use the new API that supports the new authentication method.
As of writing, there do not appear to be any moves to make such changes, aside from a new branch that is not presently ready for release.
Your options therefore presently are:
Work without authentication where possible
Downgrade the MongoDB server version to one that supports the old authentication
Look for other driver implementations that support the new authentication.
So "rmongodb" itself cannot currently connect to MongoDB 3.x servers. Either apply one of the other choices, and/or contribute to the respository yourself if you are able to speed it's development into the next version with full authentication suppport.
Other possible driver alternatives are linked or dicussed in the issue linked in this answer.
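As one hedged example of the last option, the mongolite package is built on the newer C driver and can authenticate with SCRAM-SHA-1. This is a sketch only; the host, database, collection, and credentials reuse the placeholders from the question:
library(mongolite)
# The connection string carries the credentials; mongolite negotiates SCRAM-SHA-1.
con <- mongo(collection = "myCollection",
             db = "geoLoc",
             url = "mongodb://<myusername>:<mypassword>@<myip>:27017/geoLoc")
# Quick sanity check: count the documents in the collection.
con$count("{}")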
We have migrated our Oracle database to 12c from 11g.
We have a legacy application running in Java 1.5 and using ojdbc14.jar.
Our application is not able to create a connection to the database; it fails with the error:
java.sql.SQLException: ORA-28040: No matching authentication protocol
I referred to the answer ORA-28040: No matching authentication protocol exception, and tried upgrading my ojdbc14.jar to ojdbc6.jar.
I now get a different error message:
error: OracleCallableStatement is not public in oracle.jdbc.driver; cannot be accessed from outside package
import oracle.jdbc.driver.OracleCallableStatement;
^
error: OracleTypes is not public in oracle.jdbc.driver; cannot be accessed from outside package
cstmt.registerOutParameter(3,oracle.jdbc.driver.OracleTypes.CURSOR);
^
Ant build file:
<javac srcdir="${src}" destdir="${classes}" source="1.5" target="1.5">
<classpath refid="cpath" />
</javac>
Not sure what exactly we should do to get the application working.
I had the same error with 2 different applications recently:
a Java 7 app on Tomcat 7 using ojdbc6.jar with an Oracle 12c database.
a legacy ASP application with an Oracle 12c database.
The second solution mentioned in the same post you referred to worked well for us.
Workaround: Set SQLNET.ALLOWED_LOGON_VERSION=8 in the oracle/network/admin/sqlnet.ora file.
We worked with our DBAs to set the above option on the sqlnet.ora on the database server. This resolved our issue. I hope it helps someone.
I faced the same error and got it resolved without removing ojdbc14.jar.
Step 1: set SQLNET.ALLOWED_LOGON_VERSION=8
Step 2: change
Connection conn = (Connection) DriverManager.getConnection("jdbc:oracle:thin:@server:port:sid", "username", "password");
to
java.sql.Connection conn = DriverManager.getConnection("jdbc:oracle:thin:@server:port:sid", "username", "password");
It should work now!
After migrating from Oracle 11 to Oracle 12, my lib directory had both ojdbc14.jar and ojdbc8.jar. After removing the older ojdbc14.jar, it worked for me.
I had a problem connecting to the DB after migration to Oracle 12c.
The error was: java.sql.SQLException: ORA-28040: No matching authentication protocol.
After setting the parameter SQLNET.ALLOWED_LOGON_VERSION=8 it was solved.
Thank you very much
If you are migrating your application to ojdbc6, one probable reason is that your old classes (compiled against the old ojdbc version) might not be finding the class OracleTypes. The easiest fix is to change the import statement from import oracle.jdbc.driver.OracleTypes; to import oracle.jdbc.OracleTypes; in the classes where you are getting the error, as sketched below.
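A minimal sketch of the change (the class and the stored procedure call are illustrative, not from the question):
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;
// Public API package, instead of the internal oracle.jdbc.driver package:
import oracle.jdbc.OracleTypes;

public class CursorExample {
    // Registers a REF CURSOR out parameter via the public OracleTypes class.
    static CallableStatement prepareProcCall(Connection conn) throws SQLException {
        CallableStatement cstmt = conn.prepareCall("{ call my_pkg.my_proc(?, ?, ?) }");
        cstmt.registerOutParameter(3, OracleTypes.CURSOR);
        return cstmt;
    }
}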
I was getting ORA-28040 in SQL Developer (ver. 1.5.5) for all connections. I set SQLNET.ALLOWED_LOGON_VERSION=8 but the error didn't go away. Then I enabled "Use OCI/Thick driver" under Tools -> Preferences -> Database -> Advanced Parameters. That did the trick.
The main problem is that the 10g JDBC thin client uses the SHA-1 authentication protocol; this protocol is not allowed in 12c, so it gives the error.
In my Oracle installation I could not find the sqlnet.ora file, so I had to create it with the command:
vi $ORACLE_HOME/network/admin/sqlnet.ora
And I added the following:
SQLNET.ALLOWED_LOGON_VERSION=10
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=10
SQLNET.ALLOWED_LOGON_VERSION_SERVER=10
SQLNET.ALLOWED_LOGON_VERSION=8
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
SQLNET.AUTHENTICATION_SERVICES = (NONE)
So I had to stop and start the listener:
lsnrctl stop
lsnrctl start
Finally, I restarted the database.
This should only be a workaround.
We had this error when we moved from 11g to 12c. I was able to get a correct response through tnsping "servername" (this is the first thing to check, from cmd).
After this we realized that the database server was enabled to handle only 64-bit requests (in my case I was using WinSQL, which always checks for 32-bit).
To correct this, ask your admin to enable the server for 32-bit requests as well, or move to SQL Developer, which worked for me.
I'm looking to call BigQuery from RStudio, installed on a Google Compute Engine instance.
I have the bq Python tool installed on the instance, and I was hoping to use its service account and system() to get R to call the bq command-line tool and so fetch the data.
However, I run into authentication problems, where it asks for a browser key. I'm pretty sure there is no need to get the key thanks to the service account, but I don't know how to construct the authentication from within R (it runs on RStudio, so it will have multiple users).
I can get an authentication token like this:
library(RCurl)
library(RJSONIO)
metadata <- getURL('http://metadata/computeMetadata/v1beta1/instance/service-accounts/default/token')
tokendata <- fromJSON(metadata)
tokendata$access_token
But how do I then use this to generate a .bigqueryrc token? It's the lack of this that triggers the authentication attempt.
This works ok:
system('/usr/local/bin/bq')
showing me bq is installed ok.
But when I try something like:
system('/usr/local/bin/bq ls')
I get this:
Welcome to BigQuery! This script will walk you through the process of initializing your .bigqueryrc configuration file.
First, we need to set up your credentials if they do not already exist.
******************************************************************
** No OAuth2 credentials found, beginning authorization process **
******************************************************************
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fbigquery&redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&response_type=code&client_id=XXXXXXXX.apps.googleusercontent.com&access_type=offline
Enter verification code: You have encountered a bug in the BigQuery CLI. Google engineers monitor and answer questions on Stack Overflow, with the tag google-bigquery: http://stackoverflow.com/questions/ask?tags=google-bigquery
etc.
Edit:
I have managed to get bq functioning from RStudio system() commands by skipping the authentication: I log in to the terminal as the user using RStudio, authenticate there by signing in via the browser, then log back into RStudio and call system("bq ls") etc., so this is enough to get me going :)
However, I would still prefer it if bq could be authenticated within RStudio itself, as many users may log in and I'd need to authenticate via the terminal for each of them. The service account documentation, and the fact that I can get an authentication token, hint at this being possible.
For the time being, you need to run 'bq init' from the command line to set up your credentials prior to invoking bq from a script on GCE. However, the next release of bq will include support for GCE service accounts via a new --use_gce_service_account flag.
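Once that release is out, the call from R might look something like this (the flag name comes from the answer above; the path and exact usage are assumptions):
# Hypothetical once the new bq release lands: use the instance's service
# account instead of the browser-based OAuth flow.
out <- system('/usr/local/bin/bq --use_gce_service_account ls', intern = TRUE)
print(out)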
I have a problem when trying to connect to a MySQL database using the Windows ODBC driver. There are plenty of search hits regarding the obvious cause (people using old versions); however, I'm not.
mysqld is on CentOS 6.4 32bit
./usr/libexec/mysqld Ver 5.1.69 for redhat-linux-gnu on i386 (Source distribution)
So I'm at a loss to understand where any pre 4.1.1 protocol is coming from. Any ideas?
I guess that if you ask the right question, it's easier to find the answer.
In this case "my" problem relates to how the passwords are hashed and stored in the database. Legacy passwords were stored with a shorter hash that's now deprecated.
A few important points:
mysql_upgrade cannot and does not upgrade passwords, nor does it warn about it in some versions, see: http://bugs.mysql.com/bug.php?id=65461.
Even if you have mostly the latest server and clients, all it takes is one legacy client somewhere to create a legacy password, and then you'll have trouble with that account no matter what client tries to use it.
Different versions have treated the situation differently so you can be sitting on some legacy passwords in your database and then suddenly, for no apparent reason, some accounts stop working... this is because of how different versions chose to handle the situation.
You cannot upgrade passwords. You must know what they are and you must change them.
EDIT: To be more clear, you must change the password that is stored with the shorter hash using a new client that uses longer hashes. By doing so you will be writing that account's password with the longer hash, at which point nothing should be flagging attempts to access the account any more. If the problem is recurrent, you should be looking for the older clients at your site which are still writing passwords with the deprecated hash length. A sketch follows below.
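A hedged SQL sketch of how to spot and fix such accounts from a modern client on a 5.x server (the account name is a placeholder; legacy hashes are 16 characters long, current ones 41):
-- Accounts whose stored hash is 16 characters still use the legacy scheme.
SELECT User, Host, LENGTH(Password) AS hash_len FROM mysql.user;
-- Re-set the password from a new client so it is stored with the long hash.
SET old_passwords = 0;
SET PASSWORD FOR 'someuser'@'%' = PASSWORD('newpassword');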
In MySQL Workbench 6.08, in Manage Server Connections, Connection tab, Advanced sub-tab, you must check the box 'Use the old authentication protocol.'
Try installing the old version 3.51.30 of the driver: http://dev.mysql.com/downloads/connector/odbc/5.1.html#downloads
It works on my MySQL Ver 5.0.24a-community.
I ran into this while using the ODBC Connector for Windows to connect to a Percona 5.5 server, which has secure_auth disabled.
From what I found, the ODBC connector, unlike MySQL Workbench, does not support an option to authenticate logins which use the old 16-byte hashed passwords. There is a bug report regarding this, but it appears the assignee is/was confused about the feature request (see bug #71234).
I was able to update the mysql login to use the new 41-byte hash using these commands:
set old_passwords=0;
set password=password('yourpasswordhere');
As I mentioned, our server has secure_auth disabled, which appears to cause password() to return old_password() results. Running set old_passwords=0; will enable the password() method to generate the new 41-byte hashes (for the duration of your session).
I had a similar error message when remotely trying to access my MySQL database. Using DirectAdmin, I easily changed the MySQL database password as suggested above. This automatically generated the password using the newer hash method, which solved the remote connection problem instantly.
I found another solution in case anyone hits this (very weird):
1) Install the 5.1 64-bit ODBC driver and verify an ODBC connection by itself works; if you can connect, then you should be able to after doing #2.
2) Click on Linked Servers - Providers, right-click on MSDASQL, and click Properties.
3) Uncheck "Allow inprocess" - which is a good thing to do unless you need to insert TEXT and NTEXT fields.
4) Create your linked server connection, or test the one you have been fighting with - lol
When I had "Allow inprocess" checked I still got the error even though the ODBC system DSN worked fine. I'm assuming that because I had a mixture of 5.2 (with servers that worked fine) and 5.1 (for the servers that didn't), SQL was sharing the processes, since the 5.1 driver does not give that error.
If you can't change your server, perhaps you can change your client: http://bugs.mysql.com/bug.php?id=75425