Determine where the driver information is stored for odbc - r

I got my connection from R to a MS SQL Server database working on a Mac. I was able to get the right driver name for the connection string by issuing the odbcListDrivers() command. However, I don't know whether that driver was listed before I started installing things like FreeTDS and brew install --no-sandbox msodbcsql17 mssql-tools.
What I'd like to know is: with odbcListDrivers(), how do I find out which file it pulls from (an .ini file, I'm thinking)? I struggled because the configuration files I set up for FreeTDS, and a separate odbc.ini file, don't seem to affect what R reads, which made troubleshooting difficult.
This matters in case I need to configure more drivers or DSN (?) entries.
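On macOS with unixODBC (which is what Homebrew-based ODBC setups typically use), odbcListDrivers() ultimately reports what the driver manager finds in odbcinst.ini; the command `odbcinst -j` prints which config files are in effect, and the ODBCSYSINI / ODBCINSTINI environment variables can redirect them. As a sketch of the file format involved (driver names and paths here are hypothetical), the entries are plain ini sections that can be listed with stock Python:

```python
import configparser

# A minimal unixODBC-style odbcinst.ini (hypothetical paths, for illustration).
odbcinst_ini = """
[ODBC Driver 17 for SQL Server]
Description = Microsoft ODBC Driver 17 for SQL Server
Driver      = /usr/local/lib/libmsodbcsql.17.dylib

[FreeTDS]
Description = FreeTDS driver
Driver      = /usr/local/lib/libtdsodbc.so
"""

parser = configparser.ConfigParser()
parser.read_string(odbcinst_ini)

# Each section name is a driver name that odbcListDrivers() would report.
drivers = {name: parser[name]["Driver"] for name in parser.sections()}
print(drivers)
```

Comparing the section names in the file that `odbcinst -j` reports against the odbcListDrivers() output is a quick way to confirm which file R is actually reading.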

Related

How to add odbc driver to aws glue python shell

I want to use pyodbc in an AWS Glue Python shell job, but it requires an ODBC driver. Currently I get an error like "Can't open lib 'ODBC Driver 17 for SQL Server' : file not found (0) (SQLDriverConnect)"
Is there any way to install an ODBC driver into Glue?
I wanted to do the same, but there seems to be no straightforward way. I guess you could do it by bundling a driver into a self-built Python wheel, or by downloading a driver at run time, etc.
I can offer a simpler alternative though:
pymssql does exactly this for you. It's a Python package that bundles the FreeTDS library, so a plain pip install pymssql gets you started. I've tested it successfully in a Glue Python shell job. You'll just need to add the package to the --additional-python-modules parameter of your job so that it becomes available. Keep in mind that you might still need to create a Glue Connection and add it to your job: even though you won't use the connection directly, you need it for network connectivity from your job to your DB server.
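A minimal sketch of the pymssql route; the server name, credentials, and pinned version are hypothetical placeholders, and the connection function is only illustrative of the DB-API calls involved:

```python
def glue_default_args(modules):
    """Build the job argument that makes extra pip packages available
    to a Glue Python shell job."""
    return {"--additional-python-modules": ",".join(modules)}

args = glue_default_args(["pymssql==2.2.11"])  # version pinned for illustration
print(args)

def fetch_one(server, user, password, database):
    """Run a trivial query against SQL Server via pymssql (FreeTDS bundled)."""
    import pymssql  # imported lazily; available once the module is installed
    conn = pymssql.connect(server=server, user=user,
                           password=password, database=database)
    cur = conn.cursor()
    cur.execute("SELECT 1")
    row = cur.fetchone()
    conn.close()
    return row
```

The args dict goes into the job's DefaultArguments; fetch_one would only succeed from inside a job whose Glue Connection provides network access to the DB server.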

fatal: no pg_hba.conf entry for host when trying to connect to a Redshift database from a new Mac (Catalina)

I recently got a new system where I have been setting up and installing R and RStudio using Homebrew. However, when I try to connect to my Redshift database from RStudio I keep running into a missing pg_hba.conf entry error. I tried locating this file and came across the following:
(base) ➜ workspace locate pg_hba.conf
/usr/local/Cellar/postgresql/13.2_1/share/postgresql/pg_hba.conf.sample
/usr/local/share/postgresql/pg_hba.conf.sample
I'm not sure if these are the right ones. I installed RPostgres and RPostgreSQL from R and tried brew install for postgres; none of these solved the issue, and I get the same error. I also tried setting SSL = "require" in the connection statement, but it won't let me manually turn on SSL. I believe some file or path is missing, but I haven't come across any solution that helps me figure out how to fix it.
You need to add your Mac's (Catalina) IP address to the pg_hba.conf file on the machine hosting the database. Alternatively, allow all connections by adding the line below:
# IPv4 local connections:
host all all 0.0.0.0/0 trust
Also, you need to change listen_addresses = 'localhost' to listen_addresses = '*' in your postgresql.conf file.
After that, restart the database service and hopefully it will work fine.
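If the target really is a managed Redshift cluster (where you can't edit pg_hba.conf on the server), the client-side half of this is requiring SSL in the connection string. A minimal sketch of a libpq-style connection string, with a hypothetical cluster endpoint:

```python
# Build a libpq-style keyword/value connection string with SSL required.
# Host, database, and user are hypothetical placeholders.
params = {
    "host": "examplecluster.abc123xyz.us-east-1.redshift.amazonaws.com",
    "port": 5439,          # Redshift's default port
    "dbname": "dev",
    "user": "awsuser",
    "sslmode": "require",  # force an SSL connection from the client side
}
conninfo = " ".join(f"{k}={v}" for k, v in params.items())
print(conninfo)
```

The same key/value pairs map onto the arguments of RPostgres::dbConnect() on the R side.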

SQL Anywhere linux odbc SQL_HANDLE_HENV error

I've got a linux server running RStudio, and I'm trying to connect to an SQL Anywhere database.
I have the drivers installed and configured, and I can connect using iSQL. When trying through RStudio, I continually get this error:
Error: nanodbc/nanodbc.cpp:983: 00000: [unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed
However, if I launch an R script straight from /opt/bin/r/rscript, it connects.
The same thing happens when trying to connect with Python through a conda environment in my home directory. However, if I launch by typing "python test.py" into the terminal, the connection succeeds.
I'm on Ubuntu 18.04 with the SQL Anywhere 17 drivers. Any ideas would be appreciated.
Thanks.
I just solved this issue with a very similar setup: Connecting to a SQL Anywhere database, where the connection works from R when launched from the command line, but not from RStudio, and gives the error:
SQLAllocHandle on SQL_HANDLE_HENV failed
The key was to set the environment variables in RStudio to match those in my regular shell. In my case, these were $ODBCINI and $LD_LIBRARY_PATH. I reset them as follows:
1. In the shell, run the following to get the values used by console R:
echo $ODBCINI
echo $LD_LIBRARY_PATH
2. In RStudio, run Sys.getenv() to confirm these values differ.
3. Reset the variables to match with:
Sys.setenv(ODBCINI = "[path from shell]/odbc.ini")
Sys.setenv("LD_LIBRARY_PATH" = paste0(Sys.getenv("LD_LIBRARY_PATH"), ":[user path from shell]"))
With this setup, I was able to connect from RStudio.
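The same diagnosis applies to the Python case mentioned in the question: compare the environment the IDE sees with the shell's, and patch it before the driver manager loads. A sketch, with hypothetical paths:

```python
import os

# Values copied from the working shell session (hypothetical paths).
shell_env = {
    "ODBCINI": "/opt/sqlanywhere17/odbc.ini",
    "LD_LIBRARY_PATH": "/opt/sqlanywhere17/lib64",
}

# Patch the current process environment, appending to any existing value.
for key, value in shell_env.items():
    current = os.environ.get(key, "")
    if value not in current.split(":"):
        os.environ[key] = f"{current}:{value}" if current else value

print(os.environ["ODBCINI"])
```

Note that LD_LIBRARY_PATH is consulted when shared libraries are loaded, so this has to happen before the ODBC module is imported (or before the process starts at all).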
I was getting this same error message on my new M1 Mac when I tried to connect to SQLite from RStudio. The RStudio solutions manual (https://solutions.rstudio.com/db/best-practices/drivers/) still only lists connection strings for odbc.ini and odbcinst.ini on Intel Macs. When I changed the SQLite driver path in odbcinst.ini (located in /opt/homebrew/etc after the Homebrew install) to /opt/homebrew/lib/libsqlite3odbc.dylib instead of /usr/local/lib/libsqlite3odbc.dylib, it finally worked.
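Assuming the Homebrew layout described above, the corresponding odbcinst.ini entry on Apple silicon would look something like this (the section name is whatever your DSN references):

```ini
[SQLite3]
Description = SQLite ODBC driver (Homebrew, Apple silicon)
Driver      = /opt/homebrew/lib/libsqlite3odbc.dylib
```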

Connect to ORACLE via R, using the info in sql developer

I am working on a machine without admin rights. I use SQL Developer to connect to an internal database, and I would like to connect via R as well.
Is there any way I can do this without admin rights? Some solutions require me to set up a system DSN, which I cannot do.
Others require me to install jvm.dll.
My environment: Windows7, sqldeveloper, connection method is via TNS file.
Connecting to Oracle via R is far more difficult than with other databases I've encountered. It's important that you have ojdbc6.jar on your machine and that you know the file path to where it was installed. Installing the jar file does not require admin rights; you can download it from Oracle's website.
I use the RJDBC package to connect like so:
library(RJDBC)
jdbcDriver <- JDBC("oracle.jdbc.OracleDriver", classPath = "file path to where ojdbc6.jar is installed on your computer")
jdbcConnection <- dbConnect(jdbcDriver, "jdbc:oracle:thin:@YOUR_SERVER", "YOUR_USERNAME", "YOUR_PASSWORD")
You can then test the connection with a number of commands; I typically use:
dbListTables(jdbcConnection)
Another favorite of mine is to use dbplyr for dplyr-like functions when working with databases:
library(dplyr)
library(dbplyr)
tbl(jdbcConnection, "SAMPLE_TABLE_NAME")
The resulting output will be the data from the queried table in tibble form.
You can set the environment variables in your R session.
Sys.setenv(OCI_LIB64="/Path/to/instantclient",OCI_INC="/Path/to/instantclient/sdk/include")
You can put this in the file .Rprofile in your home directory, and R will run it each time you begin a new session. Once you have this in .Rprofile you should be able to install ROracle.

odbcConnectExcel function from RODBC package for R not found on Ubuntu

Installing the RODBC package on Ubuntu is a bit of a kludge. First I learned to install the following:
$ sudo apt-get install r-cran-rodbc
That wasn't good enough as the package was still looking for header files. I solved this issue by:
$ sudo apt-get install unixodbc-dev
Good, RODBC installed properly on the Ubuntu machine. But when I try to run the following script:
## import excel file from Dropbox
require("RODBC")
channel <- odbcConnectExcel("~/Dropbox/DATA/SAMPLE/petro.xls")
petro <- sqlFetch (channel, "weekly")
odbcClose(channel)
str(petro)
head(petro)
I get an error saying the function odbcConnectExcel was not found. I checked the case of each letter, making sure it was not a simple typo. Nope. Then I ran the same script on a Windows R installation (with a different file path, of course) and the script works.
Any idea of why Ubuntu R installation cannot find the odbcConnectExcel function and how I can get this to work?
That functionality is available where Excel is available. In other words: not on Ubuntu.
For reference, from the R Data Import / Export manual (with my highlighting):
4.3.2 Package RODBC

Package RODBC on CRAN provides an interface to database sources supporting an ODBC interface. This is very widely available, and allows the same R code to access different database systems. RODBC runs on Unix/Linux, Windows and Mac OS X, and almost all database systems provide support for ODBC. We have tested Microsoft SQL Server, Access, MySQL, PostgreSQL, Oracle and IBM DB2 on Windows and MySQL, Oracle, PostgreSQL and SQLite on Linux.

ODBC is a client-server system, and we have happily connected to a DBMS running on a Unix server from a Windows client, and vice versa.

On Windows ODBC support is normally installed, and current versions are available from http://www.microsoft.com/data/odbc/ as part of MDAC. On Unix/Linux you will need an ODBC Driver Manager such as unixODBC (http://www.unixODBC.org) or iODBC (http://www.iODBC.org: this is pre-installed in Mac OS X) and an installed driver for your database system.

Windows provides drivers not just for DBMSs but also for Excel (.xls) spreadsheets, DBase (.dbf) files and even text files. (The named applications do not need to be installed. Which file formats are supported depends on the versions of the drivers.) There are versions for Excel 2007 and Access 2007 (go to http://download.microsoft.com, and search for Office ODBC, which will lead to AccessDatabaseEngine.exe), the '2007 Office System Driver'.
I've found RODBC to be a real pain in the Ubuntu. Maybe it's because I don't know the right incantations, but I switched to RJDBC and have had much better luck with it. As discussed here.
As Dirk says, that won't solve your Excel problem. For writing Excel files I've had very good luck with the WriteXLS package. On Ubuntu I found it quite easy to set up: I had Perl and many of the needed packages already installed, and simply had to install Text::CSV_XS, which I did with the GUI package manager. The reason I like WriteXLS is the ability to write data frames to different sheets in the Excel file. And now that I look at your question, I see that you want to READ Excel files, not WRITE them. Hell. WriteXLS doesn't do that. Stick with gdata, like Dirk said in his comments:
gdata is on CRAN, and you are going to want the read.xls() function:
read.xls("//path//to/excelfile.xls", sheet = 1, verbose = FALSE, pattern, ...,
         method = c("csv", "tsv", "tab"), perl = "perl")
You may need to run installXLSXsupport, which installs the needed Perl modules.
read.xls expects sheet numbers, not names. The method parameter is simply the intermediate file format: if your data contains tabs, don't use tab as the intermediate format, and likewise for commas and csv.
