Ubuntu Shiny Server connecting to Jet/ACE databases

Can it be done: reading data stored in an MS Access (.accdb) database from within Shiny apps running on an Ubuntu Shiny server?
We have no knowledge of SQL Server Express. We have our data organized in simple MS Access databases, and want to deploy our Shiny apps (which visualize this data) on an Ubuntu Shiny server.
It all works on our local Windows machines, but how do we make it also work with an Ubuntu Shiny server?
I understand that, with our minimal knowledge of database systems, porting our databases to SQL Server Express is not straightforward.
Thanks in advance for your expertise!

I had a bit of a job setting this up myself. I had to pull together info from several sources to get all the required packages; the following is a list of good info sources:
http://guywyant.info/log/206/connecting-to-ms-sql-server-from-ubuntu/
http://driftharmony.wordpress.com/2008/08/15/connecting-ubuntu-804-to-microsoft-sql-server/
https://code.google.com/p/django-pyodbc/wiki/FreeTDS
At first I had FreeTDS working but ODBC could not connect. The three files were ultimately configured thus:
Detail of file: /etc/odbc.ini
[NameThis]
Driver = FreeTDS
TDS_Version = 8.0
Servername = YourServer
Port = 1433
Database = testing
Trace = No
Detail of file: /etc/odbcinst.ini
[FreeTDS]
Description = FreeTDS
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Detail of file: /etc/freetds/freetds.conf
# $Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
# This file is installed by FreeTDS if no file by the same name is found in the installation directory.
# For information about the layout of this file and its settings, see the freetds.conf manpage "man freetds.conf".
# Global settings are overridden by those in a database server specific section
[global]
# TDS protocol version
; tds version = 4.2
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
; dump file = /tmp/freetds.log
; debug flags = 0xffff
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field. Try setting 'text size' to a more reasonable limit
text size = 64512
# Test Kx
[NameThis]
host = YOUR IP
port = 1433
tds version = 7.2
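
With those three files in place, an app can open the DSN by name from R. A minimal sketch using RODBC (uid, pwd, and the table name are placeholders, not values from the files above):
library(RODBC)
# Open the DSN defined in /etc/odbc.ini
ch <- odbcConnect("NameThis", uid = "your_user", pwd = "your_password")
# Pull a table into a data frame for the Shiny app to visualize
dat <- sqlQuery(ch, "SELECT * FROM your_table")
odbcClose(ch)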


How to install ODBC Driver 17 for SQL Server on an Azure Databricks cluster with no internet access

My goal is to access Azure Synapse Analytics from Azure Databricks. The first thing that came to mind is to use the Spark driver com.databricks.spark.sqldw. But for that, the database user needs to be db_owner in the database, which is not suitable, since users could mess around with Synapse. I just want users to read data from Synapse using their own Active Directory accounts.
My second shot is to try using an ODBC driver (or JDBC) to access Synapse as we normally do in local Python scripts. The problem is that our Databricks clusters have no internet access, so we can't just run apt-get-like commands (in order to install the ODBC drivers).
So, any of these questions may help me to solve the problem:
How do I copy a file from an Azure Storage Account Gen2 to the local Databricks cluster file system? I put the ODBC Driver 17 for SQL Server (msodbcsql17_17.10.1.1-1_amd64.deb) in a container in the Storage Account and I can see it using dbutils. But can I copy that file to the Databricks cluster filesystem?
Is it possible to use the default spark driver com.databricks.spark.sqldw for accessing Azure Synapse Analytics with SELECT permission only?
For anyone wondering why I'm not using the Synapse Apache Spark pool: it's because I can't run queries like SELECT * FROM A INNER JOIN B.... And of course, the Databricks UI is much better ; )
Thanks for any help.
Generically answering my own question of "how do I access a file from Azure Storage Account Gen2 using the Databricks cluster filesystem": it is quite simple. I just had to mount it using dbutils, and the mount automatically appears in /dbfs/mnt. For instance (Python script):
dbutils.fs.mount(
  source = "wasbs://<container>@<storage>.blob.core.windows.net",
  mount_point = "/mnt/my_adlsgen2",
  extra_configs = {"fs.azure.account.key.<storage>.blob.core.windows.net": "account key"})
and the mount will be at:
/dbfs/mnt/my_adlsgen2
Finally, you can go to the above folder and list files or do whatever you want with it.
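As a quick sanity check, the mounted files are visible through the /dbfs FUSE path from any notebook cell; for example, from an R cell:
# List the mounted files via the FUSE path (hypothetical quick check)
list.files("/dbfs/mnt/my_adlsgen2")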
In order to install the ODBC driver that is sitting in ADLS Gen2, I had to open a new cell and run:
%sh
echo msodbcsql17 msodbcsql/ACCEPT_EULA boolean true | sudo debconf-set-selections
sudo dpkg -i /dbfs/mnt/my_adlsgen2/msodbcsql17_17.10.1.1-1_amd64.deb
And to check that it installed OK:
%sh
odbcinst -j
unixODBC 2.3.6
DRIVERS............: /etc/odbcinst.ini
SYSTEM DATA SOURCES: /etc/odbc.ini
FILE DATA SOURCES..: /etc/ODBCDataSources
USER DATA SOURCES..: /root/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
and lastly:
%sh cat /etc/odbcinst.ini
[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.1.1
UsageCount=1
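
With the driver registered, a quick connection test is possible. A sketch from an R cell using DBI and odbc (assuming those packages are available on the cluster; the server, database, and credentials are hypothetical placeholders, and the equivalent works from Python with pyodbc):
library(DBI)
# Hypothetical Synapse endpoint and credentials: replace with your own
con <- dbConnect(odbc::odbc(),
                 Driver   = "ODBC Driver 17 for SQL Server",
                 Server   = "your-workspace.sql.azuresynapse.net",
                 Database = "your_database",
                 UID      = "your_user",
                 PWD      = "your_password",
                 Port     = 1433)
dbGetQuery(con, "SELECT TOP 5 * FROM some_table")  # some_table is hypothetical
dbDisconnect(con)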

Wsrep MariaDB crash: Thread pointer: 0x0

The cluster is constantly crashing.
Sometimes it runs stably for 2 days; sometimes it crashes five minutes after the two servers come up.
server.cnf was tested with different parameters; the result did not change.
When the cluster setup is not applied, a single server runs without problems.
I have done a clean installation several times.
iptables stopped
SELinux disabled
yum repo
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/10.1/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
CentOS 6.9 64-bit
yum install MariaDB-server MariaDB-client
Error log
/var/lib/mysql/maria1.err
180305 16:17:03 [ERROR] mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
To report this bug, see https://mariadb.com/kb/en/reporting-bugs
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.
Server version: 10.1.31-MariaDB
key_buffer_size=134217728
read_buffer_size=131072
max_used_connections=5
max_threads=153
thread_count=2
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 467163 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0x0 thread_stack 0x48400
/etc/my.cnf.d/server.cnf
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
bind-address=0.0.0.0
skip-name-resolve
#
# * Galera-related settings
#
[galera]
# Mandatory settings
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://10.1.1.11,10.1.1.12"
wsrep_cluster_name='galera_cluster'
wsrep_node_address='10.1.1.11'
wsrep_node_name='maria1'
wsrep_sst_method=rsync
wsrep_sst_auth=sst_user:!!PassWordSs!!
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
#
# Allow server to accept connections on all interfaces.
#
#bind-address=0.0.0.0
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0
# this is only for embedded server
[embedded]
# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# This group is only read by MariaDB-10.1 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.1]
Please report this crash to the MariaDB bug tracker.

How to make a client/server connection using Rserve and Windows Server 2008

I am searching for a robust solution to perform extensive computations on a remote server dedicated to computational tasks. The server runs Windows Server 2008 R2 and has R x64 3.4.1 installed. I've searched for free solutions and am now focusing on the Rserve/RSclient packages.
However, I can't connect any client (using RSclient) to the running server.
This is how I'm proceeding at the moment from the server side:
library(Rserve)
run.Rserve(config.file = "Rserv.conf")
using the following Rserv.conf file:
port 6311
remote enable
plaintext enable
control enable
r-control enable
The server is now instantiated from the R session (it's a bit ugly, but I'll change that later on):
running Rserve in this R session (pid=...), 1 server(s)
Now I'm trying to connect from a remote computer (client side) using:
library(RSclient)
c = RS.connect(host = "...")
The connection then seems to succeed; checking c:
> c
Rserve QAP1 connection 0x000000000fbe9f50 (socket 764, queue length 0)
The error occurs when I try to eval anything, for example:
> RS.server.eval(c,"0<1")
Error in RS.server.eval(c, "0<1") : command failed with status code 0x4e: no control line present (control commands disabled or server shutdown)
I've read the available guides but still fail to connect. What is wrong? It seems to be related to control commands, but I enabled them in the config file.
For me, the problem was solved by starting the Rserve instance with the command:
R CMD Rserve --RS-port 9000 --RS-enable-remote --RS-enable-control
instead of starting it in the R environment (library(Rserve), run.Rserve(config.file = "Rserv.conf")). You may try this on Windows as well.
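Once the server is started this way, a plain round trip from the client should work without control commands. A minimal sketch (the host is a placeholder; the port matches the command above):
library(RSclient)
# Connect to the remote Rserve started on port 9000
con <- RS.connect(host = "your.server.ip", port = 9000)
# RS.eval performs an ordinary evaluation and does not need control commands
RS.eval(con, 1 + 1)
RS.close(con)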
Refer to https://github.com/s-u/Rserve/wiki/rserve.conf. In the config file above, the line "remote enable" should be "remote true":
port 6311
remote true
plaintext enable
control enable
r-control enable
Likewise, follow that link and retry with the actual values.

Establishing MS Access connection with UnixODBC and FreeTDS on Mac

I've been trying to establish a connection to an MS Access database on my local hard drive using FreeTDS and unixODBC. My ultimate goal is to open the connection in R via RODBC and run some SQL scripts developed for this specific database to extract data. I've followed advice from this page (How do I install RODBC on Mac OS X Yosemite with unixodbc and freetds?), but am still having trouble.
When I run isql in the terminal I get the following error message.
[S1000][unixODBC][FreeTDS][SQL Server]Unable to connect to data source
[01000][unixODBC][FreeTDS][SQL Server]Unknown host machine name.
[ISQL]ERROR: Could not SQLConnect
I'm assuming my error is in how I've identified the host in my various setup files, which are as follows.
freetds.conf
[global]
; tds version = 8.0
; dump file = /tmp/freetds.log
; debug flags = 0xffff
; timeout = 10
; connect timeout = 10
text size = 64512
[my_db]
# insert the actual host below
host = My_computer_name.local
port = 1433
tds version = 8.0
odbc.ini
[my_db]
Driver = MSSQL
Servername = My_computer_name.local
Port = 1433
Database = /filepath_to_db/my_db.mdb
TDS_Version = 8.0
odbcinst.ini
[MSSQL]
Description = Microsoft SQL Server driver
Driver = /usr/local/Cellar/freetds/1.00.39/lib/libtdsodbc.so
Setup = /usr/local/Cellar/freetds/1.00.39/lib/libtdsodbc.so
FreeTDS is for connecting to Microsoft SQL Server and Sybase databases. It is not designed to work with Microsoft Access databases.
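If the goal is just to read the .mdb file into R on a Mac, one alternative worth trying (an assumption on my part, not part of the setup above) is mdbtools together with Hmisc::mdb.get, which bypasses ODBC entirely:
# Requires mdbtools (e.g. brew install mdbtools) and the Hmisc package
library(Hmisc)
# List the tables in the file, then pull one into a data frame
mdb.get("/filepath_to_db/my_db.mdb", tables = TRUE)
dat <- mdb.get("/filepath_to_db/my_db.mdb", tables = "your_table")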

Restore Postgres Database from pg_data directory

I have a broken VM that won't boot, with an old PostgreSQL database on it (it used to run PostgreSQL 8.4). I have access to the file system (and the pg_data directory).
How can I extract the data (or restore the database) from this data directory?
Is it as simple as copying the contents of this directory into a working 8.4 pg_data directory?
Actually it is basically that simple. Here are the steps I took to get this working:
1) Archive the data directory (/var/lib/postgres/8.4/data) into a tar.gz file.
2) Move the file to a working workstation (my desktop, running a Debian-based Linux distribution).
3) Install the PostgreSQL APT repository and install postgresql-8.4 (or whichever version was on the broken server), using the instructions found at the PostgreSQL Linux downloads page for Ubuntu.
4) Extract the contents of the tar.gz file into the main directory for the "new" PostgreSQL 8.4 installation (/var/lib/postgresql/8.4/main/).
5) Modify postgresql.conf to change port = 5432 to port = 5433. This lets us control which version of PostgreSQL we connect to by port number (assuming we have the latest stable version on our workstation, such as 9.1): 9.1 stays on the default 5432, and 8.4 goes on 5433.
6) Modify the ownership of the extracted data directory so postgres is the owner: chown -R postgres:postgres /var/lib/postgresql/8.4/main/*
7) Start the postgres service: service postgresql start (you'll see both versions start up)
8) su as postgres and connect using port 5433 and the database name that was on the old server: psql -p 5433 DatabaseName
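
Once both clusters are up, the restore can also be checked from R (a sketch assuming the DBI and RPostgres packages; DatabaseName is the placeholder from step 8):
library(DBI)
# Connect to the restored 8.4 cluster on its non-default port
con <- dbConnect(RPostgres::Postgres(), dbname = "DatabaseName",
                 host = "localhost", port = 5433, user = "postgres")
dbListTables(con)  # the old server's tables should appear here
dbDisconnect(con)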
