Set database size in odbc and snowsql? - odbc

I connect to Snowflake via both ODBC and SnowSQL. For whatever reason, when querying via ODBC and looking at running queries in the History tab, the size is set to Large. When querying via SnowSQL, the size is set to Small.
I did some searching through the ODBC and SnowSQL configuration options, but could not tell whether this is something I set within the UI or via the connection settings in either client.
Can I set size via a connection setting for either odbc or SnowSQL? If yes, how?
E.g. my current sanitized connection settings:
SnowSQL:
Config file:
[connections]
authenticator = SNOWFLAKE_JWT
private_key_path = /root/.snowsql/rsa_key.p8
[variables]
#Loads these variables on startup
#Can be used in SnowSql as select $example_variable
example_variable=27
[options]
# If set to false, auto-completion will not occur in interactive mode.
auto_completion = True
# going to use start and end date variables from environment in the sql script
variable_substitution = True
# main log file location. The file includes the log from SnowSQL main
# executable.
log_file = /root/.snowsql/snowsql_rt.log
# bootstrap log file location. The file includes the log from SnowSQL bootstrap
# executable.
log_bootstrap_file = /root/.snowsql/log_bootstrap
# Default log level. Possible values: "CRITICAL", "ERROR", "WARNING", "INFO"
# and "DEBUG".
log_level = DEBUG
# Timing of SQL statements and table rendering.
timing = True
# Table format. Possible values: psql, plain, simple, grid, fancy_grid, pipe,
# orgtbl, rst, mediawiki, html, latex, latex_booktabs, tsv.
# Recommended: psql, fancy_grid and grid.
output_format = psql
# Keybindings: Possible values: emacs, vi.
# Emacs mode: Ctrl-A is home, Ctrl-E is end. All emacs keybindings are available in the REPL.
# When Vi mode is enabled you can use modal editing features offered by Vi in the REPL.
key_bindings = emacs
# OCSP Fail Open Mode.
# The only OCSP scenario which will lead to connection failure is an OCSP response with a
# revoked status. Any other errors in the OCSP module will not raise an error.
# ocsp_fail_open = True
# Enable temporary credential file for Linux users
# For Linux users, since there is no OS key store, an insecure temporary credential file for SSO can be enabled by this option. The default value for this option is False.
# client_store_temporary_credential = True
# Repository Base URL
# The endpoint to download the SnowSQL main module.
repository_base_url = https://sfc-repo.snowflakecomputing.com/snowsql
I use this config when connecting with the following:
snowsql -f ${INPUT_QUERY_FILE} \
-o quiet=true \
-o friendly=false \
-o header=true \
-o output_format=csv \
-o output_file=output_data/data.csv \
--accountname=${INPUT_ACCOUNT_NAME} \
--username=${INPUT_USER_NAME} \
--dbname=${INPUT_DBNAME} \
--private-key-path=/root/.snowsql/rsa_key.p8 \
--config /config
This runs as expected and returns the query result, but it runs with the Small size when querying Snowflake. Is there a way to adjust this, e.g. to Large?
And for odbc:
[snowflake]
Description=SnowflakeDB
Driver=SnowflakeDSIIDriver
Locale=en-US
SERVER=<ourorg>.us-east-1.snowflakecomputing.com
PORT=443
SSL=on
ACCOUNT=<ourorg>.us-east-1
# change to your snowflake user_name
UID=MY_NAME
# change to /home/<your-home>/keys/rsa_key.p8
PRIV_KEY_FILE=/home/my_name/keys/rsa_key.p8
# change "blahblah' to the passphrase you set for your private key in step 1.c
PRIV_KEY_FILE_PWD=blahblah
AUTHENTICATOR=SNOWFLAKE_JWT
And then I connect with (R)
sfconn <- DBI::dbConnect(odbc::odbc(), dsn = 'snowflake', warehouse = 'DATA_SCIENCE_WH_L',
database = 'ourorg', role='DATA_SCIENCE_FULL')
This connects fine and I'm able to query Snowflake using the ODBC client. The query is run with the Large size. Is it possible to adjust this?

By 'size' do you mean the virtual warehouse size? Are you specifying the warehouse when connecting via SnowSQL, or defaulting to one that is sized as Small? I see on the ODBC side that you are using a warehouse named DATA_SCIENCE_WH_L, while you are not listing a warehouse (or role) on the SnowSQL command line.
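If it is the warehouse size, that size is a property of the warehouse itself, not of the ODBC or SnowSQL client: you pick it by naming a warehouse on the connection (or by resizing that warehouse in Snowflake). As a sketch for the SnowSQL side, reusing the warehouse and role from your ODBC connection and assuming your user is allowed to use them:
snowsql -f ${INPUT_QUERY_FILE} \
--accountname=${INPUT_ACCOUNT_NAME} \
--username=${INPUT_USER_NAME} \
--dbname=${INPUT_DBNAME} \
--warehouse=DATA_SCIENCE_WH_L \
--rolename=DATA_SCIENCE_FULL \
--private-key-path=/root/.snowsql/rsa_key.p8 \
--config /config
Alternatively, set warehousename = DATA_SCIENCE_WH_L (and rolename) under [connections] in the SnowSQL config file, or put USE WAREHOUSE DATA_SCIENCE_WH_L; at the top of the query file. On the ODBC side the warehouse is already pinned by the warehouse = 'DATA_SCIENCE_WH_L' argument to dbConnect(), which is why those queries show up as Large.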

Related

How to collect jolokia data via telegraf but just if the jolokia connection is active?

When my application is up, telegraf works fine and collects the Jolokia data, since my application opens port 11722, which telegraf uses to get the metrics. But when my application is down, telegraf starts to get errors since it can't connect to Jolokia. My telegraf version is 1.5.3 and this is a production environment, so I don't have much flexibility to change the version. Is there a way to collect the Jolokia metrics only when my application is up and running?
I've tried to create a script that checks whether Jolokia is running and sets a tag that I could then use with my agent, but this didn't work:
[[inputs.exec]]
  commands = ["sh /local/1/home/svcegctp/telegraf/inputs/scripts/check_jolokia.sh"]
  timeout = "1s"
  data_format = "influx"
  name_override = "jvm_status"
  [inputs.exec.tags]
    running = "true"
(...)
[[inputs.jolokia2_agent]]
  # Add agents URLs to query
  urls = ["http://localhost:11722/jolokia"]
  [inputs.jolokia2_agent.tags]
    running = "true"
This is my script:
check_jolokia.sh
#!/bin/bash
if curl -s -u <username>:<password> http://localhost:11722/jolokia/version >/dev/null 2>&1; then
echo "jvm_status running=true"
else
echo "jvm_status running=false"
fi

How to run Jupyter notebooks locally with password and no token?

Since an update, the jupyter notebook command runs Jupyter with a token by default, so you have to open a URL like http://localhost:8889/?token=46b110632ds2f...
This is not very convenient, since you need to copy-paste this token from the terminal. How can I run a Jupyter server with a predefined password, so that I can save it in my browser and don't need to copy-paste the token from the command line?
From the command line you can run:
jupyter notebook password
The command prompt will ask you for the password and then set the hash in a JSON document in your configuration directory.
You can determine that with:
jupyter --config-dir
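For example, on a typical install the hash ends up in jupyter_notebook_config.json inside that directory (the path and hash below are illustrative):
$ cat ~/.jupyter/jupyter_notebook_config.json
{
  "NotebookApp": {
    "password": "sha1:67c9e60bb8b6:9ffede0825894254b2e042ea597d771089e11aed"
  }
}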
If you delete the file, the password will no longer work.
You may wish to set up SSL as well.
You can configure every option in a file generated by the command jupyter notebook --generate-config. This produces ~/.jupyter/jupyter_notebook_config.py, with every option explained and commented out.
In this file you can uncomment
## Allow password to be changed at login for the notebook server.
#
# While logging in with a token, the notebook server UI will give the opportunity
# to the user to enter a new password at the same time that will replace the
# token login mechanism.
#
# This can be set to false to prevent changing password from the UI/API.
c.NotebookApp.allow_password_change = True
and set a starting token, or no token at all:
## Token used for authenticating first-time connections to the server.
#
# When no password is enabled, the default is to generate a new, random token.
#
# Setting to an empty string disables authentication altogether, which is NOT
# RECOMMENDED.
c.NotebookApp.token = ''
$ jupyter notebook --port 5000 --no-browser --ip='*' --NotebookApp.token='' --NotebookApp.password=''
This will give the following warnings; understand the risk.
[W 09:04:50.273 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using encryption. This is not recommended.
[W 09:04:50.274 NotebookApp] WARNING: The notebook server is listening on all IP addresses and not using authentication. This is highly insecure and not recommended.
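If you have already set a password with jupyter notebook password as described above, a less exposed variant (a sketch; the port is arbitrary) is to disable only the token, keep the default localhost binding, and let the password hash from the config file handle authentication:
jupyter notebook --port 5000 --no-browser --NotebookApp.token=''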

Ubuntu Shiny server connecting to Jet/ACE databases

Can it be done: Reading data stored in an MS Access (.accdb) database, from within Shiny apps running on Ubuntu Shiny server?
We have no knowledge of SQL Server Express. We have our data organized in simple MS Access databases, and want to deploy our Shiny apps (which visualize this data) on an Ubuntu Shiny server.
It all works on our local Windows machines, but how to make it also work with an Ubuntu Shiny server?
I understand that with our minimal knowledge of database systems, it is not straightforward for us to port our databases to SQL Server Express.
Thanks in advance for your expertise!
I had a bit of a job setting this up myself. I had to take info from several sources to get all the required packages – the following is a list of good info sources:
http://guywyant.info/log/206/connecting-to-ms-sql-server-from-ubuntu/
http://driftharmony.wordpress.com/2008/08/15/connecting-ubuntu-804-to-microsoft-sql-server/
https://code.google.com/p/django-pyodbc/wiki/FreeTDS
FreeTDS working, but ODBC cannot connect
The 3 files were ultimately configured thus:
Detail of file: /etc/odbc.ini
[NameThis]
Driver = FreeTDS
TDS_Version=8.0
Servername = YourServer
Port = 1433
Database = testing
Trace = No
Detail of file: /etc/odbcinst.ini
[FreeTDS]
Description = FreeTDS
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Detail of file: /etc/freetds/freetds.conf
# $Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
# This file is installed by FreeTDS if no file by the same name is found in the installation directory.
# For information about the layout of this file and its settings, see the freetds.conf manpage "man freetds.conf".
# Global settings are overridden by those in a database server specific section
[global]
# TDS protocol version
; tds version = 4.2
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
; dump file = /tmp/freetds.log
; debug flags = 0xffff
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field. Try setting 'text size' to a more reasonable limit
text size = 64512
# Test Kx
[NameThis]
host = YOUR IP
port = 1433
tds version = 7.2
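With the three files in place, a quick way to test the DSN outside of R or Shiny is unixODBC's isql tool (assuming it is installed; the DSN name matches the [NameThis] section above and the credentials are placeholders):
isql -v NameThis YourUser YourPassword
A successful connection drops you at a SQL> prompt where you can run a test query against the server.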

Openstack-Keystone failing to start

I've tried almost everything in the past couple of days to get keystone running, to no avail.
The setup is all on one host: the virtualization, OpenStack, and keystone run on the same machine. I've tried setting up keystone with 127.0.0.1, localhost, and the IP of the host, with no luck.
Here is my keystone.conf:
[DEFAULT] log_file = /var/log/keystone/keystone.log
admin_token = ***
bind_host = 192.168.33.11
public_port = 5000
admin_port = 35357
compute_port = 8774
# === Logging Options ===
# Print debugging output verbose = True
# Print more verbose output
# (includes plaintext request logging, potentially including passwords)
# debug = False
# Name of log file to output to. If not set, logging will go to stdout. log_file = keystone.log
# The directory to keep log files in (will be prepended to --logfile) log_dir = /var/log/keystone
# Use syslog for logging.
# use_syslog = False
# syslog facility to receive log lines
# syslog_log_facility = LOG_USER
# If this option is specified, the logging configuration file specified is
# used and overrides any other logging options specified. Please see the
# Python logging module documentation for details on logging configuration
# files. log_config = logging.conf
# A logging.Formatter log message format string which may use any of the
# available logging.LogRecord attributes.
# log_format = %(asctime)s %(levelname)8s [%(name)s] %(message)s
# Format string for %(asctime)s in log records.
# log_date_format = %Y-%m-%d %H:%M:%S
# onready allows you to send a notification when the process is ready to serve
# For example, to have it notify using systemd, one could set shell command:
# onready = systemd-notify --ready
# or a module with notify() method:
# onready = keystone.common.systemd
[sql] connection = mysql://keystone:***@localhost/keystone
# idle_timeout = 200
[identity] driver = keystone.identity.backends.sql.Identity
[catalog] template_file = /etc/keystone/default_catalog.templates driver = keystone.catalog.backends.sql.Catalog
# dynamic, sql-based backend (supports API/CLI-based management commands)
# driver = keystone.catalog.backends.sql.Catalog
# static, file-based backend (does *NOT* support any management commands)
# driver = keystone.catalog.backends.templated.TemplatedCatalog
# template_file = default_catalog.templates
[token] driver = keystone.token.backends.sql.Token
# driver = keystone.token.backends.kvs.Token
# Amount of time a token should remain valid (in seconds)
# expiration = 86400
I've enabled logging in the logging.conf file and set the level to DEBUG and INFO; however, there is nothing in the log files.
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# service openstack-keystone restart
Stopping keystone: [FAILED]
Starting keystone: [ OK ]
[root@* keystone]# ps aux | grep keystone
root 25580 0.0 0.0 103236 880 pts/1 S+ 09:41 0:00 grep keystone
[root@* keystone]#
Any ideas will be greatly appreciated. Thank you.
As I mentioned in the comment, I've never seen a config file with the section headings on the same line as a config option:
[DEFAULT] log_file = /var/log/keystone/keystone.log
I've always seen it like this instead:
[DEFAULT]
log_file = /var/log/keystone/keystone.log
However, I have no idea if this is related to your issue.
To enable debug-level logging, make sure you set the following in /etc/keystone/logging.conf:
[logger_root]
level=DEBUG
Then try running keystone manually instead of as a service:
$ sudo -u keystone bash
$ HOME=/var/lib/keystone keystone-all --debug
Hopefully you'll see a relevant error message on standard out.
(I believe it will still send the logging to /var/log/keystone/keystone.log, not sure how to actually get it to log to standard out when running manually like this).
Add a valid token for admin_token. It should not be "*".
Check the below line:
[sql] connection = mysql://keystone:*@localhost/keystone
It should be something like:
connection = mysql://keystone:keystone@localhost/keystone
Refer to this URL for an example keystone.conf file:
http://docs.openstack.org/trunk/openstack-compute/install/yum/content/keystone-conf-file.html
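Independently of keystone, you can check that the credentials in that connection string actually work by connecting to MySQL directly (the password here is just the placeholder from the example above):
mysql -u keystone -pkeystone -h localhost keystone -e 'SHOW TABLES;'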
I ran into this issue as well. I am running on Ubuntu 12.04 LTS. What I found was that the service start command in /etc/init/keystone.conf uses start-stop-daemon to run the service. It was written for a newer version than the one on my box: the --chdir variable is not accepted as an input. Once I removed that line, keystone started right up.
Try running:
start-stop-daemon --start --chuid keystone --name keystone --exec /usr/bin/keystone-all
/etc/init/keystone.conf after the change:
description "Keystone API server"
author "Soren Hansen <soren#linux2go.dk>"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec start-stop-daemon --start --chuid keystone \
--name keystone \
--exec /usr/bin/keystone-all
Check if your IP address is equal to HOST_IP=... in localrc.
This might be due to keystone not getting started properly, and therefore port 35357 is not in listening mode.
This seems to be anomalous behavior of the keystone service.
These are the steps that worked on my system for a Havana installation on Ubuntu 12.04, kernel version 3.2.0-67-generic, after a day of headache around this issue. Try them, preferably in the same order.
1) Remove the keystone package:
apt-get remove keystone
2) Reboot your system
reboot
3) After the reboot, install keystone again:
apt-get install keystone
4) Check the status of the keystone service:
service keystone status
It will show start/running
5) Now make the changes you want in /etc/keystone/keystone.conf.
After making changes in the conf file, DO NOT RESTART THE KEYSTONE SERVICE.
Use the stop and start commands to get the effect of a restart, but do not restart:
service keystone stop
service keystone start
For further help, here is a dump of my CLI:
http://pastebin.com/sduuFCL7
There are multiple problems with the Icehouse documentation and install. packstack is broken, so the only way to get started is to manually follow the upstream docs for your distro. It is important to set keystone up correctly first, before moving on, because the other services rely on it.
The paste file /usr/share/keystone/keystone-dist-paste.ini should be copied to /etc/keystone/ so it is accessible to the config scripts, like this:
cp /usr/share/keystone/keystone-dist-paste.ini /etc/keystone/
chown keystone:keystone /etc/keystone/*
Make sure to update keystone.conf with the new config_file value.
The documentation is wrong about the MySQL connection: it should go under [sql] and not [database], so:
openstack-config --set /etc/keystone/keystone.conf sql connection mysql://keystone:PASSWD@controller/keystone
The name controller should resolve to whatever MySQL is bound to. For example, if bind-address under [mysqld] in /etc/my.cnf is 10.1.1.100, add this to /etc/hosts:
10.1.1.100 controller
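You can confirm that the name resolves to the MySQL bind address before pointing keystone at it; getent consults /etc/hosts as well as DNS:
getent hosts controller
# expected output: 10.1.1.100      controller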
Make sure to uncomment log_file in keystone.conf to see what is happening.
I was facing a similar issue. I followed the steps below and the openstack-keystone service started.
Edit the /etc/keystone/keystone.conf file and complete the following actions:
In the [DEFAULT] section
[DEFAULT]
admin_token = ADMIN_TOKEN
In the [database] section
[database]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
In the [token] section, configure the UUID token provider and SQL driver
[token]
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
In the [revoke] section
[revoke]
driver = keystone.contrib.revoke.backends.sql.Revoke
After making the above changes, populate the Identity service database using the command:
su -s /bin/sh -c "keystone-manage db_sync" keystone
Start the openstack-keystone service using the command below:
systemctl start openstack-keystone
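Whichever of the fixes above applies, a quick sanity check that keystone actually came up is to confirm the process is running and listening on its default ports (5000 public, 35357 admin; adjust if you changed them in keystone.conf):
ps aux | grep [k]eystone
netstat -lnt | grep -E ':(5000|35357) '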

Importing files from PostgreSQL to R

I have a large data set and I want to perform some analysis on it in R.
However, I could not import the data properly into R.
I get this error:
Error in postgresqlNewConnection(drv, ...) : RS-DBI driver: (could not connect User#local on dbname "Intel"
I have used PostgreSQL to open the data and manage it. How can I import the existing data from PostgreSQL into R?
drv <- dbDriver("PostgreSQL")
con <- dbConnect(drv, host='localhost', port='5432', dbname='Swiss',
user='postgres', password='123456')
Moreover, "RPostgreSQL" package in R should be installed.
Try the R package RPostgreSQL http://cran.r-project.org/web/packages/RPostgreSQL/ .
You can see how to use it in http://code.google.com/p/rpostgresql/ .
Example:
library(RPostgreSQL)
drv <- dbDriver("PostgreSQL") ## loads the PostgreSQL driver
con <- dbConnect(drv, dbname="R_Project") ## Open a connection
rs <- dbSendQuery(con, "select * from R_Users") ## Submits a statement
fetch(rs,n=-1) ## fetch all elements from the result set
dbGetQuery(con, "select * from R_packages") ## Submit and execute the query
dbDisconnect(con) ## Closes the connection
dbUnloadDriver(drv) # Frees all the resources on the driver
You have to configure two things on the PostgreSQL server before you are able to connect remotely. These are the instructions for configuring this under Linux:
1. Find and configure postgresql.conf to allow the TCP service to accept connections from any host, not only localhost
find / -name "postgresql.conf"
In my Linux OS the file is located in /etc/postgresql/9.6/main/, so I modify it there. Add the line "listen_addresses = '*'" as follows:
/etc/postgresql/9.6/main/postgresql.conf
#listen_addresses = 'localhost' # what IP address(es) to listen on;
# insert the following line
listen_addresses = '*'
2. Find and configure pg_hba.conf to allow a client to connect from any host
sudo find / -name "pg_hba.conf"
In my Linux OS the file is located in /etc/postgresql/9.6/main/, so I modify it there. Add the line "host all all 0.0.0.0/0 trust" as follows:
sudo nano /etc/postgresql/9.6/main/pg_hba.conf
# Put your actual configuration here
# ----------------------------------
#
# If you want to allow non-local connections, you need to add more
# "host" records. In that case you will also need to make PostgreSQL
# listen on a non-local interface via the listen_addresses
# configuration parameter, or via the -i or -h command line switches.
#
# insert the following line
host all all 0.0.0.0/0 trust
3. Stop and start the server
sudo service postgresql stop
sudo service postgresql start
4. Connect with your client; now it should work.
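For example, from the client machine you can verify the remote connection with psql before trying it from R (replace the IP address, user, and database name with your own):
psql -h 192.168.1.50 -p 5432 -U postgres -d Swiss -c 'SELECT version();'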
GOOD LUCK!
