On my virtual server (Windows Server 2012 R2) I run some Tcl programs against an SQLite3 database.
Tcl, SQLite, the database (~25,000 items) and the code are all located in one shared folder, C:\data, which I can also access from my local PC (Win10) as drive S:. Several other programs work in the same manner; they are scheduled to run at night.
The code, now completed:
console show
set auto_path [file join [file dirname [info script]] "../lib"]
package require sqlite3
set datasource "./compdata.db3"
wm withdraw .
sqlite3 db $datasource
db eval {SELECT * FROM master_data;} {
    puts "$x1, $x2, $x3, $x4" ; update
}
db close
puts stderr ">>>Ok<<<"
exit
This runs perfectly, both locally on the server and when started from the PC.
Start command on server:
call c:\data\tcl858\bin\wish85.exe databasetest.t85
Start command on local PC:
call s:\tcl858\bin\wish85.exe databasetest.t85
If I add ORDER BY, it only runs when I start it from the PC.
db eval {SELECT * FROM master_data ORDER BY x1;} {
    puts "$x1, $x2, $x3, $x4" ; update
}
Started on the server, it fails with this error message:
Unable to open database file
    while executing
"db eval {SELECT * FROM master_data ORDER BY x1;} { puts "$x1, $x2, $x3, $x4" ; update }".
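One thing that may be worth checking (an assumption on my side, not something verified here): ORDER BY can make SQLite spill the sort into a temporary file, and "unable to open database file" is also what SQLite reports when that temporary file cannot be created, which would fit the fact that only the sorted query fails on the server. A minimal sketch that keeps the temporary data in memory instead:
db eval {PRAGMA temp_store = MEMORY;}   ;# standard SQLite pragma; avoids the on-disk temp file for sorting
db eval {SELECT * FROM master_data ORDER BY x1;} {
    puts "$x1, $x2, $x3, $x4" ; update
}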
Related
I'm trying to run Airflow with an Azure SQL database as the backend, using an mssql+pyodbc connection string (all relevant drivers have been installed).
While Airflow is able to connect to the DB and create tables (i.e., airflow initdb runs successfully), I'm facing issues while running airflow scheduler; as a result, the tasks triggered are always stuck in the "running" state.
This is the error I get while running airflow scheduler:
sqlalchemy.exc.ProgrammingError: (pyodbc.ProgrammingError) ('42000', "[42000] [Microsoft][ODBC Driver 17 for SQL Server][SQL Server]Incorrect syntax near '1'. (102) (SQLExecDirectW)")
[SQL: SELECT dag.dag_id AS dag_dag_id
FROM dag
WHERE dag.is_paused IS 1 AND dag.dag_id IN (?)]
[parameters: ('example_http_operator',)]
(Background on this error at: http://sqlalche.me/e/13/f405)
I'm using apache-airflow==1.10.11.
If you were able to run Airflow + Azure SQL DB with any configuration, please feel free to jump in.
I found a document and a talk about configuring Airflow with an Azure SQL DB. Maybe it's helpful for you.
Ref: Setting up Airflow on Azure & connecting to MS SQL Server
This post also gives some configuration pointers: Apache Airflow - Connection issue to MS SQL Server using pymssql + SQLAlchemy
For MSSQL as the backend DB, there is a workaround in Airflow#10713. I am using apache-airflow==1.10.15 and it solved the same error as yours.
The suggested command is attached below, though I applied the change by editing the file in vi instead of running sed.
RUN sed -i 's/import copy/import copy,sqlalchemy/g' /usr/local/lib/python3.6/site-packages/airflow/models/dag.py \
 && sed -i 's/DagModel.is_paused.is_(True)/DagModel.is_paused == sqlalchemy.sql.expression.true()/g' /usr/local/lib/python3.6/site-packages/airflow/models/dag.py
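After the substitution, the two affected lines in dag.py end up looking roughly like this (reconstructed from the sed patterns above, not copied from a patched installation):
import copy,sqlalchemy
DagModel.is_paused == sqlalchemy.sql.expression.true()
The second form makes SQLAlchemy render a boolean comparison that MSSQL accepts, instead of the "dag.is_paused IS 1" syntax shown in the error above.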
We are using MySQL in an Aurora cluster.
We have 2 instances: a master and a slave.
We are working with Spring transactions on top of a c3p0 connection pool.
We are using the MariaDB JDBC driver (version 2.2.3).
Our URL looks like this:
jdbc:mysql:aurora:myclaster-cluster.cluster-xxxxxx.us-east-1.rds.amazonaws.com:3306/db?rewriteBatchedStatements=true
When testing failover, every few failovers we get into a state where we are using a read-only connection:
Caused by: org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback; uncategorized SQLException for SQL [INSERT INTO a (a1, a2, a3, a4) VALUES (?, ?, ?, ?) on duplicate key update ]; SQL state [HY000]; error code [1290]; (conn=7) The MySQL server is running with the --read-only option so it cannot execute this statement; nested exception is java.sql.SQLException: (conn=7) The MySQL server is running with the --read-only option so it cannot execute this statement
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:645)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:866)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:927)
at org.springframework.jdbc.core.JdbcTemplate.update(JdbcTemplate.java:937)
at com.persistence.impl.MyDao.insert(MyDao.java:52)
at org.springframework.scheduling.quartz.QuartzJobBean.execute(QuartzJobBean.java:75)
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
... 1 common frames omitted
Caused by: java.sql.SQLException: (conn=7) The MySQL server is running with the --read-only option so it cannot execute this statement
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:198)
at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.getException(ExceptionMapper.java:110)
at org.mariadb.jdbc.MariaDbStatement.executeExceptionEpilogue(MariaDbStatement.java:228)
at org.mariadb.jdbc.MariaDbPreparedStatementClient.executeInternal(MariaDbPreparedStatementClient.java:216)
at org.mariadb.jdbc.MariaDbPreparedStatementClient.execute(MariaDbPreparedStatementClient.java:150)
at org.mariadb.jdbc.MariaDbPreparedStatementClient.executeUpdate(MariaDbPreparedStatementClient.java:183)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeUpdate(NewProxyPreparedStatement.java:410)
at org.springframework.jdbc.core.JdbcTemplate$2.doInPreparedStatement(JdbcTemplate.java:873)
at org.springframework.jdbc.core.JdbcTemplate$2.doInPreparedStatement(JdbcTemplate.java:866)
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:629)
... 9 common frames omitted
How can we force the driver to return connections only to the master instance? Is there a way to force Aurora to close all open connections upon failover?
Thanks
We solved the problem by implementing an ExceptionInterceptor in which we close the connection, forcing the pool to create a new one.
This workaround is relevant for mysql-connector-java 5.1.47.
@Override
public SQLException interceptException(SQLException sqlEx, Connection conn) {
    if (sqlEx.getErrorCode() == READ_ONLY_ERROR_CODE) {
        log.warn("Got read only exception closing the connection {} ", sqlEx.getMessage());
        closeConnection(conn);
    }
    return sqlEx;
}
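For completeness (an assumption on my part, not stated in the answer): with Connector/J 5.1 an ExceptionInterceptor implementation is normally registered through the exceptionInterceptors connection property, e.g.
jdbc:mysql://myclaster-cluster.cluster-xxxxxx.us-east-1.rds.amazonaws.com:3306/db?exceptionInterceptors=com.example.ReadOnlyExceptionInterceptor
where com.example.ReadOnlyExceptionInterceptor is a placeholder name for the class containing the method above.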
I am running a Meteor application on localhost, port 3000, and I can't connect R to its MongoDB. (I checked this code against a MongoDB running on port 27017 without Meteor, i.e. just the plain database, and it works properly.) Meteor creates its own database, called meteor, and my collections (including images in this sample) live inside it.
library(RMongo)
mongo <- mongoDbConnect("meteor", host="127.0.0.1", port=3000)   # error
# mongo <- mongoDbConnect("meteor", host="127.0.0.1", port=27017)   # this works (plain MongoDB without Meteor)
output <- dbGetQuery(mongo, 'images', '{}')
print(output)
I have this error:
Error in .jcall(rmongo.object@javaMongo, "S", "dbGetQuery", collection, :
com.mongodb.MongoException$Network: Read operation to server /127.0.0.1:3000 failed on database meteor
dbGetQuery ... dbGetQueryForKeys -> dbGetQueryForKeys -> .jcall -> .jcheck -> .Call
EDIT:
The same problem occurs with any other R package, e.g. mongolite:
No suitable servers found (serverSelectionTryOnce set): [connection closed calling ismaster on 'localhost:3000']
In case somebody else has this problem:
type:
meteor mongo -U
to get the URL of your database, then copy the host and port from that URL into your connection call.
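For a development Meteor app, the bundled MongoDB usually listens on the application port + 1 (3001 for an app on 3000), so meteor mongo -U typically prints something like mongodb://127.0.0.1:3001/meteor; the port here is an assumption about the default setup, so verify it with the command above. The RMongo call would then become:
mongo <- mongoDbConnect("meteor", host="127.0.0.1", port=3001)   # port taken from the URL printed by meteor mongo -U
output <- dbGetQuery(mongo, 'images', '{}')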
proenv>proserve dbname -S 2098 -H hostname -B 10000
OpenEdge Release 11.6 as of Fri Oct 16 19:02:26 EDT 2015
11:00:35 BROKER This broker will terminate when session ends. (5405)
11:00:35 BROKER The startup of this database requires 46Mb of shared memory. Maximum segment size is 1024Mb.
11:00:35 BROKER 0: dbname is a void multi-volume database. (613)
11:00:35 BROKER : Removed shared memory with segment_id: 39714816 (16869)
11:00:35 BROKER ** This process terminated with exit code 1. (8619)
I am getting the above error when I try to start the Progress database.
This is the problem:
11:00:35 BROKER 0: dbname is a void multi-volume database. (613)
My guess is you have just created the DB using prostrct create. You need to procopy an empty DB into your DB so that it has the schema tables.
procopy empty yourdbname
See: http://knowledgebase.progress.com/articles/Article/P7713
The database is void, which means it does not have any metaschema.
First create your database using the .st file (use prostrct create), then copy the metaschema tables using emptyn.
Example: procopy emptyn yourdbname
Then try to start your database.
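Putting the two answers together, the whole sequence looks roughly like this (the database name and startup parameters are taken from the question, and the .st file name is assumed; treat it as a sketch rather than a verified transcript):
proenv> prostrct create dbname dbname.st
proenv> procopy emptyn dbname
proenv> proserve dbname -S 2098 -H hostname -B 10000
The first command builds the physical structure from your .st file, the second copies the metaschema from the empty database, and only then will the broker start.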
I connect to 8 different Unix servers from Windows, using connection type 'SSH' in PuTTY. I use the same username/password for each server.
Currently, when I need to change passwords (every 60 days), I open PuTTY, select the session I want to connect to, type my current password (in the PuTTY window that opens), type "passwd", enter my current password, and then enter my new password.
Then I exit and repeat the process 7 times.
How can I convert this to an automated process where I simply need to supply a script/batch process with my old and new password?
Here is how I automated the process:
Download and install ActiveTcl Community Edition (download the 32-bit version even if you are on 64-bit Windows, as the 64-bit version does not have "Expect", which is what you need to run the automated script).
Open the tclsh85 executable that was created by the install
Run the command "teacup install Expect" (note: this is case sensitive; you may need to set up special HTTP settings if you receive an error and/or are on a VPN or behind a proxy).
Download Putty's "plink.exe" and either place it in the bin directory of ActiveTCL (default install directory is "C:\Tcl\bin") or alter your "Path" environment variable to include the path to this executable (wherever you downloaded plink.exe). This is the command-line version of Putty which your script will use.
Anywhere on your drive, create a text file named "servers.txt" with a list of the servers (one per line). They should all share the same password, as the script will log in to all of them with the password you supply and change it to the new one you supply.
In the same directory as "servers.txt", create a new text file called "ChangePassword.tcl" (or whatever you want to call it, but be sure its extension is .tcl). Right-click the file, edit it in Notepad (or whatever text editor you prefer), and paste in this script:
package require Expect

exp_log_user 0
set exp::nt_debug 1

# Log in to one host with plink, run passwd, and answer its prompts.
proc changepw {host user oldpass newpass} {
    spawn plink $host
    log_user 0
    expect {
        "login as: " { }
    }
    exp_send "$user\r"
    expect "sword: "           ;# matches "password: " as well as "Password: "
    exp_send "$oldpass\r"
    expect "\$ "               ;# wait for the shell prompt
    exp_send "passwd\r"
    expect "sword: "
    exp_send "$oldpass\r"
    expect "sword: "
    exp_send "$newpass\r"
    expect "sword: "
    exp_send "$newpass\r"
    set result $expect_out(buffer)
    exp_send "exit\r"
    return $result
}
# Simple Tk form: username, old password, new password, and a results window.
label .userlbl -text "Username:"
label .oldpasslbl -text "\nOld Password: "
label .newpasslbl -text "\nNew Password: "
set username "username"
entry .username -textvariable username
set oldpassword "oldpassword"
entry .oldpassword -textvariable oldpassword
set newpassword "newpassword"
entry .newpassword -textvariable newpassword
button .button1 -text "Change Password" -command {
    # Read the server list and change the password on each host in turn.
    set fp [open "servers.txt" r]
    set file_data [read $fp]
    close $fp
    set data [split $file_data "\n"]
    foreach line $data {
        .text1 insert end "Changing password for: $line\n"
        set output [changepw $line $username $oldpassword $newpassword]
        .text1 insert end "$output\n\n"
    }
}
text .text1 -width 50 -height 30
pack .userlbl .username .oldpasslbl .oldpassword .newpasslbl .newpassword .button1 .text1
Save the script and then launch the ChangePassword.tcl file.
Here is a picture of what it looks like when you open the ChangePassword.tcl file:
The rest should be self-explanatory. Note that the program does not tell you when your password change was successful, but it will tell you when it fails. Also note, this was my first Tcl script (and my first time using Expect), so the script is by no means "optimized" and could probably be improved, but it gets the job done. Feel free to edit it, or make suggestions/improvements.
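One small improvement along those lines (my suggestion, not part of the original script): splitting the contents of servers.txt on newlines usually leaves an empty element at the end of the list, so the loop ends up spawning plink with an empty host name. Skipping blank lines in the button's loop avoids that:
foreach line $data {
    if {[string trim $line] eq ""} { continue }   ;# skip blank lines and the trailing newline
    .text1 insert end "Changing password for: $line\n"
    set output [changepw $line $username $oldpassword $newpassword]
    .text1 insert end "$output\n\n"
}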
Sounds like you want Expect, an extension of TCL that can mimic typing at a keyboard for a console application. See the examples for how to do this.
Now there is something you've written that worries me:
I connect to 8 different unix servers, using connection type 'SSH' in putty. I use the same username/password for each server.
Why aren't you using SSH keys for automating the logon?
Great write-up! Just elaborating on step 3: please note the commands for providing proxy server information in case "teacup install Expect" fails due to a connectivity issue:
%teacup install Expect
Resolving Expect ... Not found in the archives.
...
Aborting installation, was not able to locate the requested entity.
child process exited abnormally
% teacup list teacup
0 entities found
Problems which occurred during the operation:
* http://teapot.activestate.com :
{connect failed connection refused} {can't read
"state(sock)": no such element in array while executing
"fileevent $state(sock) writable {}"} NONE
% teacup proxy "abcproxy.mycorp.com" 8080
Proxying through abcproxy.mycorp.com @ 8080
% set http_proxy_user MyNetworkID
MyNetworkID
% set http_proxy_pass MyNetworkPassword
MyNetworkPassword
% teacup list teacup
entity name version platform
----------- ------ --------------- ----------
application teacup 8.5.16.0.298388 win32-ix86
----------- ------ --------------- ----------
1 entity found
% teacup install Expect
Resolving Expect ... [package Expect 5.43.2 win32-ix86 @ http://teapot.activestate.com]
Resolving Tcl 8.4 -is package ... [package Tcl 8.6.1 _ ... Installed outside repository, probing dependencies]
Retrieving package Expect 5.43.2 win32-ix86 ...@ http://teapot.activestate.com ...
Ok
Installing into C:/app/Tcl/lib/teapot
Installing package Expect 5.43.2 win32-ix86
%