Has anyone successfully exported their db connections from MySQL Workbench into HeidiSql?
Workbench's export gives me XML files (Tools > Config > Backup Connections), but apparently HeidiSQL expects a .txt file to import.
Are the two simply not compatible?
I have a large number of connections to various DBs and I'm not looking forward to having to manually set them all up in HeidiSql :(
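The formats don't line up directly, but the Workbench XML export can at least be parsed to recover the connection fields, which you could then re-enter or feed into a script. A minimal sketch in Java; note the element and attribute names below (`struct-name='db.mgmt.Connection'`, keys like `hostName`) are assumptions about the export layout, so inspect your actual XML and adjust the XPath expressions to match:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;

public class WorkbenchExportReader {

    // Pulls (name, hostName, userName, port) tuples out of a Workbench-style
    // XML export. The struct-name and key values are assumptions -- check your
    // own connections XML and adjust the XPath accordingly.
    public static List<String[]> readConnections(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        XPath xp = XPathFactory.newInstance().newXPath();
        NodeList conns = (NodeList) xp.evaluate(
                "//value[@struct-name='db.mgmt.Connection']", doc, XPathConstants.NODESET);
        List<String[]> out = new ArrayList<>();
        for (int i = 0; i < conns.getLength(); i++) {
            Node c = conns.item(i);
            out.add(new String[] {
                xp.evaluate("value[@key='name']", c),      // connection caption
                xp.evaluate(".//value[@key='hostName']", c),
                xp.evaluate(".//value[@key='userName']", c),
                xp.evaluate(".//value[@key='port']", c),
            });
        }
        return out;
    }
}
```

From there you could print the tuples and re-create the sessions by hand, which at least avoids digging each host/user/port out of Workbench one by one.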
We have a requirement where one application (AppA) pushes files (sometimes 15K - 20K files in one go) to an SFTP folder. Another application (AppB) polls this folder location, reads the files, and upon reading pushes an Ack file to a different folder on the same SFTP server. There have been issues with files being lost and mismatches between files sent and Acks received. We now need to develop a mechanism (as an auditor) to monitor the SFTP location. The SFTP server runs on Windows. A single file will be less than 1 MB.
We are planning to adopt one of the following approaches:
Write an external utility in Java that keeps polling the SFTP location, downloads each file locally, reads its content, and stores it in a local DB for reconciliation.
Pro:
The utility is standalone, with no dependency on the SFTP server as such (apart from reading the files)
Con:
In addition to AppB, this utility will also be connecting to the SFTP server and downloading files; this may overload the server and hamper the regular functioning of AppA and AppB
Write a Java utility/script and deploy it on the SFTP server itself, either as a scheduled job or configured to listen on the respective folder. Upon reading a file locally on the SFTP server, this utility calls an external API to post the file's content and store it in a DB for reconciliation.
Pro:
There will be no overhead on SFTP server for connection and file download
File reading will be faster and near real time (if a listener is used)
Con:
Java needs to be installed on SFTP server
This utility will call the external API, and in the case of 15K - 20K files this will slow down the process of capturing the data and storing it in the DB
We are currently in the design phase and would appreciate suggestions and any insight from anyone who has implemented a similar mechanism.
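Whichever approach you pick, the reconciliation core is the same: record each file when the auditor first sees it, match it off when its Ack appears, and report whatever is left after a cutoff window. A minimal in-memory sketch in Java (class and method names are hypothetical, and a real auditor would persist to the local DB instead of a HashMap):

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SftpAuditSketch {

    // file name -> content fingerprint, recorded when the auditor sees a new file
    static final Map<String, String> sentFiles = new HashMap<>();

    // Fingerprint the content so a retransmitted file can be told apart
    // from a mere duplicate name.
    static String sha256(byte[] content) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(content)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    static void recordSent(String name, byte[] content) throws Exception {
        sentFiles.put(name, sha256(content));
    }

    // Called when the matching Ack file appears; returns false for an
    // Ack with no corresponding sent file (a mismatch worth reporting).
    static boolean markAcked(String name) {
        return sentFiles.remove(name) != null;
    }

    // Whatever is still pending after the cutoff window is a lost file.
    static Set<String> pendingFiles() {
        return new HashSet<>(sentFiles.keySet());
    }
}
```

With 15K - 20K files per batch, batching the DB writes (and, in approach 2, batching the API calls) would address the throughput concern listed under the cons.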
I'm trying to set up a POS system on two PCs locally.
How do I change these settings so the application connects and saves to the database remotely?
db_host localhost
db_hostaddr 127.0.0.1
db_name pos
db_password 123
db_port 5432
db_user postgres
db_driver SQLite
data_dir data
data_db pos_data.sdb
update_server 127.0.0.1
update_user max
update_pass 123
This 'config file' suggests there are at least two different DBMSes involved: PostgreSQL and SQLite. Surely there is more to it. A SQLite DB fits in a standalone file (plus temporary files). The PostgreSQL instance is probably running locally, hence localhost/127.0.0.1.
There is not enough information on that POS system. Maybe it's an open-source project with some documentation available; if it's a proprietary application there may still be documentation, otherwise you have to check with the vendor. Perhaps the source code (if available) has some useful comments too.
Actually, it's impossible to advise without knowing your network setup. It's not as easy as just moving the database to another machine. You have to know the IP address or host name of the server, and the firewall must be configured to allow connections to the database server.
Probably you change db_host and db_hostaddr, and test. But it's not clear what update_server is or what it does, nor what each database (PostgreSQL and SQLite) contains, or how the application is structured.
You need to provide a lot more information if you want help.
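Assuming it is the PostgreSQL database that has to be shared between the two PCs, a sketch of what the changed client-side entries might look like (the host name and address below are placeholders; update_server is left untouched because its role is unknown):

```
db_host pos-server.local
db_hostaddr 192.168.1.50
db_port 5432
```

On the server machine itself, PostgreSQL must also accept remote connections: set `listen_addresses` appropriately in postgresql.conf and add an entry in pg_hba.conf allowing the client's address or subnet to connect to the `pos` database, then restart the service.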
I have an SSIS 2012 package that, among other things, needs to write to nine different tabs in each of twenty different Excel 2010 files.
In one of the data flow tasks, when running the package within Visual Studio 2012, I get an error during validation:
Error: 0xC0014020 at MyPackage, Connection manager "Excel Files Whatever": An ODBC error -1 has occurred.
Error: 0xC0014009 at MyPackage, Connection manager "Excel Files Whatever": There was an error trying to establish an Open Database Connectivity (ODBC) connection with the database server.
The data flow task that generates this error would be writing to six tabs in each of the Excel files if it worked. With fewer files (four) in a previous version of this SSIS package, it worked fine. Also, another data flow task in the same package that writes to the other three tabs of each Excel file also works fine. The two data flow tasks are using the same connection managers. The specific connection manager named in the error changes each time the package is run.
I enabled ODBC tracing, and I found the following error in the log:
DIAG [08004] [Microsoft][ODBC Excel Driver] Too many client tasks. (-1036)
I found some documentation about ODBC destinations, which reads in relevant part:
There is no limitation on the number of ODBC destination components that can run in parallel against the same table or different tables, on the same machine or on different machines (other than normal global session limits).
However, limitations of the ODBC provider being used may restrict the number of concurrent connections through the provider. These limitations limit the number of supported parallel instances possible for the ODBC destination. The SSIS developer must be aware of the limitations of any ODBC provider being used and take them into consideration when building SSIS packages.
OK, great, but:
What is the limit on parallel connections in the 32-bit Excel driver contained in the Microsoft Access Database Engine 2010 Redistributable? Or is that even the problem?
If there is a limit on parallel connections, how can I force SSIS to honor the limit when running the package, including during the validation phase?
As additional info, I have DelayValidation set to True on all the ODBC connection managers. I have ValidateExternalMetadata set to False on all the ODBC destinations because the files do not exist yet when starting the package (a Copy Files task creates all the files earlier in the package). The connection string for each of the connection managers is generated by an expression, but the result is of the form
Dsn=Excel Files;dbq=c:\MyWorkspace\Whatever-20130701-to-20130917.xlsx;defaultdir=c:\MyWorkspace;driverid=1046;fil=excel 12.0;maxbuffersize=2048;pagetimeout=5;
in which only the file and directory names change due to parameters used in the expression.
I'm having a locking problem where an SQLite3 database is permanently locked when created on an NFS file system. I have read that an option called nobrl can help with this issue when the file system in question is CIFS (it's an option to the mount command).
From: http://linux.die.net/man/8/mount.cifs
nobrl
Do not send byte range lock requests to the server. This is
necessary for certain applications that break with cifs style
mandatory byte range locks (and most cifs servers do not yet support
requesting advisory byte range locks).
Is there any way to stop byte-range-lock requests in NFS if they occur, or am I running in the wrong direction by even thinking about this? I'm happy to change the mount command as was done for the CIFS solution.
I recommend opening your SQLite DB in software with the nolock parameter enabled; in Go, for example:
sql.Open("sqlite3", "file:/media/R/Databases//your.db?nolock=1")
where /media/R is a mounted Windows NFS network drive. Be careful: with nolock you have to serialize your DB interactions in software yourself, otherwise you could corrupt the DB when accessing it simultaneously.
You can read more about sqlite parameters here:
https://www.sqlite.org/c3ref/open.html
I'd like to use the MySQL ODBC driver to connect to my MySQL database from my own app.
The problem is that it seems very unstable; I keep getting errors like:
[MySQL][ODBC 5.1 Driver][mysqld-5.5.8]MySQL server has gone away
It seems to be something like a session timeout.
So here are my questions:
- What is causing those errors?
- Is there a way to fix it and get stable connections?
- Is it recommended at all for writing Windows software?
Thanks
My guess is you're opening the connection once and leaving it open. At some point, the connection either times out, or some network hiccup is causing the connection to be invalid/closed. The best way to do database access is to open the connection when you need to do work, then close it. Or, alternatively, change your code to support re-connecting when you encounter an error.
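The reconnect-on-error suggestion can be sketched generically. Assuming the work unit itself opens, uses, and closes the connection, a small retry wrapper (all names here are hypothetical, not from any driver API) might look like:

```java
import java.util.concurrent.Callable;

// Generic retry wrapper: redo the work when the driver reports a dropped
// connection. A real app would inspect the driver's error code (e.g.
// "server has gone away") before deciding to retry, and reopen the
// connection inside the work unit.
public class RetryingDb {
    public static <T> T withRetry(Callable<T> work, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (Exception e) {
                last = e; // connection presumed dead; next attempt reopens it
            }
        }
        throw last; // still failing after maxAttempts: surface the error
    }
}
```

The key point is that the open/use/close sequence lives inside the `Callable`, so every retry gets a fresh connection instead of reusing one that may have timed out.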
Based on discussion in the comments below, I would suggest dumping the Access database to a CSV file, then using something like PHPMySql to import the data into MySQL.
You can use the BigDump tool to import large databases dumps into MySQL. (via this site)
There are commercial alternatives out there -
OpenLink Single-tier ODBC Driver for MySQL