Export all query results to one file vs. separate files in MariaDB?

I am querying hundreds of different databases and wondering if there is any way to export the results into one file instead of one file per result. The result view is appended into one tab, and I was hoping "Export All" meant all results into one file rather than hundreds of files. Is this simply because I am querying different databases, or is there a workaround? Most solutions I've read about are for exporting backups, not the results of a query. Thank you for any help.
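If the databases all happen to live on the same MariaDB server and share the same table structure, one possible workaround is a single UNION ALL query across them written out with INTO OUTFILE, so the server produces one combined file. This is only a sketch: the database, table, and column names are made up, the statement would in practice be generated (e.g. from information_schema) rather than typed by hand, and the output path must be writable by the server, since INTO OUTFILE writes on the server host rather than the client.

    -- Hypothetical databases db001/db002 and table "orders".
    SELECT 'db001' AS source_db, t.* FROM db001.orders t
    UNION ALL
    SELECT 'db002' AS source_db, t.* FROM db002.orders t
    INTO OUTFILE '/tmp/all_results.csv'
        FIELDS TERMINATED BY ',' ENCLOSED BY '"'
        LINES TERMINATED BY '\n';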

Related

Combine file records with repeated IDs in BizTalk

I have another case where I don't know how to find a solution with BizTalk.
I have these two flat files (in reality there are 9 files to combine), and the output must look like what is shown in the picture:
How can I combine files whose ID repeats several times in the main file?
In the picture below, the main file is "People". Is there a way to do this without writing any code in BizTalk, or must I store this data in a SQL DB and then join it with a stored procedure?
Can you help me lay out the steps I need to take? I know how to combine files together, but only without the repeated IDs.

How to Combine multiple files in BizTalk?

I have multiple flat files (CSV), each with multiple records, and the files arrive in random order. I have to combine them (the records) using unique ID fields.
How can I combine them if there is no common unique field across all the files and I don't know which one will be received first?
Here are some example files:
In reality there are 16 files.
There are far more fields and records than in this example.
I would avoid trying to do this purely in XSLT/BizTalk orchestrations/C# code. These are fairly simple flat files. Load them into SQL and create a view to join your data up.
You can still use BizTalk to pick up and load the files. You can also still use BizTalk to execute the view or procedure that joins the data and sends your final message.
There are a few questions that might help guide how this would work here:
When do you want to join the data together? What triggers that (a time of day, a certain number of messages received, a certain type of message, a particular record, etc.)? How will BizTalk know when it has received enough of the right data to join?
What does a canonical version of this data look like? Does all of the data from all of these files truly get correlated into one entity (e.g. a "Trade" or a "Transfer" etc.)?
I'd probably start by defining the canonical entity, and then work towards getting a "complete" picture of that entity, using SQL for this kind of case.
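As a rough illustration of that staging-plus-view approach (the table and column names below are invented, not taken from the question), each flat file lands in its own staging table keyed on the shared ID, and one view joins everything back up:

    -- Hypothetical staging tables, one per flat file, keyed on a shared PersonID.
    CREATE TABLE stg_People    (PersonID INT PRIMARY KEY, FullName    VARCHAR(100));
    CREATE TABLE stg_Addresses (PersonID INT,             AddressLine VARCHAR(200));
    CREATE TABLE stg_Phones    (PersonID INT,             PhoneNumber VARCHAR(50));

    -- The view joins the data up; PersonID may repeat in the child tables,
    -- which is exactly the "repeated ID" case from the question above.
    CREATE VIEW vw_CombinedPeople AS
    SELECT p.PersonID, p.FullName, a.AddressLine, ph.PhoneNumber
    FROM stg_People p
    LEFT JOIN stg_Addresses a  ON a.PersonID  = p.PersonID
    LEFT JOIN stg_Phones    ph ON ph.PersonID = p.PersonID;

BizTalk's job then reduces to loading each file into its staging table and, once it decides the batch is complete, executing the view (or a stored procedure built on it) to produce the final message.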

Export SugarCRM Reports Setup

I'd like to export the definition of a report itself (as opposed to the report's results). Can this be done with SugarCRM? How can I copy a report from one SugarCRM instance to another?
Report data (i.e. the results) can be exported as a PDF or CSV. However, the report configuration, i.e. the filters, display columns, etc., cannot be exported through the interface. I believe the reason for this is that every system has its own set of fields and users that might be associated.
If you look in the database, though, you'll find the report configuration is stored in the table saved_reports. I don't see why you couldn't run something like mysqldump mydatabase saved_reports > reports.sql and import that (via SQL) into a clone of your system. Just be sure that 100% of the users, teams, and fields are duplicated in the second system, or you can expect issues.
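If you go that route, a quick sanity check after importing the table into the clone can catch the mismatches mentioned above. This is only a sketch and assumes the saved_reports rows carry an assigned_user_id that should match the users table; column names can differ between SugarCRM versions.

    -- Flag imported reports whose owning user does not exist in the target system.
    SELECT r.id, r.name, r.assigned_user_id
    FROM saved_reports r
    LEFT JOIN users u ON u.id = r.assigned_user_id
    WHERE u.id IS NULL;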

Best format to store incremental data in regularly using R

I have a database that is used to store transactional records. These records are created, and another process picks them up and then removes them. Occasionally this process breaks down and the number of records builds up. I want to set up a (semi-)automated way to monitor things, and as my tool set is limited and I have an R-shaped hammer, this looks like an R-shaped nail problem.
My plan is to write a short R script that will query the database via ODBC and then write a single record with the datetime, the number of records returned by the query, and the datetime of the oldest record. I'll then have a separate script that will process the data file and produce some reports.
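For what it's worth, the monitoring query itself might look something like the sketch below; the table and column names (transaction_queue, created_at) are placeholders, since the real schema isn't given.

    -- One row per check: time of the check, backlog size, and age of the oldest record.
    SELECT CURRENT_TIMESTAMP AS checked_at,
           COUNT(*)          AS pending_records,
           MIN(created_at)   AS oldest_record
    FROM   transaction_queue;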
What's the best way to create my data file? At the moment my options are:
Load a data frame, add the record, and then re-save it
Append a row to a text file (i.e. a CSV file)
Any alternatives, or a recommendation?
I would be tempted by the second option because, from a semantic point of view, you don't need the old entries in order to write the new ones, so there is no reason to reload all the data each time. Doing so would consume more time and resources.

What is the best way to export data from FileMaker Pro 6 to SQL Server?

I'm migrating/consolidating multiple FMP6 databases to a single C# application backed by SQL Server 2008. The problem I have is how to export the data to a real database (SQL Server) so I can work on data quality and normalisation, which will be significant: there are a number of repeating fields that need to be normalised into child tables.
As I see it there are a few different options, most of which involve either connecting to FMP over ODBC and using an intermediary to copy the data across (either custom code or MS Access linked tables), or exporting to a flat file format (CSV with no header, or XML) and then either using Excel to generate insert statements or writing some custom code to load the file.
I'm leaning towards writing some custom code to do the migration (like this article does, but in C# instead of Perl) over ODBC, but I'm concerned about the overhead of writing a migrator that will only be used once (as soon as the new system is up, the existing DBs will be archived)...
A few little joyful caveats: in this version of FMP there's only one table per file, and a single column may hold multi-value attributes separated by hex 1D, which is the ASCII group separator, of course!
Does anyone have experience with similar migrations?
I have done this in the past, but using MySQL as the backend. The method I use is to export as CSV or merge format and then use the LOAD DATA INFILE statement.
SQL Server may have something similar; maybe this link would help: BULK INSERT.
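For completeness, the SQL Server counterpart is BULK INSERT. The sketch below is illustrative only: the staging table name, file path, and delimiters are assumptions that would have to match whatever the FMP export actually produces.

    -- Load the headerless CSV exported from FileMaker into a staging table first,
    -- then normalise from there into the real child tables.
    BULK INSERT dbo.stg_FMPExport
    FROM 'C:\export\people.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR   = '\n',
        FIRSTROW        = 1          -- the export has no header row
    );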
