Create tables/rows in RAM using SQLite? - sqlite

I remember once I had temporary data used on a webpage. I used PHP and MySQL to create a table that stored visitors' IPs and the port they requested. I didn't care how long the rows lasted, because after 10 minutes or an hour the data would no longer be relevant. I can't remember how I did it, but using SQLite instead...
How do I create a table whose rows are meant to be stored in RAM only, for a limited amount of time (a few minutes), using SQLite? I'm using C# .NET.

You could use the following connection string: Data Source=:memory:;Version=3;New=True. It is important to note that an in-memory database exists only as long as the connection remains open.
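As a minimal sketch, assuming the System.Data.SQLite provider (which is what that connection string format suggests) and made-up table/column names: the database lives only while the connection is open, and expiring old rows after a few minutes is something you do yourself, e.g. with a DELETE run on a timer.

using System;
using System.Data.SQLite;

class InMemoryDbSketch
{
    static void Main()
    {
        // The in-memory database exists only while this connection stays open.
        using (var conn = new SQLiteConnection("Data Source=:memory:;Version=3;New=True"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "CREATE TABLE visitors (ip TEXT, port INTEGER, seen_at TEXT DEFAULT CURRENT_TIMESTAMP)";
                cmd.ExecuteNonQuery();

                cmd.CommandText = "INSERT INTO visitors (ip, port) VALUES ('203.0.113.5', 8080)";
                cmd.ExecuteNonQuery();

                // Purge rows older than 10 minutes; run this periodically.
                cmd.CommandText = "DELETE FROM visitors WHERE seen_at < datetime('now', '-10 minutes')";
                cmd.ExecuteNonQuery();
            }
        } // closing the connection discards the whole database
    }
}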

Related

Query execution is slow in Teradata FastExport compared to Default Connection

I have a question about how to use FastExport in a production environment.
I have a file with a million rows and am using FastLoad to upload it to Teradata, which takes almost a minute, and then I do some business logic. But when I use FastExport to download the table to my local machine/node, it takes a very long time compared to the default connection.
The connection URI I am using is:
jdbc:teradata://server/TMODE=ANSI,CHARSET=UTF8,TYPE=FASTEXPORT
Am I missing anything here? Please suggest.
Thanks

Efficient method to move data from Oracle (SQL Developer) to MS SQL Server

Daily, I query a few tables in SQL Developer, filtering to the prior day's activity and adding a column to date-stamp the data, then export to .xlsx. Then I manually import each file into MS SQL Server via the SQL Server Import and Export Wizard. It takes many clicks and much waiting...
I'm essentially creating an archive in SQL Server; the application I'm querying overwrites its data daily. I'm not a DBA of either database, and I use the archived data to do validations and research.
It's tough to get my org to provide additional software, so I've been trying to make this work with SQL Developer, SSMS Express edition, and other standard tools.
I'm looking to make this reasonably automated, either via scripts, scheduled tasks, etc. I'd appreciate suggestions that would work in my current situation, but if that isn't feasible and there's a very reasonable alternative, I can go back to the org to request software/access/assistance.
You can use SSIS to import the data directly from Oracle to SQL Server, unless you need the .xlsx files for another purpose. If you do need the files, you can export from Oracle to .xlsx first and then load into SQL Server from those files.
For the date-stamp column, a Derived Column can be added within a Data Flow Task using the SSIS GETDATE() function. That function returns a timestamp; if only the date is necessary, the (DT_DBDATE) cast can convert it to a date type that is compatible with SQL Server (a small expression sketch follows the links below).
Once you have the SSIS package configured, you can schedule it to run at regular intervals as a SQL Agent job. I'd also recommend installing the SSIS catalog (SSISDB) and using it as the source to run the packages from. The following links shed more light on these areas.
SSIS
Connecting to Oracle from SSIS
Data Flow Task
Derived Column Transformation
Creating SQL Server Agent Jobs for SSIS packages
SSIS Catalog
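As a rough sketch of the Derived Column described above (the derived column name, e.g. ArchiveDate, is just an example), a date-only stamp would use the expression
(DT_DBDATE)GETDATE()
while leaving it as plain GETDATE() keeps the full timestamp.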
Another option you may consider (if it is supported in SQL Server Express) is the BCP utility, which can be run from the command line.
The BCP utility allows you to bulk copy data from a delimited text file into a SQL Server table.
If you go with this approach, things to consider:
The number of columns in the source file needs to match the number of columns in the destination table.
Data types must match (or be compatible).
Typically, empty strings will be converted to NULLs, so you will need to consider whether the columns are nullable.
(To name a few - if you want to delve deeper, you might also need to look at custom delimiters between fields and records. Don't forget, commas and line feeds are still valid characters in char-type fields.)
Anyhow, maybe it will work for you, maybe not. Sure, you might still have to deal with exporting the data from Oracle, but it might ease the pain of getting the data in.
Have a read:
https://learn.microsoft.com/en-us/sql/tools/bcp-utility?view=sql-server-2017
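As a rough sketch of an import (server, database, table, and file names here are made up), a character-mode load with a pipe delimiter might look like:
bcp ArchiveDB.dbo.DailyActivity in C:\exports\daily_activity.txt -S MYSERVER -T -c -t "|" -r "\n"
-T uses Windows authentication; swap in -U and -P if you log in with SQL authentication.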

How to transfer data from SQL Server to Informix and vice versa

I want to transfer table data from SQL Server to Informix and vice versa.
The transfer should run on a schedule, and sometimes when the user performs a specific action.
I currently do this operation through delete and insert transactions, and it takes a long time over the web, between 15 and 30 minutes.
How can I do this in an easier way, taking performance into consideration?
Say I have:
a Vacation table in SQL Server, and I want to transfer all the updated data to the Vacation table in Informix,
and
a Permission table in Informix, and I want to transfer all the updated data to the Permission table in SQL Server.
DISCLAIMER: I am not an SQL Server DBA. However, I have been an Informix DBA for over ten years and can make some recommendations as to its performance.
Disclaimer aside, it sounds like you already have a functional application, but the performance is a show-stopper and that is where you are mainly looking for advice.
There are some technical pieces of information that would be helpful to know, but in their absence, I'm going to make the following assumptions about your environment and application. Please comment or edit your question if I am wrong on any of these.
Database server versions. From the tags, it appears you are using SQL Server 2012. However, I cannot determine the Informix server edition and version, so I will assume you are running at least IDS 11.50.
How the data is being exchanged currently. Are you connecting directly from your .NET application to Informix? I would assume that is the case with SQL Server and will make the same assumption for your Informix connection as well.
Table structures. I assume you have proper indexing on the tables. On the Informix side, dbschema -d dbname -t tablename will give the basic schema.
If you haven't tried exporting the data to CSV, and as long as you don't have any compliance concerns with doing so, I would suggest loading the data from a delimited file. (Informix normally deals with pipe-delimited files, so you'll either need to change the delimiter on the SQL Server export side to a pipe |, or specify the delimiter on the Informix import side.) On the Informix end, this would be:
LOAD FROM 'source_file_from_sql_server' DELIMITER '|' INSERT INTO vacation (field1, field2, ..)
For reusability, I would recommend putting this in a stored procedure; just wrap the load statement between BEGIN WORK; and COMMIT WORK; to keep your transactional integrity. Michał Niklas suggested some ways to track changes. If there is any correlation between the transfer of data to the vacation table in Informix and the permission table back in SQL Server, I would propose another option: add a trigger to the vacation table so that all new values are also written to a staging table.
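A minimal sketch of such a trigger, with a hypothetical vacation_staging table and made-up column names (adjust to the real schema):
CREATE TRIGGER vacation_new_rows
    INSERT ON vacation
    REFERENCING NEW AS n
    FOR EACH ROW
    (
        INSERT INTO vacation_staging (emp_id, start_date, end_date)
            VALUES (n.emp_id, n.start_date, n.end_date)
    );
A similar trigger on UPDATE would be needed if changed rows also have to flow back.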
With the import logic in a stored procedure, you can fire the import on demand:
EXECUTE PROCEDURE vacation_import();
You also mentioned the need to schedule the import, which can be accomplished with Informix's "dbcron" (the task scheduler). Using this feature, you can create a scheduled task that executes vacation_import() periodically. If you haven't used this feature before, the OpenAdmin Tool (OAT) makes it easier to set up. You will also want to do some housekeeping with the CSV files; this can be addressed with the SYSTEM statement, which you can use from stored procedures in Informix.
Some ideas:
Add a was_transferred column to the source tables, setting its default value to 0 (you can use 0/1 instead of false/true).
From the source table, select the data with was_transferred = 0.
After transferring the data, update the selected source rows, setting was_transferred to 1.
Make a table syncro_info with fields like date_start and date_stop. If you find a record with date_stop IS NULL, it means a transfer is already in progress; this protects you against synchronizing data twice.
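A rough sketch of the flag-column idea in generic SQL (table, column, and key names are just examples):
ALTER TABLE vacation ADD was_transferred SMALLINT DEFAULT 0;
SELECT * FROM vacation WHERE was_transferred = 0;
UPDATE vacation SET was_transferred = 1 WHERE vacation_id = ?;  -- run for each row (or batch) actually transferred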

SQLite3 - SQL datetime stamps

Am I right in thinking that SQL doesn't intrinsically store a date/time stamp for each commit, i.e., that I have to allow for recording this information as part of my schema design?
It will take a certain amount of space to store this information explicitly (using CURRENT_TIMESTAMP or my own timestamp), and if there's a way of accessing comparable information in some internal database setting, I'd do that instead. I'm working with SQLite3 at the moment.
SQLite does not have any internal time stamps.
(SQLite database files do not even keep any record of transactions once they are committed.)
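So any time stamp has to live in your own schema. A minimal SQLite sketch, with made-up table and column names:
CREATE TABLE events (
    id         INTEGER PRIMARY KEY,
    payload    TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);
Each inserted row then carries its own creation time, at the cost of the extra column.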

Aggregating multiple distributed MySQL databases

I have a project that requires us to maintain several MySQL databases on multiple computers. They will have identical schemas.
Periodically, each of those databases must send their contents to a master server, which will aggregate all of the incoming data. The contents should be dumped to a file that can be carried via flash drive to an internet-enabled computer to send.
Keys will be namespaced, so there shouldn't be any conflicts there, but I'm not totally sure of an elegant way to design this. I'm thinking of timestamping every row and running the query "SELECT * FROM [table] WHERE timestamp > last_backup_time" on each table, then dumping this to a file and bulk-loading it at the master server.
The distributed computers will NOT have internet access. We're in a very rural part of a 3rd-world country.
Any suggestions?
Your
SELECT * FROM [table] WHERE timestamp > last_backup_time
will miss DELETEd rows.
What you probably want to do is use MySQL replication via USB stick. That is, enable the binlog on your source servers and make sure the binlog files are not thrown away automatically. Copy the binlog files to the USB stick, then run PURGE MASTER LOGS TO ... to erase them on the source server.
On the aggregation server, turn the binlog into an executable script using the mysqlbinlog command, then import that data as an SQL script.
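A rough sketch of that step (file paths, binlog file names, and the user are made up):
# on the aggregation server, after mounting the USB stick
mysqlbinlog /mnt/usb/mysql-bin.000041 /mnt/usb/mysql-bin.000042 > changes.sql
mysql -u aggregation_user -p < changes.sql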
The aggregation server must have a copy of each source server's database, but it can keep that copy under a different schema name as long as all of your SQL uses unqualified table names (never uses schema.table syntax to refer to a table). The import of the mysqlbinlog-generated script (with a proper USE command prefixed) will then mirror the source server's changes on the aggregation server.
Aggregation across all databases can then be done using fully qualified table names (i.e. using schema.table syntax in JOINs or INSERT ... SELECT statements).
