Accessing a SQLite DB from Visual FoxPro (VFP9)

I am using Visual FoxPro (VFP9). I have a SQLite database file stored on my server, and I want to access it from VFP the same way PHP does. How can I achieve this?
Thanks in advance.

First, get and install the SQLite ODBC driver if you haven't done so yet:
SQLite ODBC driver download
Then it is easy: you simply use one of the usual techniques for accessing external data (SQL pass-through, remote views, CursorAdapter). For example, with SQL pass-through:
Local dbName, handle, lcSQL, ix, cSample
dbName = 'd:\temp\MyDb.s3db'

* Connect through the SQLite3 ODBC driver
handle = Sqlstringconnect( Textmerge("driver={SQLite3 ODBC Driver};Database=<< m.dbName >>") )
SQLExec(m.handle, "create table mySampleTable (id int primary key, dummy varchar(50))")

* Parameterized insert; ?m.ix and ?m.cSample are bound at execution time
TEXT to lcSQL noshow
insert into mySampleTable
  (id, dummy)
values
  (?m.ix, ?m.cSample)
ENDTEXT

For ix = 1 To 10
    cSample = 'Dummy no ' + Ltrim(Str(m.ix))
    SQLExec(m.handle, m.lcSQL)
Endfor

* Pull the rows back into a VFP cursor named "sample"
SQLExec(m.handle, 'select * from mySampleTable', 'sample')
SQLDisconnect(0)  && 0 closes all open handles

Select Sample
Browse
Note: normally, when inserting in a loop, you would use SQLPrepare() and a transaction for performance; I didn't bother with that here (see the sketch below).
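A minimal sketch of that variant, assuming the same m.handle and m.lcSQL as above (SQLSetProp's Transactions values: 1 = automatic, 2 = manual):
* Hedged sketch: prepare once, execute many times, commit once.
SQLSetProp(m.handle, 'Transactions', 2)   && switch to manual transactions
SQLPrepare(m.handle, m.lcSQL)             && parse/prepare the statement once
For ix = 1 To 10
    cSample = 'Dummy no ' + Ltrim(Str(m.ix))
    SQLExec(m.handle)                     && executes the prepared statement
Endfor
SQLCommit(m.handle)                       && commit all inserts at once
SQLSetProp(m.handle, 'Transactions', 1)   && back to automatic mode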

Related

Retrieve a specific DB project and its tables from SQL Server in R

I am new to using SQL Server from RStudio. I am connected to SQL Server from RStudio using the odbc library, and the server lists several different projects. I am trying to retrieve the tables of a specific project (Project_3690). I tried dbListTables(conn, "Project_3690"), but that command retrieves the tables from all the projects on the server. I just want to retrieve the tables listed under dbo in Project_3690.
Thanks
Click on the arrow to the left of the dbo object under Project_3690, and it should show you the tables you have access to. If it does not, then you have a permissions problem and will need to talk with the DBA. That allows you to see them via the GUI. In fact, if you don't already know the names of the tables you should be accessing (such as to follow my code below), then this is the easiest, as they are already filtering out the system and other tables that obscure what you need.
To see them in R code, then dbListTables(conn) will show you all tables, including the ones in the Connections pane I just described but also a lot of system and otherwise-internal tables that you don't need. On my SQL Server instance, it returns over 600 tables, so ... you may not want to do just that, but you can look for specific tables.
For example, if you know you should have tables Table_A and Table_B, then you could do
alltables <- dbListTables(conn)
grep("table_", alltables, value = TRUE, ignore.case = TRUE)
to see all of the table names containing that string.
If you do not see tables that you know you need to access, then it is likely that your connection code did not include the specific database, in which case you need something like:
conn <- dbConnect(odbc(), database="Project_3690", uid="...", pwd="...",
server="...", driver = "...")
(Most fields should already be in your connection code; don't use literal ... for your strings.)
One can use a system table to find the other tables:
DBI::dbGetQuery(conn, "select * from information_schema.tables where table_type = 'BASE TABLE' and table_schema = 'dbo'")
#   TABLE_CATALOG TABLE_SCHEMA TABLE_NAME TABLE_TYPE
# 1  Project_3690          dbo    Table_A BASE TABLE
# 2  Project_3690          dbo    Table_B BASE TABLE
# 3  Project_3690          dbo    Table_C BASE TABLE
(Notional output but representative of what it should look like.)
It's not quite straightforward to retrieve data from SQL Server in RStudio when several databases, each with its own schemas, live on the same server. It is easy to view the connected databases with their schemas in SQL Server Management Studio, but not in RStudio. The easiest way is to qualify the table with the dot (.) operator, i.e. database.schema.table, in the query you pass to dbGetQuery. I tried
dbGetQuery(conn, "select * from Project_3690.dbo.AE_ISD")
and it works perfectly fine.

How to use DBI::dbConnect() to read and write tables from multiple databases

I have a Netezza SQL server I connect to using DBI::dbConnect. The server has multiple databases we will name db1 and db2.
I would like to use dbplyr as much as possible and skip having to write SQL code in RODBC::sqlQuery(), but I am not sure how to do the following:
1) How do I read a table in db1, work on it, and have the server write the result into a table in db2 without going through my desktop?
2) How do I do a left join between a table in db1 and another in db2?
It looks like there might be a way to connect to database = "SYSTEM" instead of database = "db1" or "db2", but I am not sure what the next step would be.
con <- dbConnect(odbc::odbc(),
                 driver   = "NetezzaSQL",
                 database = "SYSTEM",
                 uid      = Sys.getenv("netezza_username"),
                 pwd      = Sys.getenv("netezza_password"),
                 server   = "NETEZZA_SERVER",
                 port     = 5480)
I work around this problem on SQL Server using in_schema and dbExecute as follows; I'm assuming Netezza is not too different.
Part 1: shared connection
The first problem is to connect to both tables via the same connection. If we use a different connection then joining the two tables results in data being copied from one connection to the other which is very slow.
con <- dbConnect(...) # as required by your database
table_1 <- dplyr::tbl(con, from = dbplyr::in_schema("db1", "table_name_1"))
table_2 <- dplyr::tbl(con, from = dbplyr::in_schema("db2.schema2", "table_name_2"))
While in_schema is intended for passing schema names, you can also use it to pass the database name (or both, separated by a dot).
The following should now work without issue:
# check connection
head(table_1)
head(table_2)
# test join code
left_join(table_1, table_2, by = "id") %>% show_query()
# check left join
left_join(table_1, table_2, by = "id") %>% head()
Part 2: write to database
A remote table is defined by two things:
The connection
The code of the current query (e.g. the result of show_query)
We can use these with dbExecute to write to the database. My example is for SQL Server (which uses INTO as the keyword; you'll have to adapt it if your environment's SQL syntax is different).
# optional, extract connection from table-to-save
con <- table_to_save$src$con
# SQL query
sql_query <- paste0("SELECT *\n",
                    "INTO db1.new_table\n",  # the database and name you want to save
                    "FROM (\n",
                    dbplyr::sql_render(table_to_save),
                    "\n) subquery_alias")
# run query
dbExecute(con, as.character(sql_query))
The idea is to create a query that can be executed by the database that will write the new table. I have done this by treating the existing query as a subquery of the SELECT ... INTO ... FROM (...) subquery_alias pattern.
Notes:
If the SQL query produced by show_query or sql_render would work when you access the database directly, then the above should work (all that changes is that the command arrives via R instead of via the SQL console).
The functions I have written to smooth this process for me can be found here. They also include appending, deleting, compressing, indexing, and handling views.
Writing a table via dbExecute will error if the table already exists in the database, so I recommend checking for this first (a minimal check is sketched after these notes).
I use this work around in other places, but inserting the database name with in_schema has not worked for creating views. To create (or delete) a view I have to ensure the connection is to the database where I want the view.
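For that pre-check, here is a minimal sketch using DBI; how the DBI::Id components (catalog/schema/table) map to db1 depends on your driver, so treat the names as illustrative:
# Hedged sketch: refuse to overwrite an existing table before writing.
# Assumes `con` and `sql_query` from above; db1.new_table is illustrative.
if (DBI::dbExistsTable(con, DBI::Id(catalog = "db1", table = "new_table"))) {
  stop("db1.new_table already exists; drop it or choose another name")
}
DBI::dbExecute(con, as.character(sql_query))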

Export Multiple Tables Data as Insert Statements into single file Oracle DB [duplicate]

The only thing I don't have an automated tool for when working with Oracle is a program that can create INSERT INTO scripts.
I don't desperately need it so I'm not going to spend money on it. I'm just wondering if there is anything out there that can be used to generate INSERT INTO scripts given an existing database without spending lots of money.
I've searched through Oracle with no luck in finding such a feature.
It exists in PL/SQL Developer, but it errors on BLOB fields.
Oracle's free SQL Developer will do this:
http://www.oracle.com/technetwork/developer-tools/sql-developer/overview/index.html
You just find your table, right-click on it, and choose Export Data->Insert.
This will give you a file with your insert statements. You can also export the data in SQL Loader format.
You can do that in PL/SQL Developer v10.
1. Click on the table that you want to generate a script for.
2. Click Export data.
3. Check that the table you want to export data for is selected.
4. Click on the SQL inserts tab.
5. Add a where clause if you don't need the whole table.
6. Select the file where your SQL script will be saved.
7. Click Export.
Use a SQL function (I'm the author):
https://github.com/teopost/oracle-scripts/blob/master/fn_gen_inserts.sql
Usage:
select fn_gen_inserts('select * from tablename', 'p_new_owner_name', 'p_new_table_name')
from dual;
where:
p_sql – dynamic query which will be used to export metadata rows
p_new_owner_name – owner name which will be used for generated INSERT
p_new_table_name – table name which will be used for generated INSERT
p_sql in this sample is 'select * from tablename'
You can find original source code here:
http://dbaora.com/oracle-generate-rows-as-insert-statements-from-table-view-using-plsql/
Ashish Kumar's script generates individually usable insert statements instead of a SQL block, but supports fewer datatypes.
I have been searching for a solution for this and found it today. Here is how you can do it:
Open Oracle SQL Developer Query Builder
Run the query
Right-click on the result set and export
http://i.stack.imgur.com/lJp9P.png
You might execute something like this in the database:
select 'insert into targettable(field1, field2, ...) values (' || field1 || ', ' || field2 || ... || ');'
from targettable;
(Note the single quotes: in Oracle, double quotes denote identifiers, not string literals; character columns would additionally need quoting and escaping.)
Something more sophisticated is here.
If you have an empty table, the Export method won't work. As a workaround, I used the Table View of Oracle SQL Developer, clicked on Columns, sorted by Nullable so NO was on top, and then selected those non-nullable columns using Shift+select for the range.
This allowed me to do one base insert, so that Export could prepare a proper all-columns insert.
If you have to load a lot of data into tables on a regular basis, check out SQL*Loader or external tables; they should be much faster than individual INSERTs. A minimal external-table sketch follows.
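For instance (assumes a comma-separated file and an existing Oracle directory object; all names are illustrative):
-- Hedged sketch: expose a CSV as a read-only table, then load it set-based.
-- data_dir must be an existing directory object the database can read from.
CREATE TABLE experiments_ext (
  id   NUMBER,
  name VARCHAR2(50)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('experiments.csv')
);

-- One set-based load instead of row-by-row inserts (target table assumed to exist)
INSERT INTO experiments SELECT id, name FROM experiments_ext;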
You can also use MyGeneration (a free tool) to write your own SQL generation scripts. There is an "insert into" script for SQL Server included in MyGeneration, which can easily be changed to run under Oracle.

Getting datatypes from Paradox DB over ODBC into SQLite [Delphi]

I'm connecting to a .dbf using ODBC in Delphi with FireDAC. I've set up an ODBC connection, dBase 5.0, using the 32-bit "Driver do Microsoft dBase (.dbf)" driver.
In my IDE (RAD Studio 10.1 Berlin), I've set up the ODBC connection as a data source. The ODBCAdvanced connection string is DefaultDir=%s;DriverId=533;MaxBufferSize=2048;PageTimeout=5, where %s is the correct directory.
I managed to copy a table's structure to a SQLite db using TFields (code roughly as follows).
FieldNames := TStringList.Create;
PDOXTable.GetFieldNames(FieldNames);
FieldNames.Delimiter := ';';

FieldList := TList<TField>.Create;
PDOXTable.GetFieldList(FieldList, FieldNames.DelimitedText);

TempTable := TFDTable.Create(nil);
TempTable.Connection := TempConn;
TempTable.TableName := DataTable.TableName;
for I := 0 to FieldList.Count - 1 do
  TempTable.Fields.Add(FieldList.Items[I]);
TempTable.CreateTable(True, [tpTable, tpTriggers, tpIndexes]);
However, the data types come out different, and I don't get primary keys, NOT NULL constraints, or 'dflt_value' entries, all of which I got when I manually exported these same tables using an application called Exportizer (http://www.vlsoftware.net/exportizer/); it has a command-line client, but I'm not sure I'll be able to bundle it with my application.
What is a reasonable way of copying a table from a Paradox .dbf to a SQLite db while preserving as much of the data types and constraints as possible?
Use TFDBatchMove. SQLite is typeless, but FireDAC has its own pseudo data type mapping with which you might be able to preserve a lot of the original data types. And if the inferred types are not exactly to your liking, you can define custom Mappings. A minimal sketch follows.
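Something like this, assuming the PDOXTable and TempConn from the question (everything else is illustrative, and FireDAC's default writer behavior is assumed to create the missing target table):
uses
  FireDAC.Comp.BatchMove, FireDAC.Comp.BatchMove.DataSet,
  FireDAC.Comp.BatchMove.SQL;
var
  Move: TFDBatchMove;
  Reader: TFDBatchMoveDataSetReader;
  Writer: TFDBatchMoveSQLWriter;
begin
  Move := TFDBatchMove.Create(nil);
  try
    Reader := TFDBatchMoveDataSetReader.Create(Move);
    Reader.DataSet := PDOXTable;          // source: the Paradox/dBase table
    Writer := TFDBatchMoveSQLWriter.Create(Move);
    Writer.Connection := TempConn;        // destination: the SQLite connection
    Writer.TableName := DataTable.TableName;
    Move.Reader := Reader;
    Move.Writer := Writer;
    Move.Execute;                         // maps the types and copies all rows
  finally
    Move.Free;
  end;
end;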

SQL scripts act on master database instead of practice database

I wrote some sql scripts to create a database and store data. I just noticed that the new tables and data are going to the master database.
I found that I can address the correct database if I scope out the database like so:
CREATE TABLE Practice1.dbo.Experiments
(
ID int IDENTITY (100,1) PRIMARY KEY,
CompanyName nvarchar (50)
)
but I'd rather not have to scope out each command. Is there a way to set the database in the script so I don't have to scope everything out?
INSERT INTO Practice1.dbo.EXPERIMENTS
VALUES
(
'hello world'
)
SELECT * FROM Practice1.dbo.EXPERIMENTS
There is a drop-down list on the toolbar that lets you select which database the script executes against. Also, you can state the database to use at the top of your script:
USE {database}
Reference: http://doc.ddart.net/mssql/sql70/ua-uz_7.htm
On SQL Server 2005, to switch the database context, use the command:
USE DatabaseName
in the samples above, the database name is Practice1, hence:
USE Practice1
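Putting it together, a short sketch of the script from the question with USE at the top (GO is the SSMS batch separator), after which the three-part names are no longer needed:
USE Practice1;
GO

CREATE TABLE dbo.Experiments
(
    ID int IDENTITY (100,1) PRIMARY KEY,
    CompanyName nvarchar (50)
);

INSERT INTO dbo.Experiments VALUES ('hello world');
SELECT * FROM dbo.Experiments;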
