Batch filling in lookups in MS Access - ms-access-2010

I'm copying quite a lot of data from Excel into Access. The trouble is, I have a lookup field and I must select a value from it for each new row. There are about 1000 rows, and I wondered whether I could somehow fill in those lookups automatically.

Create a linked table to the Excel file. Let's call it ExcelLinked.
Create a query with ExcelLinked and your lookup table, let's call it tblLookupItems.
The query will be:
INSERT INTO TargetTable SELECT ExcelLinked.*, tblLookupItems.ID FROM ExcelLinked INNER JOIN tblLookupItems ON ExcelLinked.LookupItem = tblLookupItems.LookupItem;
However, if there are values in the Excel file that do not exist in the lookup table, you will have to decide whether you are willing to forgo those rows or use a lookup ID of 0, in which case you should use a LEFT JOIN in the SQL query.
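For the LEFT JOIN variant, a minimal sketch (assuming the target lookup column accepts 0, using Access's Nz function to supply that default, and with the LookupID alias and other column names being the illustrative ones from above):
INSERT INTO TargetTable
SELECT ExcelLinked.*, Nz(tblLookupItems.ID, 0) AS LookupID
FROM ExcelLinked LEFT JOIN tblLookupItems
ON ExcelLinked.LookupItem = tblLookupItems.LookupItem;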

Related

Multi-row and multi-column update using R in a postgres database

Problem: I want to update values (in multiple rows and columns) of a table in a postgres database using R.
I know that the SQL UPDATE statement can look something like this, but I assume looping over a set of such queries is inefficient:
UPDATE table
SET col1 = value1, col2 = value2, ...
WHERE col1 = 'some-value'
Question: Is there a function available to only update particular rows (and potentially only a subset of the columns) of the table (similar to dbWriteTable)? If not, can you think of an efficient way/sql query of updating multiple rows in postgres and how to hand over the R object to the sql query?
EDIT: Assuming I have a foreign key and I don't want to turn on the ON DELETE CASCADE option, how could I efficiently update values in multiple rows and columns of the parent table (I only want to update the parent table, not the child table)?
First, I would read in the data that needs to be updated from SQL to R. Second, I would delete the data to be updated in SQL. Third, I would append the updated data from R to SQL.
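The answer above describes a delete-and-append workflow. If deleting parent rows is awkward because of the foreign key, a different technique worth sketching is a single set-based UPDATE joined against a staging table holding the updated rows (written from R first, for example with dbWriteTable). The names below (parent_table, staging_table, id, col1, col2) are illustrative assumptions:
-- one round trip updates many rows and columns at once
UPDATE parent_table AS p
SET col1 = s.col1,
    col2 = s.col2
FROM staging_table AS s
WHERE p.id = s.id;

-- the staging table can be dropped afterwards
DROP TABLE staging_table;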

Teradata: get the columns/fields used in join and where conditions, and their respective tables, without parsing the query

I am trying to automate some performance checks on queries in Teradata.
As part of that, I want to check whether the columns used in the join conditions are the primary index of the respective table, and similarly whether the columns used in the where condition are partitioning columns of the respective table. Is there any Teradata query that can give this directly, without parsing the whole query?
Yes, there are two DBC views you can query:
dbc.columnsv
dbc.indicesv
Primary index information is stored in the second view; just search with your table name and database name.
Partitioning information is stored in columnsv; there is a column with a flag value of 'Y' for partitioning columns.
Example :
SELECT DATABASENAME, TABLENAME, COLUMNNAME FROM DBC.COLUMNSV WHERE PARTITIONINGCOLUMN='Y' AND TABLENAME=<> AND DATABASENAME=<>;
Select * from dbc.indicesv where tablename=<> and databasename=<>;
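As an extra, assumption-labeled refinement: DBC.IndicesV also has an IndexType column, so the output can be narrowed to primary index columns only. The 'P' and 'Q' codes below are my assumption for nonpartitioned and partitioned primary indexes; verify them against your DBC documentation before relying on them.
SELECT DatabaseName, TableName, ColumnName, ColumnPosition
FROM DBC.IndicesV
WHERE TableName=<> AND DatabaseName=<>
AND IndexType IN ('P', 'Q')  -- assumed codes: 'P' = primary index, 'Q' = partitioned primary index
ORDER BY ColumnPosition;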

Create temporary table

I'm coming from a SQL Server environment, where you can declare a temp table with #table, but as I've read you can't do this in Oracle.
I want to get a value for 500,000 hardcoded IDs from a table, but as the IN clause has a limit of 1000 items I need to find another way. Is the best way to create a temporary table, insert the hardcoded values and then join the other table which contains the values I need?
My client (Toad) has autocommit set to off and I don't want to commit anything. I want it to be session-based, so when I close the database client I want the temporary table to disappear. Is the code below the right way to do this in Oracle?
CREATE GLOBAL TEMPORARY TABLE Test(HardcodedId number(10))
ON COMMIT DELETE ROWS;
I've also tried to use an inner join and, in the join, select the hardcoded values from dual, but this creates a column for each value and I'm not able to use a reference to join with. Is it possible to get all the values into a single column from dual?
You can use something like this (one UNION ALL branch per value):
select * from (
select '1' as id from dual
union all
select '2' as id from dual
...) q
Then you can join this with other tables.
For your situation, I would use a GTT (global temporary table) - which you have already researched by the looks.
The advantage of a GTT is that it's a permanent object (so no need to constantly create and drop it) and the data "stored" in it is on a session basis.
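A minimal sketch of the GTT route, under a couple of assumptions: ON COMMIT PRESERVE ROWS is used so the rows stay for the whole session (they vanish when the client disconnects), and the temporary table name plus other_table/id are illustrative:
CREATE GLOBAL TEMPORARY TABLE tmp_hardcoded_ids (
  hardcoded_id NUMBER(10)
) ON COMMIT PRESERVE ROWS;  -- rows are private to the session and cleared automatically at session end

INSERT INTO tmp_hardcoded_ids (hardcoded_id) VALUES (1);
-- ... repeat or batch-insert the remaining IDs ...

SELECT t.*
FROM other_table t
JOIN tmp_hardcoded_ids i ON i.hardcoded_id = t.id;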

sqlite3 - the philosophy behind sqlite design for this scenario

Suppose we have a file with just one table named TableA, and this table has just one column named Text;
let's say we populate our TableA with 3,000,000 strings like these (each line a record):
Many of our patients are incontinent.
Many of our patients are severely disturbed.
Many of our patients need help with dressing.
If I save the file at this point, it'll be ~326 MB.
Now let's say we want to increase the speed of our queries, and therefore we set our Text column as the PrimaryKey (or create an index on it);
if I save the file at this point, it'll be ~700 MB.
our query:
SELECT Text FROM "TableA" where Text like '% home %'
for the table without index: ~5.545s
for the indexed table: ~2.231s
As far as I know, when we create an index on a column or set a column to be our PrimaryKey, the sqlite engine doesn't need to refer to the table itself (if no other column is requested in the query); it uses the index for the query, and hence the speed of query execution increases.
My question is: in the scenario above, where we have just one column and set that column to be the PrimaryKey too, why does sqlite hold what seems like unnecessary data (in this case ~326 MB)? Why not just keep the index/PrimaryKey data?
In SQLite, table rows are stored in the order of the internal rowid column.
Therefore, indexes must be stored separately.
In SQLite 3.8.2 or later, you can create a WITHOUT ROWID table which is stored in order of its primary key values.
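For the one-column table in the question, the WITHOUT ROWID form would look like the sketch below (table and column names are the ones from the question; WITHOUT ROWID requires an explicit PRIMARY KEY):
CREATE TABLE TableA (
  Text TEXT PRIMARY KEY
) WITHOUT ROWID;  -- the table itself is the primary-key b-tree, so no separate rowid table plus index copy is stored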

Query fetching results

On a Drupal website, exporting data with the SQL query below is very slow (it takes a long time). How can I solve this issue?
The query is as follows:
SELECT DISTINCT a.*, c.nid, b.uac_inst_campus_cricos
FROM uac_export_coursetable_latest AS a
LEFT JOIN uac_institutiondata AS c
ON c.uac_institutiondata_institution = a.uac_course_institution
LEFT JOIN uac_inst_campus_latest AS b
ON b.nid = c.nid AND b.uac_inst_furtherinfobox_heading = a.campusname
WHERE a.uac_course_institution = '6628'
AND intyear12 = 'Yes'
ORDER BY uaccoursecode
Because we don't know the exact schema of your custom tables, we can't give you an exact solution, but in general when query execution is slow, you need to verify the columns you are using for the JOINs and within the WHERE clause.
Be sure that you are joining on foreign key columns
Be sure that indexes are set on the columns used within conditions
In your case, I would add indexes on the following columns (sketched below): uac_institutiondata_institution (uac_institutiondata table), intyear12 (uac_export_coursetable_latest table), nid (uac_inst_campus_latest table).
If the uac_course_institution column in the uac_export_coursetable_latest table is not a primary key, also add an index on that column.
More info about indexes in a MySQL database: http://dev.mysql.com/doc/refman/5.0/en/create-index.html
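A hedged sketch of what those indexes could look like (the index names are illustrative, and the composite column choices are an assumption based on the JOIN and WHERE columns in the query above):
CREATE INDEX idx_instdata_institution ON uac_institutiondata (uac_institutiondata_institution);
CREATE INDEX idx_course_inst_year ON uac_export_coursetable_latest (uac_course_institution, intyear12);
CREATE INDEX idx_campus_nid_heading ON uac_inst_campus_latest (nid, uac_inst_furtherinfobox_heading);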
