I want to create a temp table in my trigger to store some of the data. I did some research online, and most people suggest creating it with one of the following statements:
create table #tempTableName (x datatype, y datatype, ...);
or
select * into #tempTableName from ...;
However, when I tried this in Oracle it did not work; the error seems to suggest that I cannot name a table starting with "#". What should I do in this situation? Also, what is the difference between a PL/SQL table and a temp table? Thanks.
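For what it's worth, the # syntax above is SQL Server's; Oracle's nearest equivalent is a global temporary table, which is created once and then shared by sessions. A rough sketch (table and column names are just illustrative):
CREATE GLOBAL TEMPORARY TABLE my_temp_data (
    x NUMBER,
    y VARCHAR2(100)
) ON COMMIT DELETE ROWS;  -- rows vanish at commit; use ON COMMIT PRESERVE ROWS to keep them for the session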
Related
I have created a table 'Temp1' with fields "id, pName, pid", etc.
I want to rename this table to 'Temp2' and rename the fields to "no, name, rollno" without any data loss.
I also want to add one extra column, compName, to the newly created table Temp2.
Can anyone help me with how I can achieve this?
Please help me.
Thanks in advance.
CREATE TABLE temp2 (no, name, rollno, compName);
INSERT INTO temp2 (no, name, rollno) SELECT id, pName, pid FROM temp1;
I assumed no datatypes or constraints on the columns, so adjust the definitions if you need any; the copied rows will simply have NULL in the new compName column.
Now you can verify that you have all the data in new table. Then, if you no longer need Temp1, you can drop it:
DROP TABLE temp1;
and if you want to shrink database (remove unused parts of the database file):
VACUUM;
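Alternatively, assuming this is SQLite 3.25 or later (an assumption based on the single database file and the VACUUM step above), you can rename the table and its columns in place instead of copying:
ALTER TABLE temp1 RENAME TO temp2;
ALTER TABLE temp2 RENAME COLUMN id TO "no";   -- "no" is quoted because it is also an SQL keyword
ALTER TABLE temp2 RENAME COLUMN pName TO name;
ALTER TABLE temp2 RENAME COLUMN pid TO rollno;
ALTER TABLE temp2 ADD COLUMN compName TEXT;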
I'd like to use Flyway for a DB update in a situation where the DB already exists with production data in it. The problem I'm looking at now (and I have not found a nice solution yet) is the following:
There is an existing DB table with numeric IDs, e.g.
create table objects ( obj_id number, ...)
There is a sequence "obj_seq" to allocate new obj_ids
During my DB migration I need to introduce a few new objects, hence I need new object IDs. However, I do not know at development time what ID numbers these will be.
There is a DB trigger which later references these IDs. To improve performance, I'd like to avoid determining the actual IDs every time the trigger runs and would rather put the IDs directly into the trigger.
Example (very simplified) of what I have in mind:
insert into objects (obj_id, ...) values (obj_seq.nextval, ...)
select obj_seq.currval from dual
-> store this in variable "newID"
create trigger on some_other_table
when new.id = newID
...
Now, is it possible to dynamically determine/use such variables? I have seen the flyway placeholders but my understanding is that I cannot set them dynamically as in the example above.
I could use a Java-based migration script and do whatever string magic I like - so, that would be a way of doing it, but maybe there is a more elegant way using SQL?
Many thanks!
tge
If the table you are updating contains only reference data, get rid of the sequence and assign the IDs manually.
If it contains a mix of reference and user data, you need to select the id based on values in other columns.
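A rough Oracle sketch of that second option, resolving the ID from a stable natural key rather than baking a number into the migration (the obj_name column and the other names here are assumptions for illustration):
-- in the migration: insert the new object with a stable, known name
insert into objects (obj_id, obj_name) values (obj_seq.nextval, 'MY_NEW_OBJECT');
-- in the trigger: look the ID up by that name instead of hard-coding a number
create or replace trigger trg_some_other_table
before insert on some_other_table
for each row
declare
  v_obj_id objects.obj_id%type;
begin
  select obj_id into v_obj_id from objects where obj_name = 'MY_NEW_OBJECT';
  if :new.id = v_obj_id then
    null; -- the actual trigger logic would go here
  end if;
end;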
I have a table in an MS Access 2010 database and it can easily be split up into multiple tables. However, I don't know how to do that and still keep all the data linked together. Does anyone know an easy way to do this?
I ended up just writing a bunch of Update and Append queries to create smaller tables and keep all the data synced.
You would have to migrate to another database system, like MS SQL or MySQL; you can't do replication in MS Access...
I'm not sure what you mean by splitting it up into multiple tables.
Do the two tables have the same structure? Do you want to divide the table into two parts, i.e. if the original table has fields A, B, C, D, then split it into Table1: A, B and Table2: C, D?
Anyway, I googled it a bit and the links below might be what you are looking for. Check them out:
Split a table into related tables (MDB)
How hard is it to split a table in Access into two smaller tables?
Where do you run into trouble with the table analyzer wizard? Maybe you can work around the issue you are running into.
However, if the table analyzer wizard isn't working out, you might also consider the tactics described in http://office.microsoft.com/en-us/access-help/resolve-and-help-prevent-duplicate-data-HA010341696.aspx.
Under Microsoft Access 2012: Database Tools, Analyze Table. I use the wizard to split a large table into multiple normalized tables. Hope that helps.
Hmmm, can't you just make a copy of the table, then delete the opposite items in each table, leaving the data the way you want? Just make sure that both tables keep the exact same AutoNumber field, and use that field to reference the other.
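A rough sketch of that idea in Access SQL (table and field names are purely illustrative): copy the columns you want to move into a new table, then drop them from the original.
SELECT ID, FieldC, FieldD INTO Table2 FROM Table1;
ALTER TABLE Table1 DROP COLUMN FieldC;
ALTER TABLE Table1 DROP COLUMN FieldD;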
It may not be the most efficient way to do it, but I solved a similar issue in the following way:
a) Procedure that creates a new table via SQL:
CREATE TABLE t002 (ID002 INTEGER PRIMARY KEY, CONSTRAINT SomeName FOREIGN KEY (ID002) REFERENCES t001(ID001));
The two tables are related to each other through the foreign key.
b) Procedure that adds the necessary fields to the new table (t002). In the following sample code let's use just one field, and let's call it [MyFieldName].
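A possible statement for this step (TEXT(255) is only an assumption; use whatever type the field really needs):
ALTER TABLE t002 ADD COLUMN MyFieldName TEXT(255);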
c) Procedure to append all values of field ID001 from Table t001 to field ID002 in Table t002, via SQL:
INSERT INTO t002 (ID002) SELECT t001.ID001 FROM t001;
d) Procedure to transfer values from fields in t001 to fields in t002, via SQL:
UPDATE t001 INNER JOIN t002 ON t001.ID001 = t002.ID002 SET t002.MyFieldName = t001.MyFieldName;
e) Procedure to remove (drop) the fields in question in Table t001, via SQL:
ALTER TABLE t001 DROP COLUMN MyFieldName;
f) Procedure that calls them all one after the other. Field names are fed into the process as parameters in the call to procedure f).
It is quite a bunch of coding, but it did the job for me.
Just some background; sorry it's so long-winded.
I'm using the System.Data.SQLite ADO.NET adapter to create a local sqlite database, and this will be the only process hitting the database, so I don't need to worry about concurrency.
I'm building the database from various sources and don't want to build this all in memory using datasets or dataadapters or anything like that. I want to do this using SQL (DbCommands). I'm not very good with SQL and a complete noob in sqlite. I'm basically using sqlite as a local database / save file structure.
The database has a lot of related tables and the data has nothing to do with People or Regions or Districts, but to use a simple analogy, imagine:
Region table with auto increment RegionID, RegionName column and various optional columns.
District table with auto increment DistrictID, DistrictName, RegionID, and various optional columns.
Person table with auto increment PersonID, PersonName, DistrictID, and various optional columns.
So I get some data representing RegionName, DistrictName,PersonName, and other Person related data. The Region, District and/or Person may or may not be created at this point.
Once again, not being the greatest with this, my thoughts would be something like:
Check to see if Region exists and if so get the RegionID
else create it and get RegionID
Check to see if District exists and if so get the DistrictID
else create it adding in RegionID from above and get DistrictID
Check to see if Person exists and if so get the PersonID
else create it adding in DistrictID from above and get PersonID
Update Person with rest of data.
In MS SQL Server I would create a stored procedure to handle all this.
The only way I can see to do this with sqlite is with a lot of separate commands, so I'm sure I'm not getting something. I've spent hours looking around on various sites but just don't feel like I'm going down the right road. Any suggestions would be greatly appreciated.
Use last_insert_rowid() in conjunction with INSERT OR REPLACE. Something like:
INSERT OR REPLACE INTO Region (RegionName)
VALUES (:Region);
INSERT OR REPLACE INTO District (DistrictName, RegionID)
VALUES (:District, last_insert_rowid());
INSERT OR REPLACE INTO Person (PersonName, DistrictID)
VALUES (:Person, last_insert_rowid());
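One caveat: INSERT OR REPLACE only replaces an existing row when the new row would violate a uniqueness constraint, so this approach assumes the name columns are declared UNIQUE, along these lines (an illustrative sketch):
CREATE TABLE Region (
    RegionID   INTEGER PRIMARY KEY AUTOINCREMENT,
    RegionName TEXT NOT NULL UNIQUE   -- without UNIQUE, OR REPLACE would just insert duplicates
);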
Here's the situation. Due to the design of the database I have to work with, I need to write a stored procedure in such a way that I can pass in the name of the table to be queried against, if at all possible. The program in question does its processing by jobs, and each job gets its own table created in the database, i.e. table-jobid1, table-jobid2, table-jobid3, etc. Unfortunately, there's nothing I can do about this design - I'm stuck with it.
However, now, I need to do data mining against these individualized tables. I'd like to avoid doing the SQL in the code files at all costs if possible. Ideally, I'd like to have a stored procedure similar to:
SELECT *
FROM @TableName AS tbl
WHERE @Filter
Is this even possible in SQL Server 2005? Any help or suggestions would be greatly appreciated. Alternate ways to keep the SQL out of the code behind would be welcome too, if this isn't possible.
Thanks for your time.
The best solution I can think of is to build your SQL inside the stored proc, such as:
DECLARE @query nvarchar(max);
SET @query = 'SELECT * FROM ' + @TableName + ' AS tbl WHERE ' + @Filter;
EXEC(@query);
It's probably not an ideal solution, but it works.
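A slightly safer variant of the same idea (a sketch; the procedure and parameter names are made up): quote the table name with QUOTENAME and run the string through sp_executesql.
CREATE PROCEDURE dbo.SelectFromJobTable
    @TableName sysname,
    @Filter nvarchar(max)
AS
BEGIN
    DECLARE @query nvarchar(max);
    -- QUOTENAME guards the table name; the filter is still concatenated,
    -- so it must come from trusted code, not user input
    SET @query = N'SELECT * FROM ' + QUOTENAME(@TableName) + N' AS tbl WHERE ' + @Filter;
    EXEC sp_executesql @query;
END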
The best answer I can think of is to build a view that unions all the tables together, with an id column in the view telling you where the data in the view came from. Then you can simply pass that id into a stored proc which will go against the view. This is assuming that the tables you are looking at all have identical schema.
example:
create view test1 as
select *, 'tbl1' as src
from [job-1]
union all
select *, 'tbl2' as src
from [job-2]
union all
select *, 'tbl3' as src
from [job-3]
Now you can SELECT * FROM test1 WHERE src = 'tbl3' and you will only get records from the table [job-3].
This would be a meaningless stored proc. Select from some table using some parameters? You are basically defining the entire query again in whatever you are using to call this proc, so you may as well generate the SQL yourself.
The only reason I would write a dynamic SQL-building proc is if you want to do something that you can change without redeploying your codebase.
But, in this case, you are just SELECT *'ing. You can't define the columns, WHERE clause, or ORDER BY differently since you are trying to use it for multiple tables, so there is no meaningful change you could make to it.
In short: it's not even worth doing. Just slop down your table-specific sprocs or write your SQL in strings (but make sure it's parameterized) in your code.