Is there a way to query the creation date of a table in SQLite?
I am new to SQL overall, and I just found this table creation date query for SQL Server.
I am assuming that sqlite_master is the equivalent to sys.tables in SQLite. Is that correct?
But then my sqlite_master table only has the columns "type", "name", "tbl_name", "rootpage" and "sql".
If this is not possible in SQLite, what would be the best way to implement this functionality by myself?
SQLite does not store this data itself. As you said, the sqlite_master table doesn't have any relevant column.
There's no particularly nice way to implement it that I can think of. You could create some sort of interface for creating tables and have it note the time whenever a new table is created, but anything created through a different method won't go through the same process. It also looks like there's no way to set a trigger on CREATE TABLE, so that's not an option either.
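If you do control table creation, a minimal sketch of that bookkeeping approach could look like this (the log table and the widgets example are purely illustrative):
create table if not exists table_creation_log (
  table_name text primary key,
  created_at text default (datetime('now'))
);
-- run alongside every CREATE TABLE your interface issues:
create table widgets (id integer primary key, name text);
insert into table_creation_log (table_name) values ('widgets');
Querying table_creation_log then gives you creation dates, with the caveat above: tables created outside the interface won't show up.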
Why do you want this functionality? Creating tables seems like something you wouldn't be doing very often, maybe there's a better way to approach the problem?
My background is in data science with R, but in my current position I'm pulling data through Rails and ActiveRecord. I want to perform transformations on my data, create new columns, and save the result in a temporary way that lets me keep querying it like a regular table, without actually making changes to the database.
In R, this might look something like:
new_table <- old_table[old_table$date >= '2020-01-01', ]
new_table$average <- mean(new_table$value)
I would take this new_table and run any number of queries I could have run against the old_table, and once I close my app I expect this temporary table to be removed as well.
This particular transformation is simple and wouldn't require a new table, but, for example, there are a number of tables I'd like to join with my new_table. It would be easier to perform my transformations once and then join against the result, rather than joining the old_table and performing the transformation each time.
Since your question is vague, I'll give a general answer that might not fit your use; it's a best guess at this point. There are numerous ways to use the DB connection in Rails to query directly, as referenced in the link in my comments above. But as an experiment I wanted to see if this would work, and it does, at least with a project that is using Postgres. I wanted it to be DB agnostic, so I'm avoiding calling the DB connection directly.
First create a temporary class in the Rails console:
rails c
Loading development environment (Rails...
class MyTempTable < ActiveRecord::Base
end
=> nil
EDIT:
In addition to the method below, you can also do this to create the table:
MyTempTable.find_by_sql('create temp table my_temp_tables AS select...')
This will create the temp table directly from a query. You could then use a join statement if you wanted data from more than one table in the new temp table, and you can add any additional columns you want.
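For illustration, assuming hypothetical orders and customers tables, the SQL inside that call might be something like:
create temp table my_temp_tables as
select o.id, o.total, c.name as customer_name
from orders o
join customers c on c.id = o.customer_id
where o.purchased >= '2020-01-01';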
End Edit
Now you have a class that will act like a table with the usual ActiveRecord methods. Rails now assumes there is a table in the DB called my_temp_tables (must be plural). You can then create a temp table (if your DBMS supports temp tables) like this:
MyTempTable.find_by_sql('create temp table my_temp_tables(col1, col2... ')
Now you have a temp table with the columns you want. You can then do SQL operations using
MyTempTable.find_by_sql('INSERT INTO my_temp_tables SELECT * FROM ....')
You can then treat MyTempTable like any other model in Rails. If you want all the columns from one table joined with some columns from another table, you can create the temp table as above; you just have to create all the columns first (at least in Postgres; in MSSQL you can probably create the temp table by inserting directly from a select/join statement). If you are new to Rails, you can grab column names from existing tables like this:
some_columns = SomeTable.column_names
=> ["id", "name", "serial", "purchased", ...]
Now you have an array of the column names, so you don't have to type all of them. You can list out the columns you want from the various tables, cut and paste them into the create temp table... statement, then INSERT the joined data into MyTempTable.
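Put together, the two-step flow might look like this (all table and column names here are illustrative, not from the question):
create temp table my_temp_tables (id integer, name text, serial text, region text);
insert into my_temp_tables
select s.id, s.name, s.serial, r.region
from some_tables s
join regions r on r.id = s.region_id;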
If you do much of this regularly, you'll probably want to keep a listing of all your column names in a text file. You can also create Rake tasks that do all of this and save the data to some format, or send it off to wherever it is supposed to go. That way you can have it all in a file that you can just run: it will create the temp tables and do the work, and when it closes out, the temporary classes and tables go away.
You might want to investigate some Ruby Gems, there are probably existing gems that do some of what you want. But as a proof of concept this works. You could also spin up a local Rails app and use scripting to import the data you want into tables, then just flush and recreate it at will.
Any Rails gurus that know of a better way, please add an answer or edit this one. This is mostly a thought experiment for me since I wanted to see if it was possible.
If you want to create views that you can access later on you could use a gem like https://github.com/scenic-views/scenic
Or something like this might be of interest: https://github.com/igorkasyanchuk/rails_db
Sounds like you're keen on the benefits of having some structure and tools available to work on the data, but don't want the data persisted in a db table.
Maybe use a model without a table like this.
I have a Customers table that contains a salesRepEmployeeNumber column referencing the Employees table.
How do I do something like
SELECT *
FROM Customers
JOIN Employees
ON Customers.salesRepEmployeeNumber = Employees.employeeNumber
with the icCube ETL?
As pointed out in another answer, you can add a table based on an SQL statement that would do the job. In case your original data source is not able to do a join:
We don't have a join transformation yet; it's been added to our todo list. In the meantime, what you can do is:
Create a Union Table with your two tables. This will create a new table with the columns of both tables. Put the small one first, as we're going to cache it later on.
Create a JavaScript view; you might need to activate JavaScript in your icCube.xml configuration. In this view you can cache the first table and use a bit of JS to do the join. You can trigger the table change on a field being empty. Don't forget to set 'Table Row Ordering' to Keep Table Order.
Hope it helps.
No need to use the ETL.
With the designer, add a table with the + sign in the menu above DataSource. The next panel gives you the choice between reading data from an existing table or an SQL query.
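For the join in this question, the SQL query option could use the statement exactly as asked:
SELECT *
FROM Customers
JOIN Employees
ON Customers.salesRepEmployeeNumber = Employees.employeeNumber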
I am using Oracle Data Pump to do a schema "rename." There is a primary key column on all (2000) tables. For example, I need to run this on all tables:
update mytable set mykey='foo2' where mykey='foo';
I would use the remap_data option of expdp to do this. The problem is that on some tables I would need to do the rename on 10+ columns. Has anyone had a problem like this and found a way to handle it?
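For reference, remap_data points a column at a packaged function, so what I have in mind is something like this (the package and function names are mine, purely illustrative):
create or replace package remap_pkg as
  function remap_key (p_value varchar2) return varchar2;
end remap_pkg;
/
create or replace package body remap_pkg as
  function remap_key (p_value varchar2) return varchar2 is
  begin
    return replace(p_value, 'foo', 'foo2');
  end;
end remap_pkg;
/
-- one REMAP_DATA entry per affected column:
-- expdp ... REMAP_DATA=myschema.mytable.mykey:myschema.remap_pkg.remap_key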
Previously, I had tried using "Create Table As." The problem would be having to recreate the schema structure for all of the tables (views/triggers/grants/indexes/constraints). I am aware of the DBMS_METADATA.GET_DDL package. Offhand, doing a diff of the database schema before and after and recreating the diffs seems ugly.
I have also tried doing inserts on the table without any constraints or indexes, so I would only have to re-enable constraints and recreate the indexes, but I would like to try something faster.
I am using Oracle 11.2.0.3.0.
If I understand correctly, your real problem (or goal) is to 'RENAME' a schema.
You chose to export / import (using a different NAME to achieve the RENAME) using Oracle Data Pump.
Then DROP the old schema (if you feel it's redundant).
If this is correct, here are the steps you can take to achieve your goal. I did it successfully on my DEV env. All objects (including PKs and FKs) were imported successfully.
-- Export RMCORE_QA
expdp DIRECTORY=DMPDIR DUMPFILE=RMCORE_QA.dmp SCHEMAS='RMCORE_QA' LOGFILE=RMCORE_QA_EXP_DP.lst
-- Import using RMCORE_QA3
impdp DIRECTORY=DMPDIR DUMPFILE=RMCORE_QA.dmp REMAP_SCHEMA='RMCORE_QA:RMCORE_QA3' SCHEMAS='RMCORE_QA' LOGFILE=RMCORE_QA_IMP_DP.lst TRANSFORM=OID:N
You can also compare objects between the schemas with:
SELECT object_name, status, object_type FROM dba_objects WHERE owner = 'RMCORE_QA'
MINUS
SELECT object_name, status, object_type FROM dba_objects WHERE owner = 'RMCORE_QA3';
HTH. Let me know if I did not get your problem...
Are there any commands that make life easy with respect to this? I want to take the column schema of one .NET DataTable and copy it to another, new DataTable.
I've seen something like:
SELECT * INTO [DestinationTable] FROM [SourceTable] WHERE (1=2);
Used in SQL Server.
I think this assumes that DestinationTable doesn't exist. It then creates the table and copies the schema from SourceTable; the WHERE clause prevents any actual data from being copied.
I'm not really a database developer, so there's probably a much better way to do this.
I've found the answer. It is the DataTable.Clone method.
Here's the situation. Due to the design of the database I have to work with, I need to write a stored procedure in such a way that I can pass in the name of the table to be queried against, if at all possible. The program in question does its processing by jobs, and each job gets its own table created in the database, i.e. table-jobid1, table-jobid2, table-jobid3, etc. Unfortunately, there's nothing I can do about this design - I'm stuck with it.
However, now I need to do data mining against these individualized tables. I'd like to avoid doing the SQL in the code files at all costs if possible. Ideally, I'd like to have a stored procedure similar to:
SELECT *
FROM @TableName AS tbl
WHERE @Filter
Is this even possible in SQL Server 2005? Any help or suggestions would be greatly appreciated. Alternate ways to keep the SQL out of the code behind would be welcome too, if this isn't possible.
Thanks for your time.
The best solution I can think of is to build your SQL in the stored proc, such as:
DECLARE @query nvarchar(max);
SET @query = 'SELECT * FROM ' + @TableName + ' AS tbl WHERE ' + @Filter;
EXEC(@query);
Not an ideal solution, probably, but it works.
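If you go this route, a slightly safer sketch (the proc and parameter names are illustrative; @Filter is still trusted input, so build it only from known values):
CREATE PROCEDURE dbo.SelectFromJobTable
    @TableName sysname,
    @Filter nvarchar(max)
AS
BEGIN
    DECLARE @query nvarchar(max);
    -- QUOTENAME brackets the table name so it stays a single identifier
    SET @query = N'SELECT * FROM ' + QUOTENAME(@TableName) + N' AS tbl WHERE ' + @Filter;
    EXEC sp_executesql @query;
END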
The best answer I can think of is to build a view that unions all the tables together, with an id column in the view telling you where the data in the view came from. Then you can simply pass that id into a stored proc which will go against the view. This is assuming that the tables you are looking at all have identical schema.
example:
create view test1 as
select *, 'tbl1' as src
from [job-1]
union all
select *, 'tbl2' as src
from [job-2]
union all
select *, 'tbl3' as src
from [job-3]
Now you can select * from test1 where src = 'tbl3', and you will only get records from the table job-3.
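The stored proc over the view is then trivial (the name is illustrative):
create procedure dbo.GetJobData @src varchar(10)
as
select * from test1 where src = @src;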
This would be a meaningless stored proc. Select from some table using some parameters? You are basically defining the entire query again in whatever you are using to call this proc, so you may as well generate the SQL yourself.
The only reason I would write a dynamic-SQL-building proc is if you want to do something that you can change without redeploying your codebase.
But in this case, you are just SELECT *'ing. You can't define the columns, WHERE clause, or ORDER BY differently, since you are trying to use it for multiple tables, so there is no meaningful change you could make to it.
In short: it's not even worth doing. Just write your table-specific sprocs, or build your SQL in strings in your code (but make sure it's parameterized).