I'm working on a wiki program and using SQLite as the database. I want to create a many-to-many relationship between wiki pages and the tags describing those pages, and I'm using clojure.java.jdbc to handle the database operations. I would like to enforce foreign key constraints in the page-to-tags cross-reference table. I looked at the information about foreign keys on the SQLite site (https://www.sqlite.org/foreignkeys.html) and believe something like this is what I want:
(require '[clojure.java.jdbc :as jdbc])

(def the-db-name "the.db")

(def the-db {:classname   "org.sqlite.JDBC"
             :subprotocol "sqlite"
             :subname     the-db-name})

(defn create-some-tables
  "Create some tables and a cross-reference table with foreign key constraints."
  []
  (try
    (jdbc/db-do-commands
      the-db false
      ["PRAGMA foreign_keys = ON;"
       (jdbc/create-table-ddl :pages
                              [[:page_id :integer :primary :key]
                               ;...
                               [:page_content :text]])
       (jdbc/create-table-ddl :tags
                              [[:tag_id :integer :primary :key]
                               [:tag_name :text "NOT NULL"]])
       (jdbc/create-table-ddl :tags_x_pages
                              [[:x_ref_id :integer :primary :key]
                               [:tag_id :integer]
                               [:page_id :integer]
                               ["FOREIGN KEY(tag_id) REFERENCES tags(tag_id)"]
                               ["FOREIGN KEY(page_id) REFERENCES pages(page_id)"]])])
    (catch Exception e (println e))))
But attempting to turn the pragma on has no effect.
Just trying to turn the pragma on and check for effect:
(println "Check before:" (jdbc/query the-db ["PRAGMA foreign_keys;"]))
; Transactions on or off makes no difference.
(println "Result of execute!:" (jdbc/execute! the-db
["PRAGMA foreign_keys = ON;"]))
(println "Check after:" (jdbc/query the-db ["PRAGMA foreign_keys;"]))
;=> Check before: ({:foreign_keys 0})
;=> Result of execute!: [0]
;=> Check after: ({:foreign_keys 0})
The absence of errors indicates that the library (org.xerial/sqlite-jdbc "3.21.0.1") was compiled with foreign key support, but trying to set the pragma has no effect. (Presumably this is because the pragma is per-connection, and clojure.java.jdbc opens a fresh connection for each operation when handed a plain db-spec map.)
I found a relevant issue in the JIRA for clojure.java.jdbc from back in 2012. The described changes have been implemented since then, but the code above still has no effect.
I finally found a Stack Overflow answer that pointed to a post from 2011, which let me cobble together something that did seem to set the pragma. The code below depends on creating a specially configured Connection.
(ns example
  (:require [clojure.java.jdbc :as jdbc])
  (:import (java.sql Connection DriverManager)
           (org.sqlite SQLiteConfig)))

(def the-db-name "the.db")

(def the-db {:classname   "org.sqlite.JDBC"
             :subprotocol "sqlite"
             :subname     the-db-name})

(defn ^Connection get-connection
  "Return a connection to a SQLite database that
  enforces foreign key constraints."
  [db]
  (Class/forName (:classname db))
  (let [config (SQLiteConfig.)]
    (.enforceForeignKeys config true)
    (DriverManager/getConnection
      (str "jdbc:sqlite:" (:subname db))
      (.toProperties config))))
(defn exec-foreign-keys-pragma-statement
  [db]
  ;; Read the pragma back over the configured connection to confirm it stuck.
  (with-open [conn      (get-connection db)
              statement (.createStatement conn)]
    (let [result-set (.executeQuery statement "PRAGMA foreign_keys;")]
      (when (.next result-set)
        (println "exec-foreign-keys-pragma-statement:"
                 (.getInt result-set 1))))))
Based on the above, I rewrote the table-creation code as:
(defn create-some-tables
  "Create some tables and a cross-reference table with foreign key constraints."
  []
  (when-let [conn (get-connection the-db)]
    (try
      ;; NOTE: with-db-connection opens its own connection from the-db,
      ;; shadowing the specially configured conn bound above.
      (jdbc/with-db-connection [conn the-db]
        ;; Creating the tables with the foreign key constraints works.
        (try
          (jdbc/db-do-commands
            the-db false
            [(jdbc/create-table-ddl :pages
                                    [[:page_id :integer :primary :key]
                                     [:page_content :text]])
             (jdbc/create-table-ddl :tags
                                    [[:tag_id :integer :primary :key]
                                     [:tag_name :text "NOT NULL"]])
             (jdbc/create-table-ddl :tags_x_pages
                                    [[:x_ref_id :integer :primary :key]
                                     [:tag_id :integer]
                                     [:page_id :integer]
                                     ["FOREIGN KEY(tag_id) REFERENCES tags(tag_id)"]
                                     ["FOREIGN KEY(page_id) REFERENCES pages(page_id)"]])])
          ;; This still doesn't work: the query runs on yet another connection.
          (println "After table creation:"
                   (jdbc/query the-db "PRAGMA foreign_keys;"))
          (catch Exception e (println e))))
      ;; This returns the expected results.
      (when-let [statement (.createStatement conn)]
        (try
          (println "After creating some tables: PRAGMA foreign_keys =>"
                   (.execute statement "PRAGMA foreign_keys;"))
          (catch Exception e (println e))
          (finally (when statement
                     (.close statement)))))
      (catch Exception e (println e))
      (finally (when conn
                 (.close conn))))))
The tables are created as expected, but some of the clojure.java.jdbc functions still don't work as desired (see the jdbc/query call in the middle of the listing). Getting things to always work as expected seems very "manual", falling back on Java interop, and it seems like every interaction with the database requires using the specially configured Connection returned by the get-connection function.
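One partial workaround I found (a sketch, relying on clojure.java.jdbc's documented support for a db-spec map with a :connection key): if the spec carries the already-open Connection, the library reuses it instead of opening a new one, so the pragma survives across calls on that spec:

(let [conn (get-connection the-db)
      db   {:connection conn}]
  (try
    (println (jdbc/query db ["PRAGMA foreign_keys;"]))
    ;=> ({:foreign_keys 1})
    (finally (.close conn))))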
Is there a better way to enforce foreign key constraints in SQLite in Clojure?
With the advent of next.jdbc you can now do that like so:
(ns dev
  (:require [next.jdbc :as jdbc]
            [next.jdbc.sql :as sql]))

(with-open [conn (jdbc/get-connection {:dbtype "sqlite" :dbname "test.db"})]
  (println (sql/query conn ["PRAGMA foreign_keys"]))
  (jdbc/execute! conn ["PRAGMA foreign_keys = ON"])
  ;; jdbc/execute! whatever you like here...
  (println (sql/query conn ["PRAGMA foreign_keys"])))
This outputs
[{:foreign_keys 0}]
[{:foreign_keys 1}]
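Because the pragma is per-connection, the key is to do all the work inside that single with-open. A minimal sketch (my illustration, reusing a cut-down version of the schema above) showing the constraint actually firing on that connection:

(with-open [conn (jdbc/get-connection {:dbtype "sqlite" :dbname "test.db"})]
  (jdbc/execute! conn ["PRAGMA foreign_keys = ON"])
  (jdbc/execute! conn ["CREATE TABLE IF NOT EXISTS pages (
                          page_id integer primary key,
                          page_content text)"])
  (jdbc/execute! conn ["CREATE TABLE IF NOT EXISTS tags_x_pages (
                          x_ref_id integer primary key,
                          page_id integer,
                          FOREIGN KEY(page_id) REFERENCES pages(page_id))"])
  ;; page_id 999 does not exist, so with the pragma ON this insert
  ;; should throw a foreign-key constraint exception on this connection.
  (try
    (jdbc/execute! conn ["INSERT INTO tags_x_pages (page_id) VALUES (999)"])
    (catch Exception e
      (println (.getMessage e)))))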
I've not played with SQLite, but would recommend you test with either:
H2: Pure java, can run in memory for tests (http://www.h2database.com)
Postgres: Needs install, but is the gold standard for SQL compliance (https://www.postgresql.org)
Also, when debugging it may be easier to use pure SQL strings (see http://clojure-doc.org/articles/ecosystem/java_jdbc/using_sql.html):
(j/execute! db-spec
            ["update fruit set cost = ( 2 * grade ) where grade > ?" 50.0])
Using pure SQL strings (especially when debugging) can avoid many misunderstandings/pitfalls with JDBC. Also, keep in mind that you may discover a bug in either the Clojure JDBC libs or the DB itself.
I'm not sure SQLite supports the features you described above well. If you really want to keep your data consistent under strict constraints, use a PostgreSQL database. I know working with SQLite seems easier, especially when you've just started the project, but believe me, using Postgres is really worth it.
Here is an example of a post-and-tags declaration using Postgres that takes a lot of details into account:
create table post(
    id serial primary key,
    title text not null,
    body text not null
);

create table tags(
    id serial primary key,
    text text not null unique
);

create table post_tags(
    id serial primary key,
    post_id integer not null references post(id),
    tag_id integer not null references tags(id),
    unique(post_id, tag_id)
);
Here, the tags table cannot contain two equal tags. Keeping tag strings unique prevents the table from growing without bound.
The bridge table that links a post with tags has a special constraint to prevent a specific tag from being linked to a post several times. Say a post has the "python" and "clojure" tags attached; you won't be able to add "python" one more time.
Finally, each references clause in a table declaration creates a constraint that prevents you from referencing an id that does not exist in the target table.
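A quick illustration with hypothetical data (ids assumed to start at 1, as serial columns do by default):

insert into post (title, body) values ('My post', 'Hello');
insert into tags (text) values ('python');

insert into post_tags (post_id, tag_id) values (1, 1);  -- ok
insert into post_tags (post_id, tag_id) values (1, 1);  -- fails: violates unique(post_id, tag_id)
insert into post_tags (post_id, tag_id) values (1, 99); -- fails: no tags row with id 99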
Installing Postgres and setting it up might be a bit difficult, but nowadays there are one-click applications like Postgres.app that are quite easy to use even if you are not familiar with them.
Related
I get the SQLite error message "FOREIGN KEY constraint failed". That's the complete error information (besides a part of the SQL query), and it's not helpful. (In fact, it's just as good (or bad) as Oracle error messages.) I need to know the name of the constraint to investigate the issue in my program, and unfortunately there's no web support platform where I can discuss this with an SQLite community. Does somebody know how to get more information about the error out of the SQLite library?
I'm specifically using the System.Data.SQLite library for .NET but the error message comes directly from the core and there are no additional exception properties that could help me.
Due to the way in which deferred FK constraints are implemented in SQLite, this information is not available when the error is raised.
You could reimplement the FK checks as triggers.
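For example, a sketch of the trigger approach for a single foreign key, using hypothetical child/parent tables; the RAISE message can name the exact column that failed:

-- Hypothetical schema: child(parent_id) references parent(id).
CREATE TRIGGER child_parent_id_fk
BEFORE INSERT ON child
FOR EACH ROW
WHEN NEW.parent_id IS NOT NULL
     AND (SELECT id FROM parent WHERE id = NEW.parent_id) IS NULL
BEGIN
    SELECT RAISE(ABORT, 'child.parent_id references a missing parent.id');
END;

Corresponding UPDATE triggers on the child table, and DELETE/UPDATE triggers on the parent, are needed to cover the remaining cases.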
Alternatively, log the values in the failed command, and look up the data by hand.
In a Django project, here is what I did to replace FOREIGN KEY constraint failed with (for example) DETAIL: Key (base_id)=(1a389dc3-5bc1-4132-8a4a-c8200533503a) is not present in table "backend_base" ...
The solution is Django-specific since it's based on Django ORM methods; obj is the Model instance that caused the exception upon model.save().
from django.core.exceptions import ObjectDoesNotExist

def explain_integrity_errors(obj):
    """
    Replace the 'FOREIGN KEY constraint failed' error message provided by
    sqlite with something useful (i.e. the exception thrown by PostgreSQL):
    'DETAIL: Key (base_id)=(1a389dc3-5bc1-4132-8a4a-c8200533503a)
     is not present in table "backend_base"'
    """
    error_msg = ''
    errors = []
    # Scan all FKs
    relation_fields = [f for f in obj._meta.concrete_fields if f.is_relation]
    for field in relation_fields:
        try:
            # Try to access the related target; a dangling FK raises here
            getattr(obj, field.name)
        except ObjectDoesNotExist:
            # Log the offending FK
            fk_field = field.name + '_id'
            fk_value = getattr(obj, fk_field)
            fk_table = field.related_model._meta.db_table
            errors.append('Key (%s)=(%s) is not present in table "%s"' % (
                fk_field,
                str(fk_value),
                fk_table
            ))
    if errors:
        error_msg = ' DETAIL: ' + "; ".join(errors)
    return error_msg
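A hypothetical usage sketch (my addition): catch the IntegrityError around save() and re-raise it with the extra detail appended:

from django.db import IntegrityError

try:
    obj.save()
except IntegrityError as e:
    # Append the reconstructed detail to the vague SQLite message
    raise IntegrityError(str(e) + explain_integrity_errors(obj)) from e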
I use Airflow Python operators to execute SQL queries against a Redshift/Postgres database. In order to debug, I'd like the DAG to return the results of the SQL execution, similar to what you would see when executing locally in a console.
I'm using psycopg2 to create a connection/cursor and execute the SQL. Having this logged would be extremely helpful to confirm the parsed parameterized SQL, and to confirm that data was actually inserted (I have painfully experienced issues where differences in environments caused unexpected behavior).
I do not have deep knowledge of Airflow or the low-level workings of the Python DB-API, but the psycopg2 documentation does seem to refer to some methods and connection configurations that may allow this.
I find it very perplexing that this is difficult to do, as I'd imagine it would be a primary use case of running ETLs on this platform. I've heard suggestions to simply create additional tasks that query the table before and after, but this seems clunky and ineffective.
Could anyone please explain how this may be possible, and if not, explain why? Alternate methods of achieving similar results welcome. Thanks!
So far I have tried the connection.status_message() method, but it only seems to return the first line of the SQL and not the results. I have also attempted to create a logging cursor, which produces the SQL but not the console results:
import logging
import sys

import psycopg2 as pg
from psycopg2.extras import LoggingConnection

conn = pg.connect(
    connection_factory=LoggingConnection,
    ...
)
conn.autocommit = True

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)
logger.addHandler(logging.StreamHandler(sys.stdout))
conn.initialize(logger)

cur = conn.cursor()
sql = """
INSERT INTO mytable (
    SELECT *
    FROM other_table
);
"""
cur.execute(sql)
I'd like the logger to return something like:
sql> INSERT INTO mytable (
SELECT ...
[2019-07-25 23:00:54] 912 rows affected in 4 s 442 ms
Let's assume you are writing an operator that uses a Postgres hook to do something in SQL.
Anything printed inside an operator is logged.
So, if you want to log the statement, just print the statement in your operator:
print(sql)
If you want to log the result, fetch the result and print it, e.g.:
result = cur.fetchall()
for row in result:
    print(row)
Alternatively you can use self.log.info in place of print, where self refers to the operator instance.
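Putting those pieces together, a minimal sketch of such an operator (my illustration; the class name and defaults are hypothetical, and it assumes Airflow's PostgresHook):

from airflow.hooks.postgres_hook import PostgresHook
from airflow.models import BaseOperator

class LoggedSqlOperator(BaseOperator):
    def __init__(self, sql, postgres_conn_id='postgres_default', **kwargs):
        super().__init__(**kwargs)
        self.sql = sql
        self.postgres_conn_id = postgres_conn_id

    def execute(self, context):
        hook = PostgresHook(postgres_conn_id=self.postgres_conn_id)
        conn = hook.get_conn()
        cur = conn.cursor()
        self.log.info("Executing: %s", self.sql)
        cur.execute(self.sql)
        # rowcount covers INSERT/UPDATE/DELETE; description is set
        # only when the statement returned rows
        self.log.info("Rows affected: %s", cur.rowcount)
        if cur.description is not None:
            for row in cur.fetchall():
                self.log.info("%s", row)
        conn.commit()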
OK, so after some trial and error I've found a method that works for my setup and objective. To recap, my goal is to run ETLs via Python scripts, orchestrated in Airflow. Referring to the documentation for statusmessage:
"Read-only attribute containing the message returned by the last command."
The key is to manage logging in context with the transactions executed on the server. For me this meant explicitly setting con.autocommit = False and wrapping SQL blocks with BEGIN TRANSACTION; and END TRANSACTION;. If you read cur.statusmessage directly following a statement that deletes or inserts, you will get a response such as 'INSERT 0 92380'.
This still isn't as verbose as I would prefer, but it is much better than nothing, and it is very useful for troubleshooting ETL issues within Airflow logs.
Side notes:
- When autocommit is set to False, you must explicitly commit transactions.
- It may not be necessary to state transaction begin/end in your SQL. It may depend on your DB version.
con = psy.connect(...)  # connection details elided
con.autocommit = False
cur = con.cursor()
try:
    cur.execute(some_sql)  # some_sql is your statement string
    logging.info(f"Cursor statusmessage: {cur.statusmessage}")
    con.commit()           # autocommit is off, so commit explicitly
except Exception:
    con.rollback()
finally:
    con.close()
There is some buried functionality within psycopg2 that I'm sure can be utilized, but the documentation is pretty thin and there are no clear examples. If anyone has suggestions on how to use things such as log objects, or joining on the backend PID to retrieve additional information, I'd welcome them.
I am trying to set the journal mode of SQLite3 in my Ruby on Rails project.
It seems that Ruby on Rails uses SQLite's default journal mode, i.e. 'delete': I saw a journal file in the 'db' folder while the database was being updated, and it was deleted when the update was done. I would like to set the journal mode to "WAL" or "memory".
I tried the SQLite command line:
PRAGMA main.journal_mode=WAL
but it does not affect the Rails application.
Finally I made it work by changing the source code of sqlite3_adapter.rb. I changed a function in the file activerecord-5.1.4/lib/active_record/connection_adapters/sqlite3_adapter.rb:
def configure_connection
  # original code here
  execute("PRAGMA journal_mode = WAL", "SCHEMA")
end
This works because configure_connection is called by initialize in SQLite3Adapter.
It does not sound like a nice solution, although it works. Is there a nicer way to set the journal mode of SQLite3 in Ruby on Rails (version 5.1.4), for example a configuration option?
It has been a while since I've had to do this, but you should be able to use an initializer so you don't need to patch the source. Putting something like this in config/initializers/configure_sqlite_journal.rb:
if c = ::ActiveRecord::Base.connection
  c.execute 'PRAGMA journal_mode = WAL'
end
should do what you want.
I found a better answer in "Speed up your Rails sqlite database for large datasets", which configures more performance options. The code (I put this into config/initializers/speedup_sqlite3.rb):
if ::ActiveRecord::Base.connection_config[:adapter] == 'sqlite3'
  if c = ::ActiveRecord::Base.connection
    # See http://www.sqlite.org/pragma.html for details.
    # Page size of the database; must be a power of two between 512 and 65536 inclusive.
    c.execute 'PRAGMA main.page_size=4096;'
    # Suggested maximum number of database disk pages that SQLite will hold
    # in memory at once, per open database file.
    c.execute 'PRAGMA main.cache_size=10000;'
    # Database connection locking mode: either NORMAL or EXCLUSIVE.
    c.execute 'PRAGMA main.locking_mode=EXCLUSIVE;'
    # The "synchronous" flag; NORMAL means sync less often but still more than none.
    c.execute 'PRAGMA main.synchronous=NORMAL;'
    # Journal mode for the database; WAL = write-ahead log.
    c.execute 'PRAGMA main.journal_mode=WAL;'
    # Storage location for temporary tables, indices, views, triggers.
    c.execute 'PRAGMA main.temp_store = MEMORY;'
  end
end
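To confirm the settings actually took effect, you can read a pragma back from the Rails console (a quick sanity check of my own; the exact result shape depends on the adapter version):

# In `rails console`:
ActiveRecord::Base.connection.execute('PRAGMA journal_mode')
# => something like [{"journal_mode"=>"wal"}]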
I am using jet as an asynchronous Ring adapter.
Jet also comes with an async HTTP client which returns a channel whose value's :body is also a channel.
Also, an async server route handler can return a map whose :body key contains a channel. When that channel is closed, the response is returned to the client.
I am writing the following go-block code:
;; Assuming jet's HTTP client namespace (qbits.jet.client.http) and core.async:
(require '[clojure.core.async :refer [go chan <! >! close!]]
         '[qbits.jet.client.http :as jet-client])

(defn- api-call-1 []
  (go (-> (jet-client/get "api-url-1")
          <!
          :body ;; jet http client :body is also a channel.
          <!
          api-call-1-response-parse)))

(defn- api-call-2 []
  (go (-> (jet-client/get "api-url-2")
          <!
          :body
          <!
          api-call-2-response-parse)))

(defn route-function []
  (let [response-chan (chan)]
    (go
      (let [api-call-1-chan (api-call-1) ;; using the channel returned by go
            api-call-2-chan (api-call-2)]
        ;; ->> (not ->) so the encoded map is the *value* passed to >!
        ;; and response-chan is the port.
        (->> {:api-1 (<! api-call-1-chan)
              :api-2 (<! api-call-2-chan)}
             encode-response
             (>! response-chan)))
      (close! response-chan))
    ;; To avoid blocking the server thread, return the channel in :body.
    {:body response-chan :status 200}))
In my route-function I cannot block.
Though this code works fine, is using go in api-call-1 bad?
I found that to use <! in api-call-1 I need to put it in a go block. Now I use this go block's channel in route-function. Does this look unidiomatic? I would rather not expose api-call-1-response-parse, or even :body, as a channel to the route-function.
What is the right way to structure go-block code and functions?
Should I care about the extra go blocks in the functions api-call-1/2?
What you have looks much like the equivalent code I have in production, and it is quite idiomatic, so I think your code is structured correctly.
The fact that core.async parking operations can't cross function boundaries stems from go being a macro that needs to process the whole chunk of code at once (or at least what's lexically available). This tends to make all core.async code come out in the pattern you are using: helper functions wrap their own go and return its channel, and callers park on that channel inside their own go block.
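A standalone sketch of the boundary rule (my illustration, not from the original code):

(require '[clojure.core.async :refer [go <! timeout]])

;; Parking ops must appear lexically inside a go body, so a helper
;; wraps its own go and returns that block's result channel.
(defn wait-and-return [ms v]
  (go (<! (timeout ms)) ; ok: <! is lexically inside this go
      v))

;; (defn broken [ms v] (<! (timeout ms))) ; throws at runtime: no surrounding go

(go (println "got" (<! (wait-and-return 100 :done))))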
I need to dump the complete schema (ddl only, no data) of an Oracle database to a text file or a set of text files in order to be able to systematically track revisions to the database schema using standard VCS tools like git.
Using my favorite RDBMS, postgresql, this is an almost trivially easy task, using pg_dump --schema-only.
However, dumping an Oracle DB schema to an SQL file has proved to be a maddeningly difficult task with Oracle 11g. I'm interested to know about approaches that others have figured out.
Data pump export (no ☹)
Unfortunately, I cannot use the data pump export tools introduced in Oracle 10g, because these require DBA-level access and I cannot easily obtain this level of access for most of my clients' databases.
SQL Developer
I've used Oracle's SQL Developer GUI and it mostly does what I want with the "Separate files" setting:
Emits a syntactically correct SQL file to create each database object
Emits a summary SQLs file which includes each of the individual-object files in the correct order
However there are several major issues with it:
It's a GUI only; no way to script this behavior from the command line as far as I can tell
Running as an unprivileged user, it can only emit the DDL for that user's owned objects (even when that user has been granted privileges to view other users' objects ... ##$(*&!!)
It's extremely slow, taking around 20 minutes to output about 1 MB of DDL
exp and imp
Oracle's older exp command-line tool can export the complete DDL for a database (with DBA access) or just the DDL for an individual user's owned objects (no DBA access required).
Unfortunately, it is even slower than SQL Developer (it takes more than an hour for the same database, even with a few performance tweaks).
However, the worst thing about exp is that it does not emit SQL, but rather a proprietary binary-format dump file (e.g. expdat.dmp).
The corresponding imp tool can "translate" these dump files into severely mangled SQL which does not contain syntactically correct end-of-statement delimiters.
Here is an example of the horribly mangled SQL that imp show=y emits; notice the crazy line wrapping and the lack of semicolons at the end of some, but not all, statements:
Export file created by EXPORT:V11.02.00 via direct path
import done in US7ASCII character set and AL16UTF16 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing FPSADMIN's objects into FPSADMIN
"BEGIN "
"sys.dbms_logrep_imp.instantiate_schema(schema_name=>SYS_CONTEXT('USERENV','"
"CURRENT_SCHEMA'), export_db_name=>'*******', inst_scn=>'371301226');"
"COMMIT; END;"
"CREATE TYPE "CLOBSTRINGAGGTYPE" TIMESTAMP '2015-06-01:13:37:41' OID '367CDD"
"7E59D14CF496B27D1B19ABF051' "
"AS OBJECT"
"("
" theString CLOB,"
" STATIC FUNCTION"
" ODCIAggregateInitialize(sctx IN OUT CLOBSTRINGAGGTYPE )"
" RETURN NUMBER,"
" MEMBER FUNCTION"
" ODCIAggregateIterate(self IN OUT CLOBSTRINGAGGTYPE, VALUE IN VARC"
"HAR2 )"
" RETURN NUMBER,"
" MEMBER FUNCTION"
" ODCIAggregateTerminate(self IN CLOBSTRINGAGGTYPE, returnValue OUT"
" CLOB, flags IN NUMBER)"
" RETURN NUMBER,"
" MEMBER FUNCTION"
" ODCIAggregateMerge(self IN OUT CLOBSTRINGAGGTYPE, ctx2 IN CLOBSTR"
"INGAGGTYPE)"
" RETURN NUMBER"
");"
"GRANT EXECUTE ON "CLOBSTRINGAGGTYPE" TO PUBLIC"
"GRANT DEBUG ON "CLOBSTRINGAGGTYPE" TO PUBLIC"
"CREATE OR REPLACE TYPE BODY CLOBSTRINGAGGTYPE"
I have written a Python script to demangle the output of imp show=y, but it cannot do so reliably because it doesn't understand the complete Oracle SQL syntax.
dbms_metadata
Oracle has a dbms_metadata package which supports introspection of the database contents.
It's relatively easy to write a SQL statement that retrieves the DDL for some, but not all, database objects. For example, the following statement retrieves CREATE TABLE statements, but not the corresponding privilege GRANTs on those tables:
select sub.*, dbms_metadata.get_ddl(sub.object_type, sub.object_name, sub.owner) sql
from
(
    select
        created,
        owner,
        object_name,
        decode(object_type,
            'PACKAGE',      'PACKAGE_SPEC',
            'PACKAGE BODY', 'PACKAGE_BODY',
            'TYPE BODY',    'TYPE_BODY',
            object_type
        ) object_type
    from all_objects
    where owner = :un
        --These objects are included with other object types.
        and object_type not in ('INDEX PARTITION','LOB','LOB PARTITION','TABLE PARTITION','DATABASE LINK')
        --Ignore system-generated types that support collection processing.
        and not (object_type like 'TYPE' and object_name like 'SYS_PLSQL_%')
) sub
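Grants can be pulled separately with get_dependent_ddl; a sketch (my addition, not part of the original query; note it raises ORA-31608 for objects that have no grants, which callers must handle):

select dbms_metadata.get_dependent_ddl('OBJECT_GRANT', table_name, owner) sql
from all_tables
where owner = :un;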
Attempting to fetch the complete set of objects quickly leads down a very complex rabbit hole. (See "Reverse engineering object DDL and finding object dependencies" for more gory details.)
What else?
Any advice? I'm at a total loss for a sane and maintainable way to perform this seemingly indispensable database programming task.
Combine DBMS_DATAPUMP, Oracle Copy (OCP), and a simple shell script to create a one-click solution.
Sample Schema to Export
--Create test user.
drop user test_user cascade;
create user test_user identified by test_user;
create table test_user.table1(a number);
create view test_user.view1 as select 1 a from dual;
create or replace procedure test_user.procedure1 is begin null; end;
/
Create Directory and Procedure
Run these steps as SYS. The definer's-rights procedure runs as SYS, so no roles or privileges need to be granted to any users.
--Create directory that will contain the SQL file.
create directory ddl_directory as 'C:\temp';
grant read on directory ddl_directory to jheller;

--Create procedure that can only export one hard-coded schema.
--This is based on René Nyffenegger's solution here:
--dba.stackexchange.com/questions/91149/how-to-generate-an-sql-file-with-dbms-datapump
create or replace procedure sys.generate_ddl authid definer is

    procedure create_export_file is
        datapump_job number;
        job_state    varchar2(20);
    begin
        datapump_job := dbms_datapump.open(
            operation   => 'EXPORT',
            job_mode    => 'SCHEMA',
            remote_link => null,
            job_name    => 'Export dump file',
            version     => 'LATEST');
        dbms_output.put_line('datapump_job: ' || datapump_job);

        dbms_datapump.add_file(
            handle    => datapump_job,
            filename  => 'export.dmp',
            directory => 'DDL_DIRECTORY',
            filetype  => dbms_datapump.ku$_file_type_dump_file);

        dbms_datapump.metadata_filter(
            handle => datapump_job,
            name   => 'SCHEMA_LIST',
            value  => '''TEST_USER''');

        dbms_datapump.start_job(
            handle       => datapump_job,
            skip_current => 0,
            abort_step   => 0);

        dbms_datapump.wait_for_job(datapump_job, job_state);
        dbms_output.put_line('Job state: ' || job_state);
        dbms_datapump.detach(datapump_job);
    end create_export_file;

    procedure create_sql_file is
        datapump_job number;
        job_state    varchar2(20);
    begin
        datapump_job := dbms_datapump.open(
            operation   => 'SQL_FILE',
            job_mode    => 'SCHEMA',
            remote_link => null,
            job_name    => 'Export SQL file',
            version     => 'LATEST');
        dbms_output.put_line('datapump_job: ' || datapump_job);

        dbms_datapump.add_file(
            handle    => datapump_job,
            filename  => 'export.dmp',
            directory => 'DDL_DIRECTORY',
            filetype  => dbms_datapump.ku$_file_type_dump_file);

        dbms_datapump.add_file(
            handle    => datapump_job,
            filename  => 'schema.sql',
            directory => 'DDL_DIRECTORY',
            filetype  => dbms_datapump.ku$_file_type_sql_file);

        dbms_datapump.start_job(
            handle       => datapump_job,
            skip_current => 0,
            abort_step   => 0);

        dbms_datapump.wait_for_job(datapump_job, job_state);
        dbms_output.put_line('Job state: ' || job_state);
        dbms_datapump.detach(datapump_job);
    end create_sql_file;

begin
    create_export_file;
    create_sql_file;
end;
/

--Grant to users.
grant execute on generate_ddl to jheller;
Setup OCP on the Client
Files in an Oracle directory can easily be transferred to a client PC using OCP, as described in this answer. The setup is a bit tricky: download the precise version of the program and the Instant Client, and unzip them into the same directory. I think I also had some problems with a VC++ redistributable or something the first time.
Commands to Run
Now the easy part - creating and moving the files is done in two simple steps:
execute sys.generate_ddl;
C:\Users\jonearles\Downloads\ocp-0.1-win32>ocp jheller/jheller@orcl12 DDL_DIRECTORY:schema.sql schema.sql
Sample Output
This script contains a lot of weird things: extra commands and options that hardly anybody will understand. That's probably one of the reasons this seemingly obvious feature is so difficult: with thousands of odd features, it's impossible to produce output that is both understandable and completely unambiguous.
CREATE TABLE "TEST_USER"."TABLE1"
( "A" NUMBER
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
TABLESPACE "USERS" ;
...
-- new object type path: SCHEMA_EXPORT/PROCEDURE/PROCEDURE
-- CONNECT TEST_USER
CREATE EDITIONABLE procedure procedure1 is begin null; end;
/
...
-- new object type path: SCHEMA_EXPORT/VIEW/VIEW
CREATE FORCE EDITIONABLE VIEW "TEST_USER"."VIEW1" ("A") AS
select 1 a from dual
;