OpenJPA 2.0: How to map an Oracle sys.XMLTYPE column to String - oracle11g

I made the change in persistence.xml.
I also changed the column definition (columnDefinition="XDB.XMLType") for the XML fields.
I checked the OpenJPA site (http://openjpa.208410.n2.nabble.com/Oracle-XMLType-fetch-problems-td6208344.html) and the IBM one (http://www.ibm.com/support/knowledgecenter/SS7J6S_7.5.0/com.ibm.wsadapters.jca.jdbc.doc/env/doc/rjdb_problemsolutions.html).
My environment is OpenJPA 2.0 and WAS 7.
It is throwing this exception:
org.apache.openjpa.persistence.PersistenceException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.XMLTYPE", line 169
Please suggest how I can handle sys.XMLTYPE data without changing OpenJPA 2.0, as it is part of IBM WebSphere Application Server V7.0. I am migrating my application from DB2 to Oracle in the same environment.

Writing XML data can be tricky sometimes! Getting the correct drivers and things defined properly can have its challenges. I cannot say exactly what you need to do given the lack of info on your domain model and such, but let me give some general things to look for. First, there is an XML test in the OpenJPA test framework if you want to make reference to it. It can be seen publicly here:
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/jdbc/oracle/
Or, another test using an "XMLValueHandler" (likely this is beyond the scope of what you are looking for):
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/persistence/xmlmapping/query/
Second, (stating the obvious) I assume you have a column in Oracle defined as "XMLTYPE". Also, I see you are using the SYS schema. I'm sure you are aware, but this is a system/admin schema; just for sanity's sake you might want to first get things running using a non-system/admin schema, so we don't get hung up on any issues with your OpenJPA client not having the correct permissions.
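For reference, a minimal sketch of such a table in a non-system schema (the schema and table names here are placeholders, not taken from your model):
CREATE TABLE MYAPP.XML_DOCS (
    ID          NUMBER(10) PRIMARY KEY,
    ANXMLCOLUMN XMLTYPE
);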
Next, you need the following definition:
@Lob @Basic
@Column(name = "ANXMLCOLUMN", columnDefinition="XMLCOLUMN XMLType")
private String anXMLString;
The @Lob will, I think, be necessary if you are using data greater than 4000 chars (this was mentioned in one of the comments). To start, I'd use a very small set of data (a couple of characters); once that works, then experiment with > 4k.
Next, make sure to use the correct JDBC driver. The last time I experimented with an XMLType I used the Oracle JDBC 11.2.0.2 driver.
Finally, you might need to use the property "openjpa.jdbc.DBDictionary" with the value "oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)". Again, experiment with this AND look at the OpenJPA documentation on these properties to determine if they are necessary in your scenario. I think supportsSetClob=true will only be necessary for older versions (pre-2.2.x) of OpenJPA. You might also need to use the property "openjpa.jdbc.SchemaFactory" with the value "native". I would suggest you first try without either of these two properties. If that doesn't help, then experiment with them. I know this is vague, but I don't know what your DDL or domain model looks like, so I have to keep it vague.
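If you do end up needing them, a minimal sketch of how those two properties might be declared in persistence.xml (the persistence-unit name here is a placeholder):
<persistence-unit name="myUnit">
  <properties>
    <property name="openjpa.jdbc.DBDictionary"
              value="oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)"/>
    <property name="openjpa.jdbc.SchemaFactory" value="native"/>
  </properties>
</persistence-unit>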
Thanks,
Heath Thomann

Related

How to run liquibase changelogSyncSQL or changelogSync up to a tag and/or labels?

I'm adding liquibase to an existing project where tables and data already exist. I would like to know how I might limit the scope of changelogSync[SQL] to a subset of available changes.
Background
I've run liquibase generateChangeLog to capture the current state and placed this into say src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__init01.yaml.
I've also added another changeset to cover some new requirements in a new file. Let's call it src/main/resources/db/changelog/changes/V2021.04.13.00.00.00__new-feature.yaml.
I've added a main changelog file src/main/resources/db/changelog/db.changelog-master.yaml with the following contents:
databaseChangeLog:
  - includeAll:
      path: changes
      relativeToChangelogFile: true
I now want to ensure that when I run liquibase changelogSync[SQL] against a particular version of the db that the scope is limited to the first changelog init01, thereby allowing from that point on a liquibase update or updateToTag et al, to continue with changes following init01.
I'm surprised to see that the changelogSync[SQL] commands don't seem to offer any way (that I can see from the docs) to do this.
Besides printing the SQL and manually changing it, is there something I've missed? Any suggested approaches welcome. Thanks!
What about changelogSyncToTagSQL? Wouldn't it cover your needs?
Or maybe you could try changelogSyncSQL with the additional "labels" and/or "context" parameters?
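For illustration only, a hedged sketch of what that could look like on the command line (exact command and flag spellings vary across Liquibase versions; the init01 tag is an assumption that you would first add with a tagDatabase changeset at the end of the init changelog):
liquibase --changelog-file=src/main/resources/db/changelog/db.changelog-master.yaml changelogSyncToTagSQL --tag=init01 > sync-init01.sql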
As it stands, the only solution I've found is to generate the SQL and then manually edit its contents to filter out the change sets which don't correspond to your current schema.
Then you can apply the SQL to the db.

What is the max value you can set ERRORS to for Oracle SQL*loader?

Straightforward question...
The documentation for Oracle 10 states:
Oracle 10g sql*loader documentation
(Note, I linked to 10g since it was most convenient, I'll take an answer for Oracle 10 and/or Oracle 11, either way is fine - I suspect it'll be the same answer though - so I added both tags).
ERRORS (errors to allow)
Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in Invoking SQL*Loader.
ERRORS specifies the maximum number of insert errors to allow. If the number of errors exceeds the value specified for ERRORS, then SQL*Loader terminates the load. To permit no errors at all, set ERRORS=0. To specify that all errors be allowed, use a very high number.
(Emphasis mine).
So, since Oracle handles up to NUMBER(38) .. I tried:
ERRORS=999999999999999999999999999999999999
(36 digits)
and promptly got this error:
SQL*Loader-100: Syntax error on command-line
Trying a much smaller number:
ERRORS=999999
works fine.
So what's the maximum value you can use here?
I don't find it in the documentation, so not sure if I'm looking in the wrong place, or it's just not in there :)
And yeah, I need a LARGE number, I'm loading a multi-million row file, so I'd like to use the largest possible to avoid any future issues.
IMHO sqlldr does not support numbers that large. I think all numeric parameters in SQL*Loader are of integer data type, and the common limit for a signed 32-bit integer is 2147483647.
sqlldr xxxx control=ctl.ctl errors=2147483648 -> exception
sqlldr xxxx control=ctl.ctl errors=2147483647 -> works fine
I solved the problem by setting
errors=-1
which worked fine on Oracle 11g.

File Exists and Exception Handling in U-Sql

Two questions:
How do I check whether a file exists before EXTRACT?
We have a scenario where a new input file is generated every day for catalog data. We need to merge the new input with the d-1 file, and before the merge we want to make sure that the new input file exists at the source location.
Does U-SQL support a try...catch block?
Regarding checking if a file exists: we recently released a compile-time IF statement that indeed can check for partition existence (and other objects such as files and tables are on the roadmap).
Once that feature is released (still one or two refreshes out at the time of this answer) it may look something like this (syntax subject to change):
IF FILE.EXISTS("/mydir/myfile.csv") THEN
    @data = EXTRACT ... FROM "/mydir/myfile.csv" USING ...;
    ...
    @jobstate = SELECT * FROM (VALUES("job completed")) AS T(status);
ELSE
    @jobstate = SELECT * FROM (VALUES("file not ready. Job not executed.")) AS T(status);
END;
OUTPUT @jobstate TO "/jobs/myjobstate.csv" USING Outputters.Csv();
You will be able to provide the name as a parameter as well. Please let me know if that will work for your scenario.
Another alternative is to use the file set syntax, especially if you want to use a dynamic value to determine the process. That would simply create an empty rowset:
@data = EXTRACT ..., date DateTime
        FROM "/mydir/{date:yyyy}/{date:MM}/{date:dd}/data.csv"
        USING ...;
@data = SELECT * FROM @data WHERE date == DateTime.Now.AddDays(-1);
... // continue processing @data, which is empty if yesterday's file is not yet there
Having said that, you may want to check whether your job orchestration framework (such as ADF) may be a better place to check for existence before submitting the job in the first place.
As to the try catch block: U-SQL itself is a script-level optimizable, declarative language where the plan gets generated and optimized at runtime over the whole script. Thus providing a dynamic TRY-CATCH is currently not available, since it would severely impact the ability to optimize the script (e.g., you cannot move predicates or column pruning outside of a try-catch block). Also TRY/CATCH can lead to some very hard to understand and debug code, especially if it is used to mimic procedural workflows in an otherwise declarative environment.
However, you can use try/catch inside your C# functions without problems if you need to catch C# runtime errors.
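For instance, a minimal sketch of what such a code-behind helper could look like (the namespace, class, and method names here are made up for illustration):
using System;

namespace MyUdfs
{
    public static class SafeParse
    {
        // Returns null instead of throwing when the value is not in a valid integer format.
        public static int? ToNullableInt(string value)
        {
            try
            {
                return int.Parse(value);
            }
            catch (FormatException)
            {
                return null;
            }
        }
    }
}
In the U-SQL script it could then be called like any other expression, e.g. SELECT MyUdfs.SafeParse.ToNullableInt(SomeColumn) AS Parsed FROM @input; (where @input and SomeColumn are placeholders).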
FILE.EXISTS() always returns True when executed locally. However, it works when executing against Azure Data Lake.
I tried the MSDN example, and the following returns True, True:
DECLARE @filepath_good = "/Samples/Data/SearchLog.tsv";
DECLARE @filepath_bad = "/Samples/Data/zzz.tsv";
@result =
    SELECT FILE.EXISTS(@filepath_good) AS exists_good,
           FILE.EXISTS(@filepath_bad) AS exists_bad
    FROM (VALUES (1)) AS T(dummy);
OUTPUT @result
TO "/Output/FileExists.txt"
USING Outputters.Csv();
I have Microsoft Azure Data Lake Tools for Visual Studio version 2.2.5000.0

Attempting to deploy a binary to a location where a different binary is already stored

When I am publishing my page from Tridion 2009, I am getting the error below:
Destination with name 'FTP=[Host=servername, Location=\RET, Password=******, Port=21, UserName=retftp]' reported the following failure:
A processing error occurred processing a transport package Attempting to deploy a binary [Binary id=tcm:553-974947-16 variantId= sg= path=/Images/image_thumbnail01.jpg] to a location where a different binary is already stored Existing binary: tcd:pub[553]/binarymeta[974950]
Below is my code snippet
Component bigImageComp = th.GetComponentValue("bigimage", imageMetaFields);
string bigImagefileName = string.Empty;
string bigImagePath = string.Empty;
bigImagefileName = bigImageComp.BinaryContent.Filename;
bigImagePath = m_Engine.AddBinary(bigImageComp.Id, TcmUri.UriNull, null, bigImageComp.BinaryContent.GetByteArray(), Path.GetFileName(bigImagefileName));
imageBigNode.InnerText = bigImagePath;
Please suggest
Chris Summers addressed this on his blog. Have a read of the article - http://www.urbancherry.net/blogengine/post/2010/02/09/Unique-binary-filenames-for-SDL-Tridion-Multimedia-Components.aspx
Generally in Tridion Content Delivery we can only keep one version of a Component. To get multiple "versions" of an MMC we have to publish the MMC as variants. This way we can produce as many variants as we need via templating.
You can refer to the article below for more detail:
http://yatb.mitza.net/2012/03/publishing-images-as-variants.html#!/2012/03/publishing-images-as-variants.html
When adding binaries you must ensure that the file and its metadata are unique. If one of the values (e.g. the filename) appears to be the same but the rest of the metadata does not match, then deployment will fail.
In the given example (as Nuno points out) binary 910 is trying to deploy over binary 703. The filename is the same, but the binary is identified as not being the same (in this case a different ID from the same publication). For this example you will need to rename one of the binaries (either the file itself or change the path) and everything will be fine.
Other scenarios can be that the same image is used from two different templates and the template ID is used as the variant ID. If this is the case, it is the same image BUT the variant ID check fails, so to avoid overwriting the same image the deployer fails it.
Often unpublishing can help; however, the image is only removed when ALL references to it are removed. So if it is used from more than one place, there are more open references.
This is logical protection from the deployer. You would not want the wrong image replacing another and either upsetting the layout or potentially changing the content to another meaning (think advertising banner).
This is the actual cause of, and reason for, the above problem (something I got from a forum).

Schema qualified tables with SQLAlchemy, SQLite and Postgresql?

I have a Pylons project and a SQLAlchemy model that implements schema qualified tables:
class Hockey(Base):
    __tablename__ = "hockey"
    __table_args__ = {'schema':'winter'}
    hockey_id = sa.Column(sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True)
    baseball_id = sa.Column(sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id'))
This code works great with Postgresql but fails on table and foreign key names when using SQLite (due to SQLite's lack of schema support):
sqlalchemy.exc.OperationalError: (OperationalError) unknown database "winter" 'PRAGMA "winter".table_info("hockey")' ()
I'd like to continue using SQLite for dev and testing.
Is there a way to have this fail gracefully on SQLite?
I'd like to continue using SQLite for dev and testing.
Is there a way to have this fail gracefully on SQLite?
It's hard to know where to start with that kind of question. So . . .
Stop it. Just stop it.
There are some developers who don't have the luxury of developing on their target platform. Their life is a hard one--moving code (and sometimes compilers) from one environment to the other, debugging twice (sometimes having to debug remotely on the target platform), gradually coming to an awareness that the gnawing in their gut is actually the start of an ulcer.
Install PostgreSQL.
When you can use the same database environment for development, testing, and deployment, you should.
Not to mention the QA team. Why on earth are they testing stuff they're not going to ship? If you're deploying on PostgreSQL, assure the quality of your work on PostgreSQL.
Seriously.
I'm not sure if this works with foreign keys, but someone could try SQLAlchemy's Multi-Tenancy Schema Translation for Table objects. It worked for me, but I have used custom primaryjoin and secondaryjoin expressions in combination with composite primary keys.
The schema translation map can be passed directly to the engine creator:
...
if dialect == "sqlite":
    url = lambda: "sqlite:///:memory:"
    execution_options = {"schema_translate_map": {"winter": None, "summer": None}}
else:
    url = lambda: f"postgresql://{user}:{password}@{host}:{port}/{name}"
    execution_options = None
engine = create_engine(url(), execution_options=execution_options)
...
Here is the doc for create_engine. There is another question on SO which might be related in that regard.
But one might get colliding table names if all schema names are mapped to None.
I'm just a beginner myself, and I haven't used Pylons, but...
I notice that you are combining the table and the associated class together. How about if you separate them?
import sqlalchemy as sa
meta = sa.MetaData('sqlite:///tutorial.sqlite')
schema = None
hockey_table = sa.Table('hockey', meta,
    sa.Column('score_id', sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True),
    sa.Column('baseball_id', sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id')),
    schema = schema,
)
meta.create_all()
Then you could create a separate
class Hockey(object):
    ...
and
mapper(Hockey, hockey_table)
Then just set schema = None above everywhere if you are using sqlite, and to the value(s) you want otherwise.
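For illustration, a minimal sketch of that idea applied back to the declarative class from the question (the DB_DIALECT environment variable is just an assumption; any configuration flag you already have would do):
import os
import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

# Use None on SQLite (no schema support); the real schema name otherwise.
SCHEMA = None if os.environ.get("DB_DIALECT") == "sqlite" else "winter"

Base = declarative_base()

class Hockey(Base):
    __tablename__ = "hockey"
    __table_args__ = {"schema": SCHEMA}
    hockey_id = sa.Column(sa.Integer, primary_key=True)
The ForeignKey target string ('summer.baseball.baseball_id') would need the same conditional treatment.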
You don't have a working example, so the hockey_table example above isn't a working one either. However, as other people have pointed out, trying to maintain portability across databases is in the end a losing game. I'd add a +1 to the people suggesting you just use PostgreSQL everywhere.
HTH, Regards.
I know this is a 10+ year old question, but I ran into the same problem recently: Postgres in production and sqlite in development.
The solution was to register an event listener for when the engine calls the "connect" method.
@sqlalchemy.event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
    dbapi_connection.execute('ATTACH "your_data_base_name.db" AS "schema_name"')
Using the ATTACH statement only once will not work, because it affects only a single connection. This is why we need the event listener: to issue the ATTACH statement on every connection.
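For completeness, a hedged sketch of how that listener fits together with the engine (the file and schema names are placeholders; SQLite creates the attached files on demand):
import sqlalchemy

engine = sqlalchemy.create_engine("sqlite:///main.db")

@sqlalchemy.event.listens_for(engine, "connect")
def attach_schemas(dbapi_connection, connection_record):
    # Each attached file acts as a named schema, so "winter.hockey" can resolve.
    dbapi_connection.execute('ATTACH DATABASE "winter.db" AS "winter"')
    dbapi_connection.execute('ATTACH DATABASE "summer.db" AS "summer"')

# Any metadata.create_all(engine) or ORM query issued after this point runs on
# connections that already have both schemas attached.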
