What is the max value you can set ERRORS to for Oracle SQL*Loader? - oracle11g

Straightforward question:
The documentation for Oracle 10 states:
Oracle 10g SQL*Loader documentation
(Note: I linked to 10g since it was most convenient. I'll take an answer for Oracle 10 and/or Oracle 11; I suspect it'll be the same answer either way, so I added both tags.)
ERRORS (errors to allow)
Default: To see the default value for this parameter, invoke SQL*Loader without any parameters, as described in Invoking SQL*Loader.
ERRORS specifies the maximum number of insert errors to allow. If the number of errors exceeds the value specified for ERRORS, then SQL*Loader terminates the load. To permit no errors at all, set ERRORS=0. To specify that all errors be allowed, use a very high number.
(Emphasis mine).
So, since Oracle handles up to NUMBER(38), I tried:
ERRORS=999999999999999999999999999999999999
(36 digits)
and promptly got this error:
SQL*Loader-100: Syntax error on command-line
Trying a much smaller number:
ERRORS=999999
works fine.
So what's the maximum value you can use here?
I can't find it in the documentation, so I'm not sure if I'm looking in the wrong place or it's just not in there :)
And yes, I need a LARGE number: I'm loading a multi-million-row file, so I'd like to use the largest possible value to avoid any future issues.

IMHO sqlldr does not support NUMBER-sized values for this parameter. I think all numeric parameters in SQL*Loader are of an integer data type, and the common limit for a signed 32-bit integer is 2147483647.
sqlldr xxxx control=ctl.ctl errors=2147483648 -> exception
sqlldr xxxx control=ctl.ctl errors=2147483647 -> works fine
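For what it's worth, that cutoff matches the maximum value of a signed 32-bit integer (2^31 - 1), which you can confirm in any language; for example, this small Java check prints 2147483647:
public class MaxIntDemo {
    public static void main(String[] args) {
        // 2147483647 == 2^31 - 1, the largest value a signed 32-bit int can hold
        System.out.println(Integer.MAX_VALUE);
    }
}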

I solved the problem by setting
errors=-1
It worked fine on Oracle 11g.

Related

sqlite3_key missing from header in iOS 11 framework [duplicate]

I am working on SQLite file encryption. I have added the SQLCipher & crypto frameworks successfully to my project.
Now when I try to compile my application, on this line:
int rc = sqlite3_key(database, [key UTF8String], strlen([key UTF8String]));
it says: Implicit declaration of function 'sqlite3_key'
The "implicit declaration" above sounds to me like the function is defined but not declared. But where do I have to declare it?
While searching the Internet, I found an article saying that the SQLite Encryption Extension (SEE) is not publicly available; I would have to purchase it for around $2000.
SEE -> http://www.hwaci.com/sw/sqlite/see.html
So is this the only reason I am getting the implicit declaration warning and a false response during the SQLite encryption process?
If you are using SQLCipher, you need to define SQLITE_HAS_CODEC in your application's C Flags (for example, -DSQLITE_HAS_CODEC in Other C Flags). That's all.
Yes, that is the reason you are getting that compiler warning. The function sqlite3_key() is not defined in the version of libsqlite3 included with iOS. Adding in a function declaration isn't going to help: it would fix that compiler warning, but it would just mean you'll get a linker error, since the function isn't defined anywhere.
If you purchased SEE you could probably build your own copy of SQLite, embed it in your app, and just not use the system's libsqlite3. That would mean you'd have to say "yes" when the App Store submission process asks if your app includes encryption, meaning extra paperwork and time before you could submit the app. I'm not certain whether there's any clear indication of whether Apple would accept it even then; probably they would, but they've been known to surprise people.

OpenJPA 2.0: How to map an Oracle SYS.XMLTYPE column to String

I made the change in persistence.xml.
I also changed the column definition (columnDefinition="XDB.XMLType") for the XML fields.
I checked the OpenJPA site (http://openjpa.208410.n2.nabble.com/Oracle-XMLType-fetch-problems-td6208344.html) and IBM (http://www.ibm.com/support/knowledgecenter/SS7J6S_7.5.0/com.ibm.wsadapters.jca.jdbc.doc/env/doc/rjdb_problemsolutions.html).
My environment is OpenJPA 2.0 and WAS 7.
It's throwing this exception:
org.apache.openjpa.persistence.PersistenceException: ORA-06502: PL/SQL: numeric or value error: character string buffer too small
ORA-06512: at "SYS.XMLTYPE", line 169
Please suggest how I can handle SYS.XMLTYPE data without changing OpenJPA 2.0, as it's part of IBM WebSphere Application Server V7.0. I am migrating my application from DB2 to Oracle in the same environment.
Writing XML data can be tricky sometimes! Getting the correct drivers and things defined properly can have its challenges. I cannot say exactly what you need to do given the lack of info on your domain model and such, but let me give some general things to look for. First, there is an XML test in the OpenJPA test framework if you want to make reference to it. It can be seen publicly here:
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/jdbc/oracle/
Or, another test using an "XMLValueHandler" (likely this is beyond the scope of what you are looking for):
https://apache.googlesource.com/openjpa/+/refs/heads/trunk/openjpa-persistence-jdbc/src/test/java/org/apache/openjpa/persistence/xmlmapping/query/
Second, (stating the obvious) I assume you have a column in Oracle defined as "XMLTYPE". Also, I see you are using the SYS schema. I'm sure you are aware, but this is a system/admin schema; just for sanity's sake you might want to first get things running using a non-system/admin schema, so we don't get hung up on any issues with your OpenJPA client not having the correct permissions.
Next, you need the following definition:
@Lob @Basic
@Column(name = "ANXMLCOLUMN", columnDefinition="XMLCOLUMN XMLType")
private String anXMLString;
The @Lob I think will be necessary if you are using data greater than 4000 chars (this was mentioned in one of the comments). To start, I'd use a very small set of data (a couple of characters); once that works, then experiment with > 4k.
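To make that concrete, here is a minimal smoke-test sketch under those assumptions; the entity, table, and persistence-unit names below are made up for illustration, and it simply persists a tiny XML payload first, as suggested above.
import javax.persistence.*;

// Hypothetical entity mirroring the mapping above; table and column names are examples.
@Entity
class XmlRow {
    @Id
    private long id;

    @Lob @Basic
    @Column(name = "ANXMLCOLUMN", columnDefinition = "XMLCOLUMN XMLType")
    private String anXMLString;

    void setId(long id) { this.id = id; }
    void setAnXMLString(String s) { this.anXMLString = s; }
}

public class XmlSmokeTest {
    public static void main(String[] args) {
        // "xmlPU" is a hypothetical persistence-unit name from persistence.xml.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("xmlPU");
        EntityManager em = emf.createEntityManager();
        em.getTransaction().begin();

        XmlRow row = new XmlRow();
        row.setId(1L);
        row.setAnXMLString("<a/>");   // a couple of characters first; try > 4k once this works
        em.persist(row);

        em.getTransaction().commit();
        em.close();
        emf.close();
    }
}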
Next, make sure to use the correct JDBC driver. The last time I experimented with an XMLType I used the Oracle JDBC 11.2.0.2 driver.
Finally, you might need to use the property "openjpa.jdbc.DBDictionary" with value "oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)". Again, experiment with this AND look at the OpenJPA documentation on these properties to determine if they are necessary in your scenario. I think supportsSetClob=true will only be necessary for older versions (pre-2.2.x) of OpenJPA. You might also need to use the property "openjpa.jdbc.SchemaFactory" with value "native". I would suggest you first try without either of these two properties; if that doesn't help, then experiment with them. I know this is vague, but I don't know what your DDL or domain model looks like, so I have to keep it vague.
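If it comes to that, here is a rough sketch of setting those two properties programmatically instead of in persistence.xml; the persistence-unit name "xmlPU" is hypothetical, and whether you need each property depends on your OpenJPA version and schema setup.
import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class OpenJpaDictionarySetup {
    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<String, Object>();
        // Oracle dictionary tweaks discussed above; experiment to see which you actually need.
        props.put("openjpa.jdbc.DBDictionary",
                  "oracle(supportsSetClob=true,maxEmbeddedClobSize=-1)");
        // Optional: only add this if the default schema handling gets in the way.
        props.put("openjpa.jdbc.SchemaFactory", "native");

        EntityManagerFactory emf = Persistence.createEntityManagerFactory("xmlPU", props);
        // ... use emf as usual ...
        emf.close();
    }
}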
Thanks,
Heath Thomann

Ada83 Constraint Error not Present in Watch Window

I'm looking at a constraint error when running this code. In the debugger it halts on the 2nd line (Menu_Text...). I put the code on the RHS of the assignment into the watch window and I see no problem. It evaluates exactly as it should.
for I in 1..This_Info_Ptr.Child_Menu_Length loop
   Menu_Text  := This_Info_Ptr.Child_Menu_Text_Ptr.all(I-1);
   Menu_State := This_Info_Ptr.Child_Menu_States_Ptr.all(I-1);
   ...
The error is when I is 1. I have confirmed that this code works in the watch window:
This_Info_Ptr.Child_Menu_Text_Ptr.all(I-1)
Child_Menu_Text_Ptr and Child_Menu_States_Ptr point to arrays (of strings and enums).
How should I debug this in general? I can't see anything wrong with the code. However, my familiarity with Ada access types is limited. The ..._Ptr variables are access types.
I am using GNAT.
Assuming the arrays being indexed are all of the same dimension and have the same index type, iterating over them should be done using the 'First & 'Last or 'Range attributes.
It is likely that the hand-coded control of length values has a bug.
Using the built-in functionality is safer and more reliable.
You aren't showing enough source text to tell us for sure what is happening. Is This_Info_Ptr.Child_Menu_Text_Ptr.all a function or an array? What is its specification?
If it is an array, you should remember that Ada allows arrays to be indexed by any discrete type, and that arrays in Ada always know their own indexing bounds.

How to suppress "-mmax value exceeded. Automatically increasing from old value to new value." <5409>?

In the Progress KB, it's mentioned:
In 10.0B02 and above, the client session startup parameter -noincrwarn was reintroduced
to allow the selective suppression of the above four warning messages ONLY. Since the
execution of the 4GL statement: SESSION:SUPPRESS-WARNINGS = YES. suppresses ALL warning
messages during the session.
Where and how could I set this startup parameter -noincrwarn to suppress this warning message?
"SESSION:SUPPRESS-WARNINGS = YES." doesn't do much of anything useful. Or at least it didn't the last time I tested it.
The -mmax warning is harmless. It is a "soft" limit that is dynamically allocated and expanded as needed. You can ignore it. Or if the .lg file entries really bother you, you can simply increase it to a reasonable value. I routinely set it to 8192 for character sessions, 32768 for Windows. The default, as JensD says, is ludicrously low.
Startup parameters, such as -noincrwarn, can be set in a number of ways:
1) Via the command line. If your application starts via a script, it will eventually invoke Progress via "pro", "mpro", progress, prowin32, proapsv or some other executable (you can potentially link your own objects and create custom executables...). The command line that invokes Progress will have a number of parameters. You could add it there. Windows example:
@echo off
set DLC=\Progress\OpenEdge
%DLC%\bin\prowin32 -db mydb -p start.p -noincrwarn
(On windows it is also common for the shortcut properties to have the command line listed.)
2) In a "pf" file. "PF" files are parameter files. They contain a list of parameters in a text file. This makes it easy to share and manage parameters between many scripts. To use a parameter file you need at least one -pf filename.pf parameter. Unix example:
#!/bin/sh
DLC=/usr/dlc
export DLC
${DLC}/bin/_progres -db mydb -pf mypf.pf
Where "mypf.pf" might contain:
# mypf.pf
-p start.p
-noincrwarn
There is a global .pf file in the Progress install directory called startup.pf. You could also add it to that.
3) In an "ini file". Sort of like the pf file but more complicated. Indicated by the -ininame startup parameter. Can also be influenced by registry keys.
Why not remove -mmax or try another value for it? If you're moving from an old version of Progress, it might be that -mmax is set very low.
The Maximum Memory (-mmax) client session parameter specifies the maximum amount of memory allocated for r-code segments, in kilobytes.
Source: http://knowledgebase.progress.com/articles/Article/P11351?popup=true
Large memory consumption might depend on complicated business logic (things like very large and/or deeply nested procedures), so you might consider looking into that.
However, a much easier fix would be to increase the value. The default is 3096, meaning each client "only" gets 3 MB for this. Not a very large amount by today's standards.
If you really only want to suppress the message, set -noincrwarn in your client-side startup script (or the corresponding .pf file/startup.pf).
Hosting a WPF element (Windows Presentation Foundation) in an OpenEdge application can cause the application to crash if any message covers the window. That is also the case with this message.
In order to suppress any messages, including message 5409, I followed the article "HOW TO SUPPRESS WARNING MESSAGES (5407),(5408),(5409),(5410) FROM DISPLAYING ON CLIENT SCREENS."
I used SESSION:SUPPRESS-WARNINGS = YES. as the first line in the starting procedure of the application, with the expected results.
Using -noincrwarn as the session startup parameter had no effect in OpenEdge 11.4.
Suppress OpenEdge messages:
http://knowledgebase.progress.com/articles/Article/P79795?popup=true
.NET-related error for an OpenEdge-WPF hybrid application, "Invisible or disabled control cannot be activated":
https://social.msdn.microsoft.com/Forums/windows/en-US/e8cf6431-2a59-4335-8b36-fc8f35083823/invisible-or-disabled-control-cannot-be-activated?forum=winforms

Attempting to deploy a binary to a location where a different binary is already stored

When I am publishing my page from Tridion 2009, I am getting the error below:
Destination with name 'FTP=[Host=servername, Location=\RET, Password=******, Port=21, UserName=retftp]' reported the following failure:
A processing error occurred processing a transport package Attempting to deploy a binary [Binary id=tcm:553-974947-16 variantId= sg= path=/Images/image_thumbnail01.jpg] to a location where a different binary is already stored Existing binary: tcd:pub[553]/binarymeta[974950]
Below is my code snippet:
Component bigImageComp = th.GetComponentValue("bigimage", imageMetaFields);
string bigImagefileName = string.Empty;
string bigImagePath = string.Empty;
bigImagefileName = bigImageComp.BinaryContent.Filename;
bigImagePath = m_Engine.AddBinary(bigImageComp.Id, TcmUri.UriNull, null, bigImageComp.BinaryContent.GetByteArray(), Path.GetFileName(bigImagefileName));
imageBigNode.InnerText = bigImagePath;
Please suggest
Chris Summers addressed this on his blog. Have a read of the article - http://www.urbancherry.net/blogengine/post/2010/02/09/Unique-binary-filenames-for-SDL-Tridion-Multimedia-Components.aspx
Generally, in Tridion Content Delivery we can only keep one version of a Component. To get multiple "versions" of an MMC, we have to publish the MMC as variants. This way we can produce as many variants as we need via templating.
You can refer to the article below for more detail:
http://yatb.mitza.net/2012/03/publishing-images-as-variants.html#!/2012/03/publishing-images-as-variants.html
When adding binaries, you must ensure that the file and its metadata are unique. If one of the values, e.g. the filename, appears to be the same but the rest of the metadata does not match, then deployment will fail.
In the given example (as Nuno points out) the binary 910 is trying to deploy over binary 703. The filename is the same but the binaries are identified as not being the same (in this case a different ID from the same publication). For this example you will need to rename one of the binaries (either rename the file itself or change the path) and everything will be fine.
Another scenario can be that the same image is used from two different templates and the template ID is used as the variant ID. If this is the case, it is the same image BUT the variant ID check fails, so to avoid overwriting the same image the deployer fails it.
Often unpublishing can help; however, the image is only removed when ALL references to it are removed. So if it is used from more than one place, there are still open references.
This is logical protection from the deployer. You would not want the wrong image replacing another and either upsetting the layout or potentially changing the content to another meaning (think of an advertising banner).
This is the actual cause of and reason for the above problem (something I found on a forum).
