When does an Oracle Package Specification become INVALID - plsql

As far as I know, a package body can be replaced and recompiled without affecting the specification. A package specification declares procedures and functions rather than defining them, so it seems it should not be able to reference objects that could make the specification INVALID.
I know that a package specification can reference objects when it uses stand-alone subprograms and other packages to define its variables. In that case, changing the referenced objects may invalidate the specification.
Is there any other way an Oracle package specification can depend on (reference) objects and become INVALID, whether because the referenced objects change or for some other reason?

A specification can declare variables and types. If a variable is declared as table.column%TYPE, the package specification can be affected by any DDL operation on the table used to define it. The same applies when a cursor is defined in the package header.
I would also be careful with synonym swapping, both for tables referenced by variable definitions and for types used in the header.
Privileges are another scenario: if the package owner loses some grants (say, due to a table being recreated), the package spec can also go invalid.
I hope this makes sense.
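For illustration, here is a minimal sketch of the %TYPE and cursor scenarios (the table and package names are invented). The spec itself, not just the body, ends up depending on the table:

-- Hypothetical table and package, purely to show the dependency.
CREATE TABLE t_orders (order_id NUMBER, status VARCHAR2(10));

CREATE OR REPLACE PACKAGE order_pkg AS
  g_last_status t_orders.status%TYPE;   -- the spec now depends on T_ORDERS
  CURSOR c_open_orders IS
    SELECT order_id FROM t_orders WHERE status = 'OPEN';
END order_pkg;
/

-- DDL on the referenced column invalidates the specification:
ALTER TABLE t_orders MODIFY (status VARCHAR2(30));

SELECT object_name, object_type, status
  FROM user_objects
 WHERE object_name = 'ORDER_PKG';   -- the PACKAGE row now shows INVALID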

Related

Is there a "correct" way to use exported data internally in an R package?

I know that exported data (accessible to users) belongs in the data/ folder and that internal data (data used internally by package functions) belongs in R/sysdata.rda. However, what about data I wish to both export to the user AND have available internally for use by package functions?
Currently, presumably due to the order in which objects/data are added to the NAMESPACE, my exported data is not available during devtools::check() and I am getting a NOTE: no visible binding for global variable 'data_x'.
There are probably a half dozen ways to get around this issue, many of which strike me as rather hacky, so I was wondering whether there is a "correct" way to have BOTH external and internal data (and avoid the NOTE from R CMD check).
So far I see these options:
Write an internal function that loads the data and use that function everywhere internally
Use ':::' to access the data, which seems odd and triggers a different warning
Have a copy of data_x in BOTH data/ and R/sysdata.rda (super hacky)
Get over it and ignore the NOTE
Any suggestions greatly appreciated,
Thx.

Is it appropriate to rely on Meteor.settings in package autotest?

My package relies on Meteor.settings: a certain property has to contain certain values. Is it appropriate to check for this in the package's auto test? Depending on the settings, the package may or may not throw Meteor.Errors and may return different results.

Visualization of xsd Dependencies?

I have a bunch of XSD Files which I did not write myself. The files sometimes import each other:
<xs:import namespace="http://www.mysite.com/xmlns/xXX-YYYY/V" schemaLocation="http://www.mysite.com/xmlns/xXX-YYYY/V/schema_A.xsd"/>
and I would like to get an overview of the dependencies without having to read through all of them.
The URI specified by schemaLocation does not exist; instead, a catalog.xml file is used to resolve the schema locations.
http://de.wikipedia.org/wiki/XML_Catalogs
Can anybody recommend a tool that can visualize the dependencies of my schemas by also processing the information given in the catalog.xml file?
Thanks
Mischa
To follow up on my comment...
I am not aware of any tool that takes OASIS catalog files into account. Have a look at this response and see if it supports what you need (and your platform).
Strictly speaking, there are a number of issues with dependency diagrams, which is why such a question should be qualified with why you want one.
Some think such a diagram truly shows the dependencies between XSD files; it does not: it may show what the author thinks the dependencies are, but not what a processor actually agrees to. schemaLocation is just a hint that processors may or may not use: "may not" if they are instructed otherwise (well-known XSDs could be cached internally, through catalog entries or other proprietary "catalogs"), or because the processor decides there is no need to load an external reference when there is no use for it anyway (which can happen in some corner cases).
A diagram built from explicit schema locations is definitely easier to produce. It only shows what the author intended; that doesn't mean it is the "real" one (content may be pulled in indirectly, which makes the whole XSD set valid while individual XSDs, opened independently of the set, would be invalid).
Building a diagram where dangling or non-existent schemaLocation values are overridden through a catalog is much harder, due to the multitude of ways to structure the content and to the resolution mechanism. It would have the same shortcoming as the one above (except that now the author is whoever wrote the catalog file, rather than whoever authored the XSDs).
The "true" dependency can be built by traversing a schema set already loaded and compiled. Even then, you would still need to define criteria regarding dependencies due to substitutable components (elements in substitution groups or derived types, through the use of the xsi:type attribute). That is even harder.
Take a look at this tool: DocFlex/XML XSDDoc.
It is an XML schema documentation generator.
It doesn't visualize xsd dependencies, but it does work with XML catalogs.
The overview of each XSD file lists all the other XSD files it references (i.e. imports, includes, or redefines), and there is also the opposite list: the schemas that reference the given one. So you can use it to figure out which XSD files depend on which; at the least, that is easier than reading raw XSD files.
As an example, here is documentation generated with that tool: XML Schemas for DITA 1.1. It was generated essentially from two files:
http://docs.oasis-open.org/dita/v1.1/OS/schema/ditaarch.xsd
http://docs.oasis-open.org/dita/v1.1/OS/schema/catalog.xml
ditaarch.xsd is the schema driver that pulls all other schemas (25 in total); catalog.xml is the XML catalog, via which all file references are resolved.
The schemaLocation attributes in those schemas themselves specify nothing but opaque URIs.

Is it necessary to export base method extensions in an R package? Documentation implications?

In principle, I could keep these extensions unexported, which would also allow me to skip adding redundant documentation for these already well-documented methods, while still passing R CMD check myPackage without any reported WARNINGs.
What are the drawbacks, if any? Is it perhaps even recommended to keep extensions of basic methods compartmentalized within the package that defines them? Or, conversely, will this make it more difficult for another package to depend on mine if certain core method extensions are not exported?
For example, if I don't document and don't export the following:
setMethod("show", "myPackageSpecialClass", function(object){ show(NA) })
I'm trying to flesh-out some of these finer details of best-practices with namespaces and base method extensions.
If you don't export the methods, then users (either at the command line or trying to use your classes and methods in their own package via imports) won't be able to use them -- your class will be displayed with the show,ANY-method.
You are not documenting the generic show, but rather the method appropriate for your class, show,myPackageSpecialClass-method. If in your NAMESPACE you
import(methods)
exportMethods(show)
(note that there is no way to export just some methods on the generic show) and provide no documentation, R CMD check will complain
* checking for missing documentation entries ... WARNING
Undocumented S4 methods:
generic 'show' and siglist 'myPackageSpecialClass'
All user-level objects in a package (including S4 classes and methods)
should have documentation entries.
See the chapter 'Writing R documentation files' in the 'Writing R
Extensions' manual.
Your example (I know it was not meant to be a serious show method :) ) is a good illustration of why methods might be documented: to explain to the user why, every time they try to display the object, they get NA when they were expecting some kind of description of the object.
One approach to documentation is to group methods with the class into a single Rd file, myPackageSpecialClass-class.Rd. This file would contain an alias
\alias{show,myPackageSpecialClass-method}
and a Usage
\S4method{show}{myPackageSpecialClass}(object)
This works so long as no fancy multiple dispatch is used, i.e., it is clear to which class a method applies. If the user asks for help with ?show, they are always pointed toward the methods package help page. For help on your methods / class, they'd need to ask for that specific type of help. There are several ways of doing this but my favorite is
class ? myPackageSpecialClass
method ? "show,myPackageSpecialClass"
This will not be intuitive to the average user; the (class|method) ? ... formulation is not widely used, and the "generic,signature" specification requires a good understanding of how S4 works, probably including a visit to selectMethod(show, "myPackageSpecialClass") (because the method might be implemented on a class that myPackageSpecialClass inherits from) or showMethods(class="myPackageSpecialClass", where=getNamespace("myPackage")) (because you're wondering what you can do with myPackageSpecialClass).

Where to hold PL/SQL constants?

Where do you normally store your PL/SQL constants? On the package-body level? In the specification? I've also seen some people holding constants in a specialized package just for constants. What are best practices in this area?
Thanks.
One downside to having constants in a package body or spec is that when you recompile the package, any user session that had the package state in its PGA gets ORA-04068. For this reason, in one large development environment we adopted the convention of a separate spec-only package holding the constants (and package globals, if any) for each package. We then imposed a rule that these spec-only packages could only be referenced by their "owning" package, which we enforced at code review. Not a perfect solution, but it worked for us at the time.
For the same reason, I'd never recommend one constants-package-to-rule-them-all, because every time someone needs to introduce a new constant or modify an existing one, all user sessions get ORA-04068.
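By way of illustration, a minimal sketch of that convention (all names invented): the constants live in a spec-only package, and the owning package is the only one allowed to reference it.

CREATE OR REPLACE PACKAGE billing_const AS   -- spec only, deliberately no body
  c_vat_rate CONSTANT NUMBER      := 0.20;
  c_currency CONSTANT VARCHAR2(3) := 'EUR';
END billing_const;
/

CREATE OR REPLACE PACKAGE billing AS
  FUNCTION gross_amount (p_net NUMBER) RETURN NUMBER;
END billing;
/

CREATE OR REPLACE PACKAGE BODY billing AS
  FUNCTION gross_amount (p_net NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_net * (1 + billing_const.c_vat_rate);   -- only BILLING may reference BILLING_CONST
  END gross_amount;
END billing;
/

The package that gets recompiled during routine development (billing) holds no state of its own, so replacing its body does not discard anything from users' sessions.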
In many cases, you want to keep them in the specification so other packages can use them, especially as parameters when calling functions and procedures from your package.
Only when you want to keep them private to the package should you put them in the body.
Having a package just for constants might be a good idea for those constants that are not related to any piece of code in particular, but relevant for the whole schema.
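For instance (names invented), a constant in the spec is public while one in the body stays private:

CREATE OR REPLACE PACKAGE invoice_pkg AS
  c_max_lines CONSTANT PLS_INTEGER := 999;   -- public: callers can pass invoice_pkg.c_max_lines
  PROCEDURE add_lines (p_count IN PLS_INTEGER);
END invoice_pkg;
/

CREATE OR REPLACE PACKAGE BODY invoice_pkg AS
  c_separator CONSTANT VARCHAR2(1) := '|';   -- private: visible only inside this body
  PROCEDURE add_lines (p_count IN PLS_INTEGER) IS
  BEGIN
    NULL;   -- implementation elided
  END add_lines;
END invoice_pkg;
/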
For our application, all constants are in a table. A simple function is used to extract them. No problem with recompilation, ORA-04068, ...
Best option in my opinion: store the "constant" in a table and create a generic function to get the values. No ORA-04068 😀
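A rough sketch of that table-driven approach (table and function names invented):

CREATE TABLE app_constants (
  const_name  VARCHAR2(64) PRIMARY KEY,
  const_value VARCHAR2(4000) NOT NULL
);

INSERT INTO app_constants (const_name, const_value) VALUES ('DEFAULT_LANGUAGE', 'en');

CREATE OR REPLACE FUNCTION get_constant (p_name IN app_constants.const_name%TYPE)
  RETURN app_constants.const_value%TYPE
IS
  l_value app_constants.const_value%TYPE;
BEGIN
  SELECT const_value INTO l_value FROM app_constants WHERE const_name = p_name;
  RETURN l_value;
END get_constant;
/

-- Changing a value is an UPDATE rather than a recompilation, so no ORA-04068:
SELECT get_constant('DEFAULT_LANGUAGE') FROM dual;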
I would prefer the constants to be, by default, in the package body unless you use the constant as a parameter value for one of your public package procedures/functions or as a return value for your functions.
The problem with putting your constants in the package specification is that if you need to change a constant's type, other packages that use the constant might fail, simply because it was sitting there for the taking. If the constant was private in the first place, you don't need to perform an impact analysis for every change.
If you need to store constants such as a default language, I would encapsulate them in functions like get_default_language and keep the constants themselves private.
I'd have a concern about having "one package to rule the constants," because package state (constants, variables, and code) gets cached in the user's PGA at the first invocation of the package. A package constant, if public, should be scoped to the package and used only by the methods of that package.
A constant whose scope spans packages should be in a code table with a description, joined in as required. ("Constants aren't and variables don't," after all.) Having a key-value-pair table of "constants" makes them all public and makes changing them dynamically possible.
What if, instead of a constant in the package, we used a parameterless function named the same way as the constant? Then we could add a new function/constant to the package, change a function's return value, or even remove a function from the package freely, without getting ORA-04068 after recompiling it.
Inside the function's implementation in the package body we could still use a constant as the return value, though that is not strictly necessary, since callers obviously can't change the return value anyway. We could also apply performance features in the function signature, such as DETERMINISTIC or perhaps even RESULT_CACHE.
As a positive side effect, we gain the ability to use the constant in SQL queries.
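Roughly, the idea looks like this (names invented); the value lives only in the body, and the function is callable from SQL:

CREATE OR REPLACE PACKAGE app_defaults AS
  FUNCTION default_language RETURN VARCHAR2 DETERMINISTIC;
END app_defaults;
/

CREATE OR REPLACE PACKAGE BODY app_defaults AS
  FUNCTION default_language RETURN VARCHAR2 DETERMINISTIC IS
  BEGIN
    RETURN 'en';   -- change this and recompile; callers' code is unaffected
  END default_language;
END app_defaults;
/

-- Usable directly in SQL, which a plain package constant is not:
SELECT app_defaults.default_language FROM dual;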
