Where to hold PL/SQL constants?

Where do you normally store your PL/SQL constants? At the package-body level? In the specification? I've also seen some people keep their constants in a dedicated constants-only package. What are the best practices in this area?
Thanks.

One downside to having constants in a package body or spec is that when you recompile the package, any user sessions that had the package state in the PGA would get ORA-04068. For this reason, in one large development environment we adopted the convention of having a separate spec-only package to hold the constants (and package globals if any) for each package. We'd then impose a rule saying that these spec-only packages were only allowed to be referenced by their "owning" package - which we enforced at code review. Not a perfect solution, but it worked for us at the time.
For the same reason, I'd never recommend a one-constants-package-to-rule-them-all, because every time someone introduces a new constant or modifies an existing one, the package has to be recompiled and all user sessions get ORA-04068.
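A minimal sketch of that convention, assuming a hypothetical order_pkg with a spec-only companion order_pkg_const (the package names and the orders table are made up for illustration):

CREATE OR REPLACE PACKAGE order_pkg_const AS
  -- Spec-only package: no body, only constants (and package globals, if any).
  -- Recompiling order_pkg's body later does not touch this unit.
  c_status_open   CONSTANT VARCHAR2(10) := 'OPEN';
  c_status_closed CONSTANT VARCHAR2(10) := 'CLOSED';
END order_pkg_const;
/

CREATE OR REPLACE PACKAGE order_pkg AS
  PROCEDURE close_order(p_order_id IN NUMBER);
END order_pkg;
/

CREATE OR REPLACE PACKAGE BODY order_pkg AS
  PROCEDURE close_order(p_order_id IN NUMBER) IS
  BEGIN
    UPDATE orders                                    -- hypothetical table
       SET status = order_pkg_const.c_status_closed  -- only the "owning" package references the companion
     WHERE order_id = p_order_id;
  END close_order;
END order_pkg;
/

Note that the "only the owning package may reference it" rule is purely a convention the database does not enforce, which is why the answer above mentions enforcing it at code review.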

In many cases, you want to keep them in the specification so other packages can use them, especially as parameters when calling functions and procedures from your package.
Only when you want to keep them private to the package should you put them in the body.
Having a package just for constants might be a good idea for constants that are not related to any piece of code in particular but are relevant to the whole schema.
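A sketch of that public/private split (invoice_pkg and the invoices table are hypothetical names):

CREATE OR REPLACE PACKAGE invoice_pkg AS
  -- Public constant: other code can reference it, e.g. as a parameter default.
  c_default_currency CONSTANT VARCHAR2(3) := 'EUR';

  PROCEDURE create_invoice(p_amount   IN NUMBER,
                           p_currency IN VARCHAR2 DEFAULT c_default_currency);
END invoice_pkg;
/

CREATE OR REPLACE PACKAGE BODY invoice_pkg AS
  -- Private constant: visible only inside this body.
  c_rounding_scale CONSTANT PLS_INTEGER := 2;

  PROCEDURE create_invoice(p_amount   IN NUMBER,
                           p_currency IN VARCHAR2) IS
  BEGIN
    INSERT INTO invoices (amount, currency)          -- hypothetical table
    VALUES (ROUND(p_amount, c_rounding_scale), p_currency);
  END create_invoice;
END invoice_pkg;
/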

For our application, all constants are in a table. A simple function is used to extract them. No problem with recompilation, ORA-04068, ...

Best option in my opinion. Store the "constants" in a table and create a generic function to get the values. No ORA-04068 😀
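A rough sketch of the table-driven approach (app_constants and get_constant are hypothetical names):

CREATE TABLE app_constants (
  name        VARCHAR2(100) PRIMARY KEY,
  value       VARCHAR2(4000) NOT NULL,
  description VARCHAR2(400)
);

INSERT INTO app_constants (name, value, description)
VALUES ('DEFAULT_LANGUAGE', 'en', 'Fallback language code');

CREATE OR REPLACE FUNCTION get_constant (p_name IN app_constants.name%TYPE)
  RETURN app_constants.value%TYPE
IS
  l_value app_constants.value%TYPE;
BEGIN
  -- Raises NO_DATA_FOUND if the name is unknown; callers may prefer to handle that.
  SELECT value INTO l_value FROM app_constants WHERE name = p_name;
  RETURN l_value;
END get_constant;
/

Changing a value is then plain DML on the table, so nothing is recompiled and no session hits ORA-04068.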

I would prefer the constants to be, by default, in the package body unless you use the constant as a parameter value for one of your public package procedures/functions or as a return value for your functions.
The problem with putting your constants in the package specification is that if you need to change a constant's type, other packages that use the constant might break simply because it was there to be used. If the constant is private in the first place, you don't need to perform an impact analysis for every change.
If you need to store constants like the default language or similar, then I would encapsulate them in functions like get_default_language etc. and keep the constants themselves private.
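A small sketch of that encapsulation, keeping the constant private behind a getter (app_defaults is a hypothetical package name):

CREATE OR REPLACE PACKAGE app_defaults AS
  FUNCTION get_default_language RETURN VARCHAR2;
END app_defaults;
/

CREATE OR REPLACE PACKAGE BODY app_defaults AS
  -- Private constant: callers depend only on the function's signature,
  -- so changing the value (or its type, within reason) needs no impact
  -- analysis outside this body.
  c_default_language CONSTANT VARCHAR2(2) := 'en';

  FUNCTION get_default_language RETURN VARCHAR2 IS
  BEGIN
    RETURN c_default_language;
  END get_default_language;
END app_defaults;
/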

I'd be concerned about having "one package to rule the constants" because package state -- constants, variables, and code -- gets cached in the user's PGA at the first invocation of any public variable or subprogram of the package. A package constant, if public, should be scoped to the package and used only by the subprograms of that package.
A constant whose scope spans packages should be in a code table with a description, joined in as required. Constants aren't and variables don't, after all. Having a key-value-pair table of "constants" makes them all public and makes changing them dynamically possible.

What if we use a parameterless function, named the same way as the constant, instead of declaring the constant in the package? In that case we can add a new function/constant to the package, change a function's return value, or even remove a function from the package freely, and we won't get ORA-04068 after recompiling it.
Inside the function's implementation in the package body we can still use a constant as the return value, though that isn't strictly necessary because the returned value obviously can't be changed from outside anyway. We can also add performance options to the function, such as DETERMINISTIC or maybe even RESULT_CACHE.
As a positive side effect, we gain the ability to use the constant in SQL queries.
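A sketch of the idea with a hypothetical tax_pkg. Because the package declares no package-level variables or constants, it holds no session state, and the function can be referenced from SQL, which a package constant cannot:

CREATE OR REPLACE PACKAGE tax_pkg AS
  -- Reads like a constant at the call site: tax_pkg.standard_rate
  FUNCTION standard_rate RETURN NUMBER;
END tax_pkg;
/

CREATE OR REPLACE PACKAGE BODY tax_pkg AS
  FUNCTION standard_rate RETURN NUMBER IS
  BEGIN
    RETURN 0.21;  -- change the value here and recompile; no package state, so no ORA-04068 expected
  END standard_rate;
END tax_pkg;
/

-- Usable directly in SQL, unlike a package constant:
SELECT price * tax_pkg.standard_rate AS tax_amount
  FROM order_items;  -- hypothetical table

The DETERMINISTIC / RESULT_CACHE suggestion from the answer above can be layered on top of this for performance; the sketch leaves those options out for brevity.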

Related

Why is Golang's http.DefaultClient exported?

I'm curious why the DefaultClient variable in Go's http package is exported. As the variable's comment suggests, it is used internally by functions such as http.Get(). If that is the case, why does it have to be exported?
When I first started coding HTTP-related stuff, I always thought I could just use DefaultClient to send all my requests, until I found out it is not a function that returns a new Client every time, but more like a static pointer variable that always points to the same empty Client instance... so if I decide to modify its properties, all subsequent calls made with DefaultClient will be affected.
Again, what's the purpose of exporting this variable?
I can't speak definitively (as I didn't design the package), but if it's exported, it can be modified. This means you can set the timeout, etc. (which isn't specified by default).
Convenience functions like http.Get are just wrappers around DefaultClient.Get, so it makes sense to be able to modify the timeout of DefaultClient beforehand while keeping all the other defaults, such as the ability to reuse connections/transports.

Is there a "correct" way to use exported data internally in an R package?

I know that exported data (accessible to users) belongs in the data/ folder and that internal data (data used internally by package functions) belongs in R/sysdata.rda. However, what about data I wish to both export to the user AND have available internally for use by package functions?
Currently, presumably due to the order in which objects/data are added to the NAMESPACE, my exported data is not available during devtools::check() and I am getting a NOTE: no visible binding for global variable 'data_x'.
There are probably a half dozen ways to get around this issue, many of which appear to me as rather hacky, so I was wondering if there was a "correct" way to have BOTH external and internal data (and avoid the NOTE from R CMD check).
So far I see these options:
Write an internal function that calls the data and use that everywhere internally.
Use ':::' to access the data, which seems odd and invokes a different warning.
Have a copy of data_x in BOTH data/ and R/sysdata.rda (super hacky).
Get over it and ignore the NOTE.
Any suggestions greatly appreciated,
Thx.

When does an Oracle Package Specification become INVALID

As far as I know, a package body can be replaced and recompiled without affecting the specification. A package specification declares procedures and functions rather than defining them, so those declarations cannot reference objects in a way that would make the specification INVALID.
I know that a package specification can reference objects when it uses stand-alone subprograms and other packages to define its variables. In this case, changing the referenced objects may invalidate the specification.
Is there any other way an Oracle package specification can depend on (reference) objects and become INVALID, whether because the referenced objects change or in some other way?
In the specification you can declare variables or types. If a variable is declared as table.column%TYPE, the package specification can be affected by any DDL operation on the table used to define that variable. The same applies when a cursor is declared in the package header.
I would also be careful with synonym swapping, both for a table referenced by a variable definition and for a type used in the header.
The next scenario is privileges. If the package owner loses some grants (say, because a table is recreated), the package spec can also go invalid.
I hope what I'm writing makes sense.
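To illustrate the %TYPE and cursor cases above with a hypothetical employees table and emp_api package:

CREATE TABLE employees (
  employee_id NUMBER PRIMARY KEY,
  last_name   VARCHAR2(50)
);

CREATE OR REPLACE PACKAGE emp_api AS
  g_last_name employees.last_name%TYPE;             -- spec now depends on EMPLOYEES
  CURSOR c_employees IS
    SELECT employee_id, last_name FROM employees;   -- so does the cursor
END emp_api;
/

-- DDL on the referenced column invalidates the specification ...
ALTER TABLE employees MODIFY (last_name VARCHAR2(100));

-- ... which you can check in the data dictionary (exact behaviour can vary
-- with fine-grained dependency tracking in 11g and later):
SELECT object_name, object_type, status
  FROM user_objects
 WHERE object_name = 'EMP_API';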

Hiding Undocumented Functions in a Package - Use of .function_name?

I've got some functions I need to make available in a package, and I don't want to export them or write documentation for them. I'd just hide them inside another function, but they need to be available to several functions, so doing so becomes a scoping and maintenance issue. What is the right way to do this? By that I mean: do they need special names, do they go somewhere other than the R subdirectory, can I put them in a single file, etc.? I've checked out the manuals, and what I'm after is like the .internals concept in the core, but I don't see any instructions about how to do this generally. I thought I had seen something about this before but cannot locate it just now. Thx.
My solution is to remove unnecessary functions from the NAMESPACE and call an internal function as NAME-OF-PACKAGE:::NAME-OF-INTERNAL-FUNCTION. For example, if your package name is RP and the name of the internal function is IFC, then it would be RP:::IFC(). Notice that if you use :: (two colons) you can only call functions that are listed in the NAMESPACE, and when you use ::: (three colons) you can call all functions, internal as well as exported.
After asking on R-help, here is the answer. @Dwin is correct: do not export the internal functions (so fix up your export instructions in NAMESPACE - don't use exportPattern, but rather name the functions explicitly using export). You can call them whatever you want; there is no special naming convention. You do not have to write Rd files for them if you don't export them.

Is it necessary to export base method extensions in an R package? Documentation implications?

In principle, I could keep these extensions unexported, which would also allow me to avoid adding redundant documentation for these already well-documented methods while still passing R CMD check myPackage without any reported WARNINGs.
What are some of the drawbacks, if any? Is it perhaps recommended to keep extensions of base methods compartmentalized within the package that defines them? Alternatively, will this make it more difficult for another package to depend on mine if certain core method extensions are not exported?
For example, if I don't document and don't export the following:
setMethod("show", "myPackageSpecialClass", function(object){ show(NA) })
I'm trying to flesh-out some of these finer details of best-practices with namespaces and base method extensions.
If you don't export the methods, then users (either at the command line or trying to use your classes and methods in their own package via imports) won't be able to use them -- your class will be displayed with the show,ANY-method.
You are not documenting the generic show, but rather the method appropriate for your class, show,myPackageSpecialClass-method. If in your NAMESPACE you
import(methods)
exportMethods(show)
(note that there is no way to export just some methods on the generic show) and provide no documentation, R CMD check will complain
* checking for missing documentation entries ... WARNING
Undocumented S4 methods:
generic 'show' and siglist 'myPackageSpecialClass'
All user-level objects in a package (including S4 classes and methods)
should have documentation entries.
See the chapter 'Writing R documentation files' in the 'Writing R
Extensions' manual.
Your example (I know it was not meant to be a serious show method :) ) is a good illustration of why methods might be documented -- explaining to the user why, every time they try to display the object, they get NA when they were expecting some kind of description of the object.
One approach to documentation is to group methods with the class into a single Rd file, myPackageSpecialClass-class.Rd. This file would contain an alias
\alias{show,myPackageSpecialClass-method}
and a Usage
\S4method{show}{myPackageSpecialClass}(object)
This works so long as no fancy multiple dispatch is used, i.e., it is clear to which class a method applies. If the user asks for help with ?show, they are always pointed toward the methods package help page. For help on your methods / class, they'd need to ask for that specific type of help. There are several ways of doing this but my favorite is
class ? myPackageSpecialClass
method ? "show,myPackageSpecialClass"
This will not be intuitive to the average user; the (class|method) ? ... formulation is not widely used, and the specification of "generic,signature" requires a lot of understanding about how S4 works, including probably a visit to selectMethod(show, "myPackageSpecialClass") (because the method might be implemented on a class that myPackageSpecialClass inherits from) or showMethods(class="myPackageSpecialClass", where=getNamespace("myPackage")) (because you're wondering what you can do with myPackageSpecialClass).
