Will an implicit cursor ever fail to close in PL/SQL? - plsql

With PL/SQL will there ever be a situation, say in the case of an exception, where an implicit cursor will fail to close?
I understand that an implicit cursor is supposed to close itself after use, but I just want to know if there's ever a situation where this might not be the case, and if it is possible, what kind of remediation would be a good idea.

When a COMMIT or ROLLBACK fails, the cursor will not close automatically.
It is recommended to follow the pattern below when using cursors:
-- before using the cursor
IF your_cursor%ISOPEN THEN
  CLOSE your_cursor;
END IF;
OPEN your_cursor;
-- after using the cursor
CLOSE your_cursor;
-- in the exception handler
IF your_cursor%ISOPEN THEN
  CLOSE your_cursor;
END IF;

I'm going to assume that you're talking about implicit cursors, i.e. a SELECT INTO... or a DML statement executed within a PL/SQL block as opposed to a FOR loop.
As with explicit cursors, implicit cursors have attributes; you can use SQL%NOTFOUND (instead of cursor_name%NOTFOUND), for instance.
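For instance, a minimal sketch (the table t and column n are made up) showing the attributes after a DML statement:

BEGIN
  UPDATE t SET n = n + 1 WHERE 1 = 0;  -- matches no rows

  IF SQL%NOTFOUND THEN
    DBMS_OUTPUT.PUT_LINE('No rows updated; SQL%ROWCOUNT = ' || SQL%ROWCOUNT);
  END IF;

  -- Per the documentation quoted below, this is already FALSE here:
  IF NOT SQL%ISOPEN THEN
    DBMS_OUTPUT.PUT_LINE('The implicit cursor has closed itself');
  END IF;
END;
/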
To quote from the 11.1 documentation on implicit cursor attributes for SQL%ISOPEN:
Always returns FALSE, because the database closes the SQL cursor
automatically after executing its associated SQL statement.
This should, I believe, be taken to mean that the cursors will be closed after execution, whether an exception is raised or not. After all, an execution halted due to an exception is still an executed SQL statement.
That explanatory reason has been removed from the 11.2 documentation:
SQL%ISOPEN always has the value FALSE.
It appears to have been added to the chapter on implicit cursors instead:
SQL%ISOPEN always returns FALSE, because an implicit cursor always
closes after its associated statement runs.
Whether the cursor closes or not doesn't really matter, because of the answer to your final question, "what kind of remediation would be a good idea". To quote from the same chapter:
You cannot control an implicit cursor, but you can get information
from its attributes.
So, no. There's no remediation possible; it is impossible to explicitly close an implicitly opened cursor.
The one thing you might want to test is whether you can raise ORA-01000: maximum open cursors exceeded using solely implicit cursors, as this is the worst consequence I can think of.
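A rough sketch of such a probe (adjust the loop count so it comfortably exceeds your open_cursors setting):

DECLARE
  n PLS_INTEGER;
BEGIN
  -- Each iteration executes one implicit cursor. If implicit cursors
  -- could leak, this would hit ORA-01000 long before the loop finishes.
  FOR i IN 1 .. 100000 LOOP
    SELECT COUNT(*) INTO n FROM dual;
  END LOOP;
END;
/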

Let's define an "implicit cursor" as a SELECT statement executed in a FOR loop.
Looking at it from a practical point of view - regardless of whether or not it's possible for an implicit cursor to be left open, the important question is "What can you do about it?". To the best of my knowledge, the answer to that question is "Nothing". You don't have a cursor variable to work with, there's no way (that I know of) to access the implicit cursor, and thus you really can't affect it.
There are two things you can do. The first is to completely avoid the use of implicit cursors. Only use explicit cursors, go through all the steps of opening, fetching, closing, etc. This gives you the maximum level of control. If you're into this sort of thing, go for it.
On the other hand, you can use implicit cursors and just not worry. I'm good with not worrying. :-) Seriously, though, implicit cursors are IMO hugely better than explicit cursors. There's less code to write, and therefore less code to screw up. The system can, in certain circumstances, optimize the use of implicit cursors without requiring you to write a pile of extra code. The code you write is cleaner and more easily understood - not a concern if you're working in a one man shop where you write and maintain all the code, but out here in Corporateville we often have to write code others will maintain and vice versa, and it's considered polite to hand over the cleanest, clearest code we possibly can. There are times (such as when handing a cursor variable back to a caller from outside of Oracle) when explicit cursors are necessary, but for most code I'm using implicit FOR-LOOP cursors - and LOVING it! (And extra bonus points for those who can recall where that comes from :-)
Share and enjoy.

Related

Why do we need to compile a Progress 4GL program?

I would like to know why we need to compile a Progress 4GL program. What is really happening behind the scenes? Why do we get a .r file after compiling the program? And when we check the syntax, if it is correct we get a message box saying 'Syntax is correct'; how does the compiler find the errors and show those messages? Any explanations welcome and appreciated.
Benefits of compiled r-code include:
Syntax checking
Faster execution (r-code executes faster than running from source)
Security (r-code is not "human readable" and tampering with it will likely be noticed)
Licensing (r-code runtime licenses are much less expensive)
For "how its finding the errors and showing the messages" -- at a high level it is like any compiler. It evaluates the provided source against a syntax tree and lets you know when you violate the rules. Compiler design and construction is a fairly advanced topic that probably isn't going to fit into a simple SO question -- but if you had something more specific that could stand on its own as a question someone might be able to help.
The short answer is that when you compile, you're translating your program into a language the machine understands. You're asking two different questions here, so let me give you a simple answer to the first: you don't NEED to compile if you're the only one using the program, for example. But in order to have your program run optimized (since it's already at the machine-language level) and to guarantee no one is messing with your logic, we compile the code and usually don't allow regular users to access the source code.
The second question, how does the syntax checker work, I believe it would be better for you to Google and choose some articles to read about compilers. They're complex, but in a nutshell what they do is take what Progress expects as full, operational commands, and compare to what you do. For example, if you do a
Find first customer where customer.active = yes no-error.
Progress will check whether customer is a table, whether customer.active is a field in that table, whether it is of the logical type (since you are testing it against yes), and whether your whole condition can be translated into one single true-or-false Boolean value. It goes on to check whether you specified a lock (defaulting to shared if you haven't, as in my example, which is a no-no, by the way), what happens if there are multiple records (since I said first, get just the first one), and finally what happens if it fails. If you check the find statement, there are more options to customize it, and the compiler will simply compare your use of the statement to what Progress can accept for it, and collect all the errors when it can't. That's why sometimes compilers will give you generic messages. Since they don't know what you're trying to do, all they can do is tell you what's basically wrong with what you wrote.
Hope this helps you understand.

Implementing nullable references for manual memory management

The, uh, "legacy" BlitzPlus programming language has an interesting feature designed to make manual memory management "safe" for newbie programmers, compared to the dangling pointer problems they might potentially encounter in a language that expects them to manage raw pointers like C. When an object is deleted, all references to that object become references to null:
Local a.Foo = New Foo
Local b.Foo = a
Delete a
; at this point, b = null
How might one go about implementing such a system?
The original implementation uses hidden automatic reference counting. Delete doesn't actually free an object; it just sets an "identity" field to null. So in the above example, the variable b still points to the same object it did before, but that object has been tagged as equal to null for the purposes of comparison. The memory itself is not released until the object's hidden reference count reaches zero. (If this strikes you as an odd decision, it probably was: the language's successor ditched explicit Delete, just used the reference counting system, and called it a GC.)
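In C++ terms, that hidden machinery might look something like the sketch below (all names are mine, and it is single-threaded, matching the original):

struct Object {
    Object* identity = this;  // points to itself while alive, null after Delete
    int     refcount = 0;     // the hidden reference count
    // ... user-declared fields would follow ...
};

// What a BlitzPlus object variable would compile to.
struct Ref {
    Object* target = nullptr;

    void assign(Object* o) {              // e.g. "b = a" or "a = New Foo"
        if (o) ++o->refcount;             // retain the new target first,
        release();                        // then drop the old one
        target = o;
    }
    Object* deref() const {               // reads go through the identity
        return target ? target->identity : nullptr;  // field: null after Delete
    }
    void release() {
        if (target && --target->refcount == 0)
            delete target;                // freed only when refcount hits zero
        target = nullptr;
    }
    ~Ref() { release(); }
};

void DeleteStatement(Ref& r) {            // what "Delete a" compiles to
    if (Object* o = r.deref())
        o->identity = nullptr;            // every ref now compares equal to null
}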
There are a few things about this design that strike me as a bit off:
Conventional wisdom holds that refcounting is slow. It also wastes a whole word of memory (the horror!).
As far as I can see, refcounting is incompatible, or only poorly compatible, with multithreading (I think the logic of the developer was that "multithreading will never catch on").
The manual Delete operator doesn't actually manually manage memory anyway! (Although it arguably provides slightly more control than leaving it entirely to the refcounter, since it can break cycles and eagerly decrement the counts of owned objects.)
Anyway, BlitzPlus is now open-source, and as a result I want to try my hand at implementing it, since that's now allowed (for the fun of the challenge). If this were a brand new language design, the obvious answer would be "make it garbage collected", but it isn't: the existing language has Delete, so an implementation has to work with that.
Is there any obvious way to implement this that doesn't have the drawbacks and/or smells of the above, i.e. perhaps without refcounts at all? (I mean I could have a full tracing GC in the background, but that seems silly. Even in the context of implementing a dead language.) Is the original strategy actually as bad as it looks? Is there a way to get "true" manual management - i.e. free-on-Delete - while still nulling all references?

What is the exact reason behind the execution of the finally block in ASP.NET? I know it's by design; in the finally block I should do resource cleanup

I know it's by design, and that in the finally block I should do resource cleanup - that's why the finally block is always executed no matter what the exception handling code does.
But WHY it executes is my question. This was asked of my friend in an interview, and even I got confused after discussing it with him. Please clarify; thanks in advance.
"WHY" here could be summarised as "because that is what the specification says; that is why it was designed, specified, implemented, tested, and supported: because they wanted something that would always execute, no matter what the exception handling code is". It is a bit like asking "WHY does execution flow to the else block (if one) if the condition in an if test fails?"
The uses of finally include:
resource cleanup (Dispose() being an important one, but not the only one)
logging / tracing / profiling the fact that we finished (whether successful or not)
making the state consistent again (for example, resetting an isRunning flag)
Anecdotally, I make much more use of finally than I do catch. It is pretty common that I want something to happen while leaving, but often with exceptions the best thing to do is to let them bubble upwards. The only thing I usually need to be sure to do during an exception is to clean up any mess I made - which I need to do either way - so I might as well do that using a combination of finally and using (which is really just a wrapper around finally anyway).
It's almost always around resource cleanup - or sometimes logical cleanup of getting back to a reasonable state.
If I open a file handle or a database connection (or whatever) then when I leave that piece of code I want the handle to be closed regardless of how I leave it - whether it's "normally" or via an exception.
finally blocks are simply there to give that "execute this no matter what" [1] behaviour, which can often be useful.
[1] Well, within reason. Not if the process dies abruptly, for example - or if the power cable is kicked out.

Why does zumero_sync need to be called multiple times?

According to the documentation for zumero_sync:
If a large amount of information needs to be pulled from the server,
this function may need to be called more than once.
In my Android app that uses Zumero that's no problem; I just keep calling zumero_sync until the return value doesn't start with "0;".
However, now I'm trying to write an admin script that also syncs with my server dbfiles. I'd like to use the sqlite3 shell, and have the script pass the SQL to execute via command line arguments. I need to call zumero_sync in a loop (which SQLite doesn't support) to make sure the db is fully synced. If I had to, I could invoke sqlite3 in a loop (reading its output, looking for "0;"), or even write a C++ app to call the SQLite/Zumero functions natively. But it certainly would be easier if a single zumero_sync was enough.
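For what it's worth, the C++ route would just be a short loop over the SQLite C API; something like this sketch (the zumero_sync() argument list below is a placeholder - substitute whatever your admin script actually passes - and it assumes the Zumero extension is available on the connection):

#include <sqlite3.h>
#include <cstdio>
#include <cstring>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("local.db", &db) != SQLITE_OK) return 1;

    // Placeholder arguments -- use the same ones your script passes.
    const char* sql = "SELECT zumero_sync('main', 'https://myserver', 'mydbfile');";

    bool again = true;
    while (again) {
        again = false;
        sqlite3_stmt* stmt = nullptr;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) break;
        if (sqlite3_step(stmt) == SQLITE_ROW) {
            const char* r = (const char*)sqlite3_column_text(stmt, 0);
            std::printf("zumero_sync -> %s\n", r ? r : "(null)");
            again = r && std::strncmp(r, "0;", 2) == 0;  // "0;..." = partial pull
        }
        sqlite3_finalize(stmt);
    }
    sqlite3_close(db);
    return 0;
}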
I guess my real question is: could zumero_sync be changed so it completes the sync before returning? If there are cases where the existing behavior is more useful, maybe there could be a parameter for specifying which mode to use?
I see two basic questions here:
(1) Why does zumero_sync() work the way it does?
(2) Can it work differently?
I'll answer (2) first, since it's easier: Yes, it could work differently. Rather, we could (and probably will soon, now that you've brought it up) implement an additional function, named something like zumero_sync_complete(), which performs [the guts of] zumero_sync() in a loop and returns after the sync is complete.
We didn't implement zumero_sync_complete() because it doesn't add much value. It's a simple loop, so you can darn well write it yourself. :-)
Er, except in scripting environments which don't support loops. Like the sqlite3 shell.
Answer to (1):
The Zumero sync protocol is designed to give the server the flexibility to return partial results if it wants to do so. And for the sake of reducing load on the server (and increasing its scalability) it often does want to do exactly that.
Given that, one reason to expose this to the client is to increase the client's flexibility as well. As long as we're making multiple roundtrips, we might as well give the client an opportunity to do something (like, maybe, update a progress bar) in between them.
Another thing a client might want to do in between loop iterations is handle an error.
Or, in the case of a multithreaded client, it might want to deal with changes that happened on the client while the sync is going on.
Which raises the question of how locking should be managed. Do we hold the SQLite write lock during the entire loop? Or only when absolutely necessary?
Bottom line: A robust app would probably want to implement the loop itself so that it can make its own decisions and retain full control over things.
But, as you observe, the sqlite3 shell doesn't have loops. And it's not an app. And it doesn't have threads. Or progress bars. So it's a use case where a simpler-and-less-powerful form of zumero_sync() would make sense.

API design: is "fault tolerance" a good thing?

I've consolidated many of the useful answers and come up with my own answer below.
For example, I am writing an API Foo which needs explicit initialization and termination. (This should be language-agnostic, but I'm using C++ here.)
class Foo
{
public:
    static void InitLibrary(int someMagicInputRequiredAtRuntime);
    static void TermLibrary(int someOtherInput);
};
For the sake of argument, our library doesn't care about multi-threading, reentrancy or whatnot. Let's suppose our Init function should only be called once; calling it again with any other input would wreak havoc.
What's the best way to communicate this to my caller? I can think of two ways:
Inside InitLibrary, I assert some static variable which will blame my caller for init'ing twice.
Inside InitLibrary, I check some static variable and silently abort if my lib has already been initialized. (Both options are sketched below.)
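In code, the two options might look roughly like this (just a sketch, with a hypothetical initialized flag):

#include <cassert>

static bool initialized = false;

// Method #1: blame the caller loudly.
void InitLibrary_Assert(int someMagicInputRequiredAtRuntime)
{
    assert(!initialized && "InitLibrary called twice");
    initialized = true;
    // ... real initialization using someMagicInputRequiredAtRuntime ...
}

// Method #2: silently ignore the second call.
void InitLibrary_Silent(int someMagicInputRequiredAtRuntime)
{
    if (initialized) return;  // the caller never learns this was a no-op
    initialized = true;
    // ... real initialization using someMagicInputRequiredAtRuntime ...
}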
Method #1 obviously is explicit, while method #2 makes it more user-friendly. I am thinking that method #2 probably has the disadvantage that my caller wouldn't be aware of the fact that InitLibrary shouldn't be called twice.
What would be the pros/cons of each approach? Is there a cleverer way to subvert all these?
Edit
I know that the example here is very contrived. As #daemon pointed out, I should initialize myself and not bother the caller. Practically, however, there are places where I need more information to properly initialize myself (note the name of my variable, someMagicInputRequiredAtRuntime). This is not restricted to initialization/termination; the same dilemma exists elsewhere, whenever I have to choose between being quote-unquote "fault tolerant" and failing loudly.
I would definitely go for approach 1, along with an easy-to-understand exception and good documentation that explains why this fails. This will force the caller to be aware that this can happen, and the calling class can easily wrap the call in a try-catch statement if needed.
Failing silently, on the other hand, will lead your users to believe that the second call was successful (no error message, no exception) and thus they will expect that the new values are set. So when they try to do something else with Foo, they don't get the expected results. And it's darn near impossible to figure out why if they don't have access to your source code.
Serenity Prayer (modified for interfaces)
SA, grant me the assertions
to accept the things devs cannot change
the code to except the things they can,
and the conditionals to detect the difference
If the fault is in the environment, then you should try and make your code deal with it. If it is something that the developer can prevent by fixing their code, it should generate an exception.
A good approach would be to have a factory that creates an initialized library object (this would require you to wrap your library in a class). Multiple create-calls to the factory would create different objects. This way, the initialize-method would not be part of the public interface of the library, and the factory would manage initialization.
If there can be only one instance of the library active, make the factory check for existing instances. This would effectively make your library-object a singleton.
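A sketch of that idea (illustrative names, not a drop-in implementation; clearing the guard flag on destruction is left out for brevity):

#include <memory>
#include <stdexcept>

class Library {
    explicit Library(int magic) { /* ... initialize using magic ... */ }
    friend class LibraryFactory;
public:
    ~Library() { /* ... terminate ... */ }
};

class LibraryFactory {
    static bool instanceExists;  // singleton-style guard
public:
    static std::unique_ptr<Library> Create(int magic) {
        if (instanceExists)
            throw std::runtime_error("library instance already exists");
        instanceExists = true;
        return std::unique_ptr<Library>(new Library(magic));
    }
};
bool LibraryFactory::instanceExists = false;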
I would suggest that you should flag an exception if your routine cannot achieve the expected post-condition. If someone calls your init routine twice, and the system state after the second call would be the same as if it had just been called once, then it is probably not necessary to throw an exception. If the system state after the second call would not match the caller's expectation, then an exception should be thrown.
In general, I think it's more helpful to think in terms of state than in terms of action. To use an analogy, an attempt to open as "write new" a file that is already open should either fail or result in a close-erase-reopen. It should not simply perform a no-op, since the program will be expecting to be writing into an empty file whose creation time matches the current time. On the other hand, trying to close a file that's already closed should generally not be considered an error, because the desire is that the file be closed.
BTW, it's often helpful to have available a "Try" version of a method that might throw an exception. It would be nice, for example, to have a Control.TryBeginInvoke available for things like update routines (if a thread-safe control property changes, the property handler would like the control to be updated if it still exists, but won't really mind if the control gets disposed; it's a little irksome not being able to avoid a first-chance exception if a control gets closed when its property is being updated).
Have a private static counter variable in your class. If it is 0, do the logic in Init and increment the counter; if it is more than 0, simply increment the counter. In Term, do the opposite: decrement, and when it reaches 0, do the logic.
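In code, the counting idea might look something like this sketch:

static int initCount = 0;  // private to the library's implementation file

void InitLibrary(int someMagicInputRequiredAtRuntime)
{
    if (initCount++ == 0) {
        // ... real initialization: only the first call does the work ...
    }
}

void TermLibrary(int someOtherInput)
{
    if (--initCount == 0) {
        // ... real teardown: only the last matching call does the work ...
    }
}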
Another way is to use the Singleton pattern; here is a sample in C++.
I guess one way to subvert this dilemma is to satisfy both camps. Ruby has the -w warning switch, it is customary for gcc users to pass -Wall or even -Weffc++, and Perl has taint mode. By default, these "just work", but the more careful programmer can turn on the stricter settings themselves.
One example against the "always complain at the slightest error" approach is HTML. Imagine how frustrated the world would be if all browsers barked at every CSS hack (such as drawing elements at negative coordinates).
After considering many excellent answers, I've come to this conclusion for myself: when someone sits down with my API, it should ideally "just work". Of course, anyone working in a given domain needs to understand one or two levels of abstraction below the problem he is trying to solve, which means my user must learn about my internals sooner or later. If he uses my API for long enough, he will begin to stretch its limits, and too much effort spent to "hide" or "encapsulate" the inner workings will only become a nuisance.
I guess fault tolerance is most of the time a good thing, it's just that it's difficult to get right when the API user is stretching corner cases. I could say the best of both worlds is to provide some kind of "strict mode" so that when things don't "just work," the user can easily dissect the problem.
Of course, doing this is a lot of extra work, so I may be just talking ideals here. Practically it all comes down to the specific case and the programmer's decision.
If your language doesn't allow this error to surface statically, chances are good the error will surface only at runtime. Depending on the use of your library, this means the error won't surface until much later in development, possibly only after shipping (again, it depends on a lot).
If there's no danger in silently eating an error (which isn't a real error anyway, since you catch it before anything dangerous happens), then I'd say you should silently eat it. This makes it more user friendly.
If, however, someMagicInputRequiredAtRuntime varies from call to call, I'd raise the error whenever possible, or presumably the library will not function as expected ("I init'ed the lib with value 42, but it's behaving as if I initted with 11!?").
If this library is a static class (a library type with no state), why not put the call to Init in the type initializer? If it is an instantiable type, put the call in the constructor, or in the factory method that handles instantiation.
Don't allow public access to the Init function at all.
I think your interface is a bit too technical. No programmer wants to learn what concepts you used while designing the API. Programmers want solutions to their actual problems; they don't want to learn how to use an API. Nobody wants to init your API; that is something the API should handle in the background as far as possible. Find a good abstraction that shields the developer from as much low-level technical detail as possible. That implies that the API should be fault tolerant.
