Currently, I'm developing two web-based tools for my own needs with Hunchentoot.
Before starting Hunchentoot, I want to set some special variables with let so their values will be available while Hunchentoot is running.
Like this:
(let ((*db-path* "my-db-file"))
  (start-hunchentoot))
But once the handlers get invoked, they don't seem to be inside the let anymore, and *db-path* falls back to its global value (which is nil).
At the moment I'm resolving this by writing the let in every handler.
But I want a more general approach, so that I can run both applications with different *db-path* values in one runtime.
Is it possible to set *db-path* in such a way that it is valid for one Hunchentoot instance and not the other?
The used environment is SBCL 1.2.4 on Debian Jessie.
Around method
Adding *db-path* as a slot in the acceptor might be a suitable option. However, you can also write an around method for handle-request. Assuming *my-acceptor* is globally bound:
(defmethod hunchentoot:handle-request :around ((acceptor (eql *my-acceptor*)) request)
  (let ((*db-path* "my-db-file"))
    (call-next-method)))
Of course, you don't need to specialize with EQL, you can define your own subclass instead. The advantage of an around method over storing a configuration variable in the class instance is that you keep the benefits of using special variables, i.e. bindings are visible from everywhere in the dynamic scope.
Here is what the documentation string, visible on Lispdoc, says about handle-request (emphasis mine):
This function is called once the request has been read and
a REQUEST object has been created. Its job is to actually handle the
request, i.e. to return something to the client.
Might be a good place
for around methods specialized for your subclass of ACCEPTOR which
bind or rebind special variables which can then be accessed by your
handlers.
I encourage you to read Hunchentoot's documentation.
Special variables and threads
The behavior you observe is because:
The server is running in another thread.
Dynamic bindings are local to each thread, as explained in the manual:
The interaction of special variables with multiple threads is mostly as one would expect, with behaviour very similar to other implementations.
global special values are visible across all threads;
bindings (e.g. using LET) are local to the thread;
threads do not inherit dynamic bindings from the parent thread
If you make your own threads, you can build a closure which binds variables as you want. You can also rely on the portable bordeaux-threads library which takes a list of bindings to be effective inside a thread.
I wish to pass an object using the signal/slot mechanism between threads in Qt. Since I will be passing a pointer to the object, is it safe to call methods on the object on the receiver's side?
According to this question, the object is not copied (so the original object is used).
Is this safe? Or am I executing methods on an object belonging to one thread in another thread? Is there a better way to do this?
(I have approximately 20 getters in this class, so I don't want to pass individual variables; also, some of those variables are in fact pointers to other objects.)
It is not necessarily safe - signals and slots can be used to cross thread boundaries, so it's possible you could end up trying to access the object from another thread.
The thread in which the slot will be called is determined by the connection type. See the documentation, but as an example:
connect(source, SIGNAL(mySignal(QObject*)), destination, SLOT(mySlot(QObject*)), Qt::DirectConnection);
In this case the function mySlot() will be called from the same thread that the mySignal() signal was emitted in. If your object is not accessed from any thread other than the one the signal is emitted in, this would work fine.
connect(source, SIGNAL(mySignal(QObject*)), destination, SLOT(mySlot(QObject*)), Qt::QueuedConnection);
In this case the function mySlot() will be queued and called by the event loop of the destination object. So anything done to the object would happen from within the thread running the destination's event loop.
I personally find it's best to just stick to passing simple values as arguments. Even though this can work, you would need to add suitable multithreading guards to your QObject if it's likely to be accessed from multiple threads.
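For illustration, here is a minimal sketch of what such a guard could look like. The class and member names (SharedData, m_mutex, m_name) are made up for this example; the idea is simply that every accessor of the shared QObject takes the same mutex.
#include <QMutex>
#include <QObject>
#include <QString>

// Hypothetical object that is passed by pointer across threads.
// Every accessor locks the same mutex, so concurrent reads and writes
// of m_name stay consistent.
class SharedData : public QObject
{
public:
    QString name() const
    {
        QMutexLocker locker(&m_mutex);  // released automatically on return
        return m_name;
    }

    void setName(const QString &name)
    {
        QMutexLocker locker(&m_mutex);
        m_name = name;
    }

private:
    mutable QMutex m_mutex;             // mutable so const getters can lock it
    QString m_name;
};
This only matters if the object really is touched from more than one thread; with a direct connection confined to a single thread, no lock is needed.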
First of all, try to use QtConcurrent when you are developing a multi-threaded application. The QtConcurrent namespace provides high-level APIs that make it possible to write multi-threaded programs without using low-level threading primitives such as mutexes, read-write locks, wait conditions, or semaphores.
After that, safety depends on your class members. If all members are thread-safe, then the object can be used safely from multiple threads.
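As a rough illustration of the QtConcurrent suggestion above, here is a minimal sketch. heavyLoad and startWork are made-up names; the point is that QtConcurrent::run hands the function to the global thread pool without any explicit mutexes or threads.
#include <QtConcurrent/QtConcurrent>
#include <QFuture>

// Hypothetical CPU-bound function; QtConcurrent::run executes it on a
// thread taken from the global QThreadPool.
int heavyLoad(int input)
{
    return input * 2;
}

void startWork()
{
    QFuture<int> future = QtConcurrent::run(heavyLoad, 21);

    // result() blocks the calling thread until heavyLoad has finished.
    int answer = future.result();
    Q_UNUSED(answer);
}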
(define hypot
  (lambda (a b)
    (sqrt (+ (* a a) (* b b)))))
This is the Scheme programming language.
"define" creates a variable and a global binding
lambda creates a procedure
I would like to know whether "define" would be considered an imperative language feature! As far as I know, an imperative feature is static scoping. I think it is an imperative feature, since "define" creates a global binding, and static scoping looks at the global binding for any variable definition, whereas dynamic scoping looks at the currently most active binding.
Please help me find the correct answer! And I would like to know why or why not.
In a Scheme program, a (define var expr) statement is both a declaration and an initialization. Declarations introduce a new name into a scope. Declarations and initializations are present in both imperative and declarative languages.
However, if the same variable is defined twice, then define behaves as an assignment, which belongs to the imperative paradigm.
You've put your finger on a subtle and contentious issue. There have long been two informal camps on how define should work, which I would label (very imperfectly, and very controversially!) as the static vs. dynamic camps.
The static camp sees define as a non-side-effecting top-level declaration—it's a syntax that simply defines a name in a top-level scope, just like let is a syntax that defines a name in a local scope. A bit more precisely, this camp tends to see the top-level environment as equivalent to a big letrec with all the defines as the bindings, and all "loose" top-level expressions as the body. This is, incidentally, similar to the way that simple compilers work—read the whole program from one or more files, figure out all of the top-level bindings and generate code with knowledge of the whole program's source text.
The dynamic camp, on the other hand, tends to conceive of the top-level environment as a mutable data structure to which bindings can be added at runtime, and define is then an operation that modifies the top-level environment. This is, incidentally, similar to how simple interactive interpreters work—read definitions interactively from input, one at a time, and incorporate them into the environment as the user provides them.
To give one example, the SLIB library is one that I recall has been criticized for being much too firmly in the "dynamic" camp. If you read Section 1.1 on "features", you see this right from the beginning:
SLIB maintains a list of features supported by a Scheme session. The set of features provided by a session may change during that session.
The documentation for the require form that you use in SLIB to "load" modules continues with this:
Procedure: require feature
If (provided? feature) is true, then require just returns.
Otherwise, if feature is found in the catalog, then the corresponding files will be loaded and (provided? feature) will henceforth return #t. That feature is thereafter provided.
Otherwise (feature not found in the catalog), an error is signaled.
If you read this carefully, you will be struck that it's framing the whole thing as modules being "loaded" at runtime—and not as compile-time linking, which is foreign to the design.
So a "session" is a set of bindings whose keys—not just their values—changes during the runtime of the program. Programs are able to mutate the session with provide and require. They are able to directly observe the mutation with provided?. And it is implied that they can indirectly observe the set of identifiers bound in top-level environment change as a result of require—a call to require causes procedure invocations that would result in a runtime error before its invocation to no longer be so afterwards.
So we can't help but conclude that going by the philosophy of the people who designed this library, define is imperative. But not every Scheme user or implementer shares this philosophy.
First off, Scheme is lexically scoped. define usually is not limited to top-level bindings, like it is in Racket; it can create bindings within other procedure bodies.
In some implementations define can manipulate state, but only for top-level definitions. Otherwise it acts like let and binds a variable in the local scope. Actually taking advantage of the top-level rebinding programmatically is difficult.
So define doesn't introduce an imperative style into Scheme code. Compare define to set! and its relatives, which modify a variable in whatever environment it is bound, thereby allowing an imperative style in Scheme code.
When using the Extended Program Check, I get the following warning:
Do not declare fields and field symbols (variable name) globally.
This is from declaring global data before the selection screen. The obvious solution is that they should be declared locally in a subroutine.
If I decide to do this, the data will now be out of scope for the other subroutines, so I would end up creating something to the effect of a main() function from C or Java. This sounds like a good idea - however, events such as INITIALIZATION are not allowed to be inside of subroutines, meaning that it forces a break in scope.
Observe the sample program below:
REPORT Z_EXAMPLE.
SELECTION-SCREEN BEGIN OF BLOCK upload WITH FRAME TITLE text-H01.
PARAMETERS: p_infile TYPE rlgrap-filename LOWER CASE OBLIGATORY.
SELECTION-SCREEN END OF BLOCK upload.
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_infile.
PERFORM main1 CHANGING p_infile.
INITIALIZATION.
PERFORM main2.
TOP-OF-PAGE.
PERFORM main3.
...
main1, main2, and main3 cannot, to my knowledge, pass any data to one another without global declarations. If the data is parsed from the uploaded file p_infile in main1, it cannot be accessed in main2 or main3. Aside from omitting events altogether, is there any way to abide by the warning but still let data be passed across events?
There are a variety of techniques - I prefer to code almost everything except for the basic selection screen handling in a separate controller class. The report simply defers to that class and calls its methods. Other than that - it's just a warning that you can ignore if you know what you're doing. Writing a program without any global variable at all will certainly not be practical - however, you should think at least twice before using global variables or attributes in a place where a method parameter would be more appropriate.
As @vwegert so rightly said, it's almost impossible to write an ABAP program that doesn't have at least a few global variables (the selection screen and events enforce that, unfortunately).
One approach is to use a controller class, another is to have a main subroutine and have it call other subroutines as required, passing values as required. I tend to favour the latter approach in a lot of cases, if only because it's easier to split the subroutines into logical groupings in separate includes (doing so with classes can sometimes be a little ugly). It really is a matter of approach though, but the key thing is reducing global variables to a minimum - unfortunately too few ABAP developers that I've encountered care about such issues.
Update
@Christian has reminded me that as of ABAP AS 7.02, subroutines are considered obsolete:
Subroutines should no longer be created in new programs for the following reasons:
The parameter interface has clear weaknesses when compared with the parameter interface of methods, such as:
positional parameters instead of keyword parameters
no genuine input parameters in pass by reference
typing is optional
no optional parameters
Every subroutine implicitly belongs to the public interface of its program. Generally this is not desirable.
Calling subroutines externally is critical with regard to the assignment of the container program to a program group in the internal session. This assignment cannot generally be defined as static.
Those are all valid points and I think in light of that, using classes for modularisation is definitely the preferred approach (and from a purely aesthetic point of view, they also "fit" better with the syntax enhancements in 7.02 and later).
I am not familiar with the WinAPI, and I am looking for a way to replace the WaitForMultipleObjects call used in an example I'm porting to Qt with something that uses Qt only. Is it possible?
EDIT: (Providing more information as requested in comments)
A 3rd party API provides an array of events:
HANDLE m_hEv[MAX_EV];
In an endless loop in a thread, the program waits for the events like this:
WaitForMultipleObjects(m_EvMax, m_hEv, FALSE ,INFINITE )
The HANDLE type seems to be void*.
So I wonder, if any Qt class could observe m_hEv for changes and unlock thread execution.
There is no simple way of porting WaitForMultipleObjects outside the WinAPI. The WinAPI has the "advantage" that all lockable resources (sockets, files, processes) provide the same generic, non-typesafe HANDLE, which is your void*. Unlike other platforms, which have different ways of locking and signalling depending on the type of resource, event handling in the WinAPI is largely independent of the resources. That is why a generic function like WaitForMultipleObjects can exist: it doesn't need to care who produced the HANDLEs. So you'll have to understand what the code is trying to do and mimic it differently per scenario.
The biggest difference lies in WaitForMultipleObjects' third parameter, which is FALSE in your case. That means the wait returns as soon as any single event in the array is signalled. This is the easier scenario and can be replaced with a QWaitCondition (see the sketch after the list below).
Instead of m_hEv, you will pass a QWaitCondition* into the code which signals the event (most probably via the WinAPI SetEvent(m_hEv[x])).
Instead of WaitForMultipleObjects, do QWaitCondition::wait().
Instead of SetEvent(), do QWaitCondition::wakeOne().
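To make the mapping concrete, here is a minimal sketch of the FALSE/any-one-event scenario. The names (g_mutex, g_condition, g_eventPending, waitForAnyEvent, signalEvent) are illustrative, not from the original code; the boolean flag is needed because QWaitCondition, unlike a Win32 event, does not remember a wake-up that happens while nobody is waiting.
#include <QMutex>
#include <QWaitCondition>

// Shared between the waiting thread and the signalling threads.
QMutex g_mutex;
QWaitCondition g_condition;
bool g_eventPending = false;

// Replaces the WaitForMultipleObjects(m_EvMax, m_hEv, FALSE, INFINITE) call.
void waitForAnyEvent()
{
    QMutexLocker locker(&g_mutex);
    while (!g_eventPending)           // also guards against spurious wake-ups
        g_condition.wait(&g_mutex);   // atomically unlocks and re-locks g_mutex
    g_eventPending = false;
}

// Replaces SetEvent(m_hEv[x]) on the producer side.
void signalEvent()
{
    QMutexLocker locker(&g_mutex);
    g_eventPending = true;
    g_condition.wakeOne();
}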
If the third parameter were TRUE, the WinAPI code would wait until ALL m_hEv events are signalled. The established name for such functionality is a synchronization barrier, and it can be simulated with QWaitCondition too, but that does not come out of the Qt box. I never needed to build one myself, but SO has some ideas on how to do it:
Qt synchronization barrier?
WaitForMultipleObjects is a kind of generic function that works with many things: threads, processes, mutexes, etc. Qt is an OOP library where every class exposes the operations it supports. So the equivalent operation in Qt depends on what class you're using. For example, with threads, use QThread::wait. With mutexes, use QMutex::lock.
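For example (sketch only; joinWorker and enterCriticalSection are made-up names):
#include <QMutex>
#include <QThread>

// Waiting on a thread: block until the thread's run() has finished,
// roughly like WaitForSingleObject on a thread HANDLE.
void joinWorker(QThread *worker)
{
    worker->wait();
}

// Waiting on a mutex: block until the lock is acquired,
// roughly like WaitForSingleObject on a mutex HANDLE.
void enterCriticalSection(QMutex *mutex)
{
    mutex->lock();
    // ... protected work ...
    mutex->unlock();
}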
I've consolidated many of the useful answers and came up with my own answer below.
For example, I am writing an API Foo which needs explicit initialization and termination. (This should be language agnostic, but I'm using C++ here.)
class Foo
{
public:
static void InitLibrary(int someMagicInputRequiredAtRuntime);
static void TermLibrary(int someOtherInput);
};
Apparently, our library doesn't care about multi-threading, reentrancy or whatnot. Let's suppose our Init function should only be called once; calling it again with any other input would wreak havoc.
What's the best way to communicate this to my caller? I can think of two ways:
Inside InitLibrary, I assert some static variable which will blame my caller for init'ing twice.
Inside InitLibrary, I check some static variable and silently abort if my library has already been initialized.
Method #1 is obviously explicit, while method #2 makes it more user friendly. I am thinking that method #2 probably has the disadvantage that my caller wouldn't be aware of the fact that InitLibrary shouldn't be called twice.
What would be the pros/cons of each approach? Is there a cleverer way to subvert all these?
Edit
I know that the example here is very contrived. As @daemon pointed out, I should initialize myself and not bother the caller. Practically, however, there are places where I need more information to properly initialize myself (note my variable name someMagicInputRequiredAtRuntime). This is not restricted to initialization/termination; there are other instances where the dilemma exists whether I should choose to be quote-unquote "fault tolerant" or fail loudly.
I would definitely go for approach 1, along with an easy-to-understand exception and good documentation that explains why this fails. This will force the caller to be aware that this can happen, and the calling class can easily wrap the call in a try-catch statement if needed.
Failing silently, on the other hand, will lead your users to believe that the second call was successful (no error message, no exception) and thus they will expect that the new values are set. So when they try to do something else with Foo, they don't get the expected results. And it's darn near impossible to figure out why if they don't have access to your source code.
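A minimal sketch of approach 1 in C++, assuming a simple static flag is enough (the flag name s_initialized is made up, and a real library might also need to make this thread-safe):
#include <stdexcept>

class Foo
{
public:
    static void InitLibrary(int someMagicInputRequiredAtRuntime)
    {
        if (s_initialized)
            throw std::logic_error("Foo::InitLibrary called twice");
        // ... real initialization using someMagicInputRequiredAtRuntime ...
        (void)someMagicInputRequiredAtRuntime;
        s_initialized = true;
    }

    static void TermLibrary(int someOtherInput)
    {
        // ... real termination using someOtherInput ...
        (void)someOtherInput;
        s_initialized = false;
    }

private:
    static bool s_initialized;
};

bool Foo::s_initialized = false;
The caller can wrap the call in a try/catch if a second call is ever legitimate, but the default behaviour is loud.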
Serenity Prayer (modified for interfaces)
SA, grant me the assertions
to accept the things devs cannot change
the code to except the things they can,
and the conditionals to detect the difference
If the fault is in the environment, then you should try and make your code deal with it. If it is something that the developer can prevent by fixing their code, it should generate an exception.
A good approach would be to have a factory that creates an initialized library object (this would require you to wrap your library in a class). Multiple create calls to the factory would create different objects. This way, the initialize method would not be part of the public interface of the library, and the factory would manage initialization.
If there can be only one instance of the library active, make the factory check for existing instances. This would effectively make your library-object a singleton.
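Here is a rough sketch of that idea, assuming the library is wrapped in a (hypothetical) Library class and the factory hands out shared ownership; the weak_ptr check is what makes it behave like a singleton while an instance is alive.
#include <memory>

class Library
{
public:
    explicit Library(int someMagicInputRequiredAtRuntime)
    {
        // ... the work that used to live in InitLibrary ...
        (void)someMagicInputRequiredAtRuntime;
    }

    ~Library()
    {
        // ... the work that used to live in TermLibrary ...
    }
};

class LibraryFactory
{
public:
    static std::shared_ptr<Library> create(int someMagicInputRequiredAtRuntime)
    {
        if (auto existing = s_instance.lock())
            return existing;                          // reuse the live instance
        auto fresh = std::make_shared<Library>(someMagicInputRequiredAtRuntime);
        s_instance = fresh;
        return fresh;
    }

private:
    static std::weak_ptr<Library> s_instance;         // tracks the single live object
};

std::weak_ptr<Library> LibraryFactory::s_instance;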
I would suggest that you should flag an exception if your routine cannot achieve the expected post-condition. If someone calls your init routine twice, and the system state after calling it the second time would be the same as if it had just been called once, then it is probably not necessary to throw an exception. If the system state after the second call would not match the caller's expectation, then an exception should be thrown.
In general, I think it's more helpful to think in terms of state than in terms of action. To use an analogy, an attempt to open as "write new" a file that is already open should either fail or result in a close-erase-reopen. It should not simply perform a no-op, since the program will be expecting to be writing into an empty file whose creation time matches the current time. On the other hand, trying to close a file that's already closed should generally not be considered an error, because the desire is that the file be closed.
BTW, it's often helpful to have available a "Try" version of a method that might throw an exception. It would be nice, for example, to have a Control.TryBeginInvoke available for things like update routines (if a thread-safe control property changes, the property handler would like the control to be updated if it still exists, but won't really mind if the control gets disposed; it's a little irksome not being able to avoid a first-chance exception if a control gets closed when its property is being updated).
Have a private static counter variable in your class. If it is 0 then do the logic in Init and increment the counter, If it is more than 0 then simply increment the counter. In Term do the opposite, decrement until it is 0 then do the logic.
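A minimal sketch of that counter approach, reusing the Foo class from the question (the counter name s_refCount is made up, and a real library might need to make this thread-safe):
class Foo
{
public:
    static void InitLibrary(int someMagicInputRequiredAtRuntime)
    {
        if (s_refCount == 0)
        {
            // ... perform the real initialization exactly once ...
            (void)someMagicInputRequiredAtRuntime;
        }
        ++s_refCount;
    }

    static void TermLibrary(int /*someOtherInput*/)
    {
        if (s_refCount > 0 && --s_refCount == 0)
        {
            // ... perform the real termination exactly once ...
        }
    }

private:
    static int s_refCount;
};

int Foo::s_refCount = 0;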
Another way is to use a Singleton pattern; here is a sample in C++.
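Since the original sample is not reproduced here, a minimal Meyers-singleton sketch of the idea (FooSingleton is a made-up name): construction, and therefore initialization, happens exactly once, on first use.
class FooSingleton
{
public:
    static FooSingleton &instance(int someMagicInputRequiredAtRuntime)
    {
        // The static local is constructed only on the first call;
        // later calls ignore their argument and return the same object.
        static FooSingleton theInstance(someMagicInputRequiredAtRuntime);
        return theInstance;
    }

private:
    explicit FooSingleton(int someMagicInputRequiredAtRuntime)
    {
        // ... one-time initialization ...
        (void)someMagicInputRequiredAtRuntime;
    }
};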
I guess one way to subvert this dilemma is to satisfy both camps. Ruby has the -w warning switch, it is customary for gcc users to pass -Wall or even -Weffc++, and Perl has taint mode. By default, these "just work," but the more careful programmer can turn on these strict settings themselves.
One example against the "always complain the slightest error" approach is HTML. Imagine how frustrated the world would be if all browsers would bark at any CSS hacks (such as drawing elements at negative coordinates).
After considering many excellent answers, I've come to this conclusion for myself: when someone sits down, my API should ideally "just work." Of course, for anyone to be involved in any domain, he needs to work at one or two levels of abstraction lower than the problem he is trying to solve, which means my user must learn about my internals sooner or later. If he uses my API for long enough, he will begin to stretch the limits, and too much effort to "hide" or "encapsulate" the inner workings will only become a nuisance.
I guess fault tolerance is most of the time a good thing, it's just that it's difficult to get right when the API user is stretching corner cases. I could say the best of both worlds is to provide some kind of "strict mode" so that when things don't "just work," the user can easily dissect the problem.
Of course, doing this is a lot of extra work, so I may be just talking ideals here. Practically it all comes down to the specific case and the programmer's decision.
If your language doesn't allow this error to surface statically, chances are good the error will surface only at runtime. Depending on the use of your library, this means the error won't surface until much later in development, possibly only when shipped (again, it depends a lot).
If there's no danger in silently eating an error (which isn't a real error anyway, since you catch it before anything dangerous happens), then I'd say you should silently eat it. This makes it more user friendly.
If, however, someMagicInputRequiredAtRuntime varies from call to call, I'd raise the error whenever possible, or the library will presumably not function as expected ("I init'ed the lib with value 42, but it's behaving as if I initted it with 11!?").
If this Library is a static class (a library type with no state), why not put the call to Init in the type initializer? If it is an instantiable type, then put the call in the constructor, or in the factory method that handles instantiation.
Don't allow public access to the Init function at all.
I think your interface is a bit too technical. No programmer wants to learn what concepts you used while designing the API. Programmers want solutions for their actual problems and don't want to learn how to use an API. Nobody wants to init your API; that is something the API should handle in the background as far as possible. Find a good abstraction that shields the developer from as much low-level technical stuff as possible. That implies that the API should be fault tolerant.