I need a way to store some data globally in Clojure, but I can't find one. I need to load some data at runtime and put it into a global pool of objects so I can manipulate it later. This pool should be accessible from a set of functions that set/get data in it, like a small in-memory database with hash-like access syntax.
I know this might be a bad pattern in functional programming, but I don't know another way to store a dynamic set of objects and access/modify/replace it at runtime. java.util.HashMap is a partial solution, but it can't be used with sequence functions, and I miss the flexibility of Clojure when I need to use this kind of collection. Lisp syntax is great, but it gets a bit stuck on purity even where the developer doesn't need it.
This is the way I want to work with it:
; Defined somewhere, in "engine.templates" namespace for example
(def collection (mutable-hash))
; Way to access it
(set! collection :template-1-id (slurp "/templates/template-1.tpl"))
(set! collection :template-2-id "template string")
; Use it somewhere
(defn render-template [template-id data]
  (if (nil? (get collection template-id)) "" (do-something)))
; Work with it like with other collection
(defn find-template-by-type [type]
  (take-while #(= type (:type %)) collection))
Does someone have an approach I can use for tasks like this? Thank you.
Have a look at atoms.
Your example could be adapted to something like this (untested):
; Defined somewhere, in "engine.templates" namespace for example
(def collection (atom {}))
; Way to access it
(swap! collection assoc :template-1-id (slurp "/templates/template-1.tpl"))
(swap! collection assoc :template-2-id "template string")
; Use it somewhere
(defn render-template [template-id data]
  (if (nil? (get @collection template-id)) "" (do-something)))
; Work with it like with other collection
(defn find-template-by-type [type]
  (take-while #(= type (:type %)) @collection))
swap! is how you update the value of an atom in a thread-safe manner. Additionally, note that the references to collection above have been prefixed with the @ sign. That is how you get the value contained in an atom: @collection is shorthand for (deref collection).
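Since dereferencing the atom yields an ordinary persistent map, the usual sequence functions apply to it directly. For example (a small sketch, using the two entries above):
(keys @collection)                            ; => (:template-1-id :template-2-id)
(count @collection)                           ; => 2
(filter (fn [[_ v]] (string? v)) @collection) ; => seq of [key value] entries whose value is a string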
I read a lot about generic functions in CL. I get it. And I get why they are valuable.
Mainly, I use them when I want to perform a similar action on different data types, like this:
(defgeneric build-url (account-key)
  (:documentation "Create hunter api urls"))

(defmethod build-url ((key number))
  "Build lead api url"
  (do-something...))

(defmethod build-url ((key string))
  "build campaign api url"
  (do-something...))
In this example, campaign-url and lead-url are structures (defstruct).
My question is, at a high level, how do classes add value to the way generic functions + structures work together?
Structures historically predate classes and are more restricted and more "static" than classes: once a structure is defined, the compiler can generate code that accesses its slots efficiently, can assume their layout is fixed, etc. A lot of inlining and macro-expansion is done, which makes it necessary to rebuild everything from scratch when the structure changes. Being able to redefine a struct at runtime is not something defined by the standard; it is merely implementations trying to be nice.
On the other hand, classes have more features and are easier to manipulate at runtime. Suppose you write this class:
(defclass person ()
  ((name :initarg :name :reader .name)))
And you instantiate it:
(defparameter *someone* (make-instance 'person :name "Julia O'Caml"))
It is possible now to update the class definition:
(defparameter *id-counter* 0)
(defun generate-id ()
  (incf *id-counter*))

(defclass person ()
  ((name :initarg :name :reader .name)
   (dob :initarg :date-of-birth :reader .date-of-birth)
   (%id :reader .id :initform (generate-id))))
And now *someone*, which already existed, has two additional slots: dob, which is unbound, and %id, which is automatically initialized to 1. There is a whole section on Object Creation and Initialization (7.1) that defines how objects can be redefined, change class, etc.
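For instance, a quick check at the REPL might look like this (a sketch; existing instances are typically updated lazily, when one of their slots is next accessed):
(slot-boundp *someone* 'dob) ; => NIL, the new slot exists but is unbound
(.id *someone*)              ; => 1, filled in from the :initform
(.name *someone*)            ; => "Julia O'Caml", the old value is preserved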
Moreover, this mechanism is not fixed; many of the steps described above rely on generic functions. It is possible to define how an object is allocated, initialized, etc. The concept was standardized as what is known as the Meta-Object Protocol, which also introduces the concept of a metaobject, the object representing a class: usually a class has a name, parent classes, slots, etc., but you can add new members to a class, or change how instance slots are organized (maybe you just need a global handle and a connection, and the actual instance slots are stored in another process?).
Note also that once CLOS/MOP was defined, it also eventually became possible to define structures in this framework: in the standard, defstruct (without a :type option) defines classes with a structure-class metaclass. Still, they do not behave like standard-class because, as said above, they are more restricted and, as such, subject to more aggressive compilation optimizations (in general).
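A quick way to observe this (a sketch; the exact printed form varies by implementation):
(defstruct point x y)
(class-of (make-point)) ; => #<STRUCTURE-CLASS POINT>, not a STANDARD-CLASS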
Structures are nice if you need to program like in C and you are ok with recompiling all your code when the structure changes. It is however premature optimization to use them in all cases. It is possible to use a lot of standard objects without noticing much slowness nowadays (a bit like Python).
I am currently starting to set up stumpwm, and I would like to assign a specific window to a particular group.
So far I have this:
(define-frame-preference "Spotify"
  (0 t t :class "Spotify"))
So essentially, I would expect this to assign windows with the class Spotify to the group Spotify; however, this does not happen.
Can anybody help me on this?
Thank you!
The relationship between X11 windows and Linux processes is thin: things are asynchronous; you start a process, and some time later zero, one, or more windows are created.
You have to work with callbacks, there is no easy way to create a process and synchronously have all its windows in return.
Some processes are nice enough to set the _NET_WM_PID property on windows (it looks like the "Spotify" application does it). You can retrieve this property as follows:
(first (xlib:get-property (window-xwin w) :_net_wm_pid))
Placement rules cannot help here, given that Spotify fails to set the class property early enough (see the comments and the other answer). But you can use a custom hook:
STUMPWM-USER> (let ((out *standard-output*))
                (push (lambda (&rest args) (print args out))
                      *new-window-hook*))
(#<CLOSURE (LAMBDA (&REST ARGS)) {101A92388B}>)
Notice how I first evaluate *standard-output* to bind it lexically to out, so that the function can use it as a stream when printing information. This is because the hook might be run in another thread, where the dynamic binding of the standard output might not be the one I want here (this ensures debugging is done in the Slime REPL, in my case).
When I start for example xclock, the following is printed in the REPL:
(#S(TILE-WINDOW "xclock" #x380000A))
So I can change the hook so that it does other things instead. This is a bit experimental, but for example, you can temporarily modify *new-window-hook* to react to a particular window event:
(in-package :stumpwm-user)
(let ((process (sb-ext:run-program "xclock" () :search t :wait nil))
      (hook))
  (sb-ext:process-kill process sb-unix:sigstop)
  (flet ((hook (w)
           (when (find (sb-ext:process-pid process)
                       (xlib:get-property (window-xwin w) :_net_wm_pid))
             (move-window-to-group w (add-group (current-screen) "XCLOCK"))
             (setf *new-window-hook* (remove hook *new-window-hook*)))))
    (setf hook #'hook)
    (push #'hook *new-window-hook*))
  (sb-ext:process-kill process sb-unix:sigcont))
Basically: create a process, stop it to minimize race conditions, and define a hook that checks whether the PID associated with the client matches that of the process, executes some rules, and then removes itself from the list of hooks. This is fragile: if the hook is never run, it stays in the list, and in case of errors it also stays in the list. At the end of the expression, the hook is added and the process resumes execution.
So it seems that, as pointed out by coredump, there are issues in the way the Spotify window is defined.
As an alternative, there are fortunately plenty of ways to control Spotify via Third Party Clients (ArchWiki).
Personally, I found that you can control Spotify via Ivy in Emacs thanks to the Ivy Spotify project, and this will probably be what I will use.
A go block returns a channel rather than the return value, so how can one extract the return value from a go block when ClojureScript doesn't have <!!?
For example, given the following code:
(go (let [response (<! (http/get "https://api.github.com/users"
                                 {:with-credentials? false
                                  :query-params {"since" 135}}))]
      (:status response)))
This will return a channel, not (:status response). How can I make the go block return (:status response)?
<!! doesn't exist in JavaScript because the runtime does not support it. JavaScript is single-threaded, and <!! is a blocking operation. Blocking the main thread in a browser-based environment is a bad idea, as it would simply freeze all JavaScript activity (and potentially the whole page) until unblocked.
Instead, consider using cljs.core.async/take! like so:
(take! channel (fn [value] (do-something-with value)))
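Applied to the snippet from the question, that might look roughly like this (a sketch; cljs-http's http/get already returns the response channel, and take! is assumed to be referred from cljs.core.async):
(take! (http/get "https://api.github.com/users"
                 {:with-credentials? false
                  :query-params {"since" 135}})
       (fn [response]
         ;; runs asynchronously once the response arrives
         (println (:status response))))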
As far as my knowledge about semaphores goes, a semaphore is used to protect resources which can be counted and are vulnerable to race conditions. But while reading the SBCL documentation on semaphores, I could not figure out how to properly use the provided semaphore implementation to protect a resource.
A usual workflow, as I recall, would be:
- a process wants to retrieve some of the data protected by the semaphore (which, for the sake of the example, is a trivial queue); as the semaphore counter is 0, the process waits
- another process puts something in the queue and, as the semaphore is incremented, a signal is sent to all waiting processes
Given the possibility of interleaving, one has to protect any of those resource accesses, as they might not occur in that order, or in any linear order at all. Therefore Java, for example, treats each object as an implicit monitor and provides a synchronized keyword with which a programmer can define a protected region that can only be executed by one thread at a time.
How do I emulate this functionality in Common Lisp? I am pretty sure my current code is no more thread-safe than it would be without the semaphore, as the semaphore has no clue what code to protect.
;; the package
(defpackage :tests (:use :cl :sb-thread))
(in-package :tests)

(defclass thread-queue ()
  ((semaphore
    :initform (make-semaphore :name "thread-queue-semaphore"))
   (in-stack
    :initform nil)
   (out-stack
    :initform nil)))

(defgeneric enqueue-* (queue element)
  (:documentation "adds an element to the queue"))

(defgeneric dequeue-* (queue &key timeout)
  (:documentation "removes and returns the first element to get out"))

(defmethod enqueue-* ((queue thread-queue) element)
  (signal-semaphore (slot-value queue 'semaphore))
  (setf (slot-value queue 'in-stack) (push element (slot-value queue 'in-stack))))

(defmethod dequeue-* ((queue thread-queue) &key timeout)
  (wait-on-semaphore (slot-value queue 'semaphore) :timeout timeout)
  (when (= (length (slot-value queue 'out-stack)) 0)
    (setf (slot-value queue 'out-stack) (reverse (slot-value queue 'in-stack)))
    (setf (slot-value queue 'in-stack) nil))
  (let ((first (car (slot-value queue 'out-stack))))
    (setf (slot-value queue 'out-stack) (cdr (slot-value queue 'out-stack)))
    first))

(defparameter *test* (make-instance 'thread-queue))

(dequeue-* *test* :timeout 5)
(enqueue-* *test* 42)
(enqueue-* *test* 41)
(enqueue-* *test* 40)
(dequeue-* *test* :timeout 5)
(dequeue-* *test* :timeout 5)
(dequeue-* *test* :timeout 5)
(dequeue-* *test* :timeout 5)
What you already have is a semaphore of count = 0, on which consumers wait.
What you also need is an exclusive lock around access to your stacks (perhaps one for each), or alternatively a lock-free queue. If you want/must use semaphores, a binary semaphore can serve as an exclusive lock.
EDIT:
In SBCL, you already have lock-free queues; you might want to use one of those instead of two stacks. Another possibility is to use atomic operations.
Finally, if that still doesn't suit you, use a mutex, wrapping the code that accesses and updates the stacks inside with-mutex or with-recursive-lock.
Be sure to use the lock/mutex after waking up from the semaphore, not around the waiting for the semaphore, otherwise you lose the advantage that semaphores give you, which is the possibility of waking up multiple waiters in a row, instead of one at a time.
You can read all about these things in the SBCL manual.
Also, I think some work has been done to rename every lock-like thing in SBCL to lock, according to this blog post, but I don't know the status of it and it states that the old names will be supported for a while.
You'll almost surely also need a semaphore of count = limit for producers, to not exceed your queue limit.
In your enqueue-*, you should signal the semaphore after updating the queue. The setf is not needed; push already stores the new head of the list in place.
In your dequeue-*, length is a lengthy function when applied to lists, but checking whether a list is empty is cheap with null or endp. Instead of taking the car and storing the cdr, you can use pop; it does exactly that.
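Putting those points together, enqueue-* might look roughly like this (a sketch; it still assumes that access to in-stack is additionally protected by a mutex or atomic operations, as discussed above):
(defmethod enqueue-* ((queue thread-queue) element)
  ;; update the queue first, then signal waiting consumers
  (push element (slot-value queue 'in-stack))
  (signal-semaphore (slot-value queue 'semaphore)))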
You need to hold a mutual exclusion semaphore (aka a 'mutex') for the duration of your queue operations. Use an SBCL mutex like so:
(defclass thread-queue ()
  ((lock :initform (sb-thread:make-mutex :name 'thread-queue-lock))
   ...))

(defmethod enqueue-* ((queue thread-queue) element)
  (sb-thread:with-recursive-lock ((slot-value queue 'lock))
    (setf (slot-value queue 'in-stack) (push element (slot-value queue 'in-stack)))))
* (defvar lock (sb-thread:make-mutex))
LOCK
* lock
#S(SB-THREAD:MUTEX
   :NAME NIL
   :%OWNER NIL
   :LUTEX #<unknown pointer object, widetag=#x5E {11CEB15F}>)
* (sb-thread:with-recursive-lock (lock) 'foo)
FOO
* (sb-thread:with-recursive-lock (lock) (sb-thread:with-recursive-lock (lock) 'foo))
FOO
Presumably the with-recursive-lock macro will do the right thing (unlock the lock, using unwind-protect or some such) for a non-local exit.
This is the equivalent of Java's synchronized: the above protects the enqueue-* method; you'd need to do the same for every other method that can be called concurrently.
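For completeness, dequeue-* could follow the same pattern, combined with the ordering advice from the other answer (a sketch; it assumes the lock slot defined above, waits on the semaphore first, and only takes the lock for the actual stack manipulation):
(defmethod dequeue-* ((queue thread-queue) &key timeout)
  ;; wait until an element is available (or the timeout expires)
  (when (wait-on-semaphore (slot-value queue 'semaphore) :timeout timeout)
    (sb-thread:with-recursive-lock ((slot-value queue 'lock))
      (when (endp (slot-value queue 'out-stack))
        (setf (slot-value queue 'out-stack) (nreverse (slot-value queue 'in-stack))
              (slot-value queue 'in-stack) nil))
      (pop (slot-value queue 'out-stack)))))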
Currently, while experimenting with continuations in functional languages, my understanding is that a continuation records the current program counter and register file, and that when a continuation is invoked, the PC and the register file are restored to the values it recorded.
So in the following dumb example from Might's blog post,
; right-now : -> moment
(define (right-now)
  (call-with-current-continuation
   (lambda (cc)
     (cc cc))))

; go-when : moment -> ...
(define (go-when then)
  (then then))

; An infinite loop:
(let ((the-beginning (right-now)))
  (display "Hello, world!")
  (newline)
  (go-when the-beginning)) ; the-beginning (a continuation) is passed to go-when, which applies the continuation to itself, resuming the program at the point it recorded (in my mental model, the saved PC and register state)
I am not sure my understanding is right. Please correct me if you think it is not.
Program counter and register files are not what the continuation records.
The best way to describe the meaning of call-with-current-continuation is that it records the program context. For instance, suppose you're evaluating the program
(+ 3 (f (call-with-current-continuation g)))
In this case, the context of the call-with-current-continuation expression would be
(+ 3 (f [hole]))
That is, the stuff surrounding the current expression.
Call-with-current-continuation captures one of these contexts. Invoking a continuation causes the replacement of the current context with the one stored in the continuation.
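To make this concrete, here is a small sketch (assuming, purely for illustration, that f is the identity function and that g stashes the continuation in a variable k before returning 0):
(define k #f)
(define (f x) x)              ; assumed to be the identity, for illustration
(define (g cc) (set! k cc) 0)

(+ 3 (f (call-with-current-continuation g))) ; evaluates to 3
(k 10) ; re-enters the saved context (+ 3 (f [hole])) with 10 in the hole, producing 13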
The idea of a context is a lot like that of a stack, except that there's nothing special about function calls in contexts.
This is a very brief treatment. I strongly urge you to take a look at Shriram Krishnamurthi's (free, online) book PLAI, in particular Part VII, for a more detailed and careful look at this topic.