Some samples I use from Freesound.org have a slight click at the end, e.g.:
repl> (use 'overtone.live)
nil
repl> (def stick (freesound 82280))
#'repl/stick
repl> (stick)
So I'm trying to wrap this sample in an envelope, but all I get is silence. I suspect there's something wrong with my use of buf-rd...
(definst stick1
[amp 0.7]
(let [env (env-gen (perc) :action FREE)
phase (phasor:ar :start 0 :end 1 :rate 1)
index (* phase (buf-frames stick))
snd (buf-rd 1 stick index)]
(* amp env snd)))
(stick1)
play-buf is the correct function for incorporating a sample into an envelope. perc is used to set an attack of 0.01 seconds and a release of 1 second before silence, which kills the click.
(def stick (freesound 82280))
(definst stick1
[amp 0.7]
(let [env (env-gen (perc 0.01 1) :action FREE)
snd (play-buf 1 stick)]
(* amp env snd)))
(stick1)
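If you want to tweak the envelope per hit, one variation (a sketch, not from the answer; stick2, attack and release are names I've introduced, and it assumes the same stick buffer and that your Overtone version accepts instrument parameters inside perc, which it normally does) is to expose the attack and release as instrument parameters:
;; Sketch: same idea as stick1, but with the perc envelope's attack and release
;; exposed as instrument parameters.
(definst stick2
  [amp 0.7 attack 0.01 release 1.0]
  (let [env (env-gen (perc attack release) :action FREE)
        snd (play-buf 1 stick)]
    (* amp env snd)))

(stick2)               ; defaults: same sound as the answer's stick1
(stick2 0.5 0.01 0.3)  ; quieter hit with a shorter release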
I'm playing with Clojure, writing a script that reads a sequence of URIs from a file and reports on their status codes.
I've implemented this using clojure.core.async/pipeline-async to execute the HTTP call for each URI (using an async httpkit call).
I want to monitor the execution of the script, so I have an atom for the state:
(let [processing (atom [(System/currentTimeMillis) 0])]
and a function to track the progress.
(defn track-progress [total progress]
(swap! progress
(fn [[time count]]
(let [incremented-count (inc count)
now (System/currentTimeMillis)]
(if (= 0 (mod incremented-count (max 1 (int (/ total 20)))))
(do
(println (str "Progress " incremented-count "/" total " | " (- now time) "ms"))
[now incremented-count])
[time incremented-count])))))
Using it after the HTTP call:
(a/pipeline-async
parallelism
output-chan
(fn [[an-id uri] result]
(http/head uri {:throw-exceptions false
:timeout timeout}
(fn [{:keys [status error]}]
(track-progress total processing)
(a/go
(if (nil? error)
(do (a/>! result [an-id (keyword (str status))])
(a/close! result))
(do (a/>! result [an-id :error])
(a/close! result)))))))
input-chan)
The processing atom is created in a let expression that wraps the pipeline-async call.
Everything seems to work fine, apart from the log.
I found that sometimes the logging is very weird, with output like this:
Progress 500/10000 | 11519ms
Progress 500/10000 | 11519msProgress 500/10000 | 11519ms
Progress 1000/10000 | 11446ms
Progress 1000/10000 | 11446ms
Progress 1500/10000 | 9503ms
Progress 2000/10000 | 7802ms
Progress 2500/10000 | 12822ms
Progress 2500/10000 | 12822msProgress 2500/10000 | 12822ms
Progress 2500/10000 | 12822ms
Progress 3000/10000 | 10623ms
Progress 3500/10000 | 9018ms
Progress 4000/10000 | 9618ms
Progress 4500/10000 | 13544ms
Progress 5000/10000 | 10541ms
Progress 5500/10000 | 10817ms
Progress 6000/10000 | 8921ms
Progress 6500/10000 | 9078ms
Progress 6500/10000 | 9078ms
Progress 7000/10000 | 9270ms
Progress 7500/10000 | 11826msProgress 7500/10000 | 11826msProgress 7500/10000 | 11826ms
The output is shown exactly as it was written to the shell; it seems that sometimes the same println is executed multiple times, or that the fn passed to swap! runs in parallel (with no concurrency control) on the atom.
(If I remove the str used to build the string passed to println, the lines where I get the same progress multiple times are completely interleaved, like ProgressProgress 7500/10000 | 11826ms7500/100007500 | 11826msProgress/10000 | 11826ms)
Is there something wrong with my code?
Or am I getting atoms wrong? I assumed an atom would not allow parallel execution of a function that changes its state.
A Clojure atom is designed specifically so that in a multi-threaded program, there can be multiple threads executing swap! on a single atom, and if your program does this, those update functions f given to swap! can run simultaneously. The only part of swap! that is synchronized is a 'compare and swap' operation that effectively does:
Lock the atom's state.
Check whether its current value is identical? to the reference it contained before f began executing, and if it is, replace it with the new object returned by f.
Unlock the atom's state.
The function f may take a long time to calculate a new value from the current one, but the critical section above is merely a pointer comparison, and if equal, a pointer assignment.
That is why the doc string for swap! says "Note that f may be called multiple times, and thus should be free of side effects."
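To make that concrete, here is a minimal sketch (not from the question's code; the names are mine) that makes the retries visible: the update function given to swap! increments a separate call counter as a side effect, so under contention the counter usually ends up larger than the number of successful updates.
;; Sketch: demonstrate that the fn passed to swap! may run more times than
;; there are updates, because conflicting swaps are retried.
(def value (atom 0))
(def update-fn-calls (atom 0))

(defn slow-inc [x]
  (swap! update-fn-calls inc)   ; side effect inside the update fn
  (Thread/sleep 1)              ; widen the window for a conflicting swap!
  (inc x))

(let [workers (doall (for [_ (range 50)]
                       (future (swap! value slow-inc))))]
  (run! deref workers)
  (println "final value:" @value "- update fn was called" @update-fn-calls "times"))
;; Typical output: final value is 50, but the update fn is usually called
;; more than 50 times.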
What you want is to serialize the output stream from a group of concurrently executing threads. You could use an agent to serialize access to a piece of mutable state, but here you have a degenerate case without state, only with side-effects. For this case, the locking function is all you need.
An example:
(ns tst.demo.core
  (:use demo.core tupelo.core tupelo.test)
  (:require [clojure.string :as str]))   ; str/join is used in the char-by-char version below
(defn do-println
[& args]
(apply println args))
(def lock-obj (Object.))
(defn do-println-locking
[& args]
(locking lock-obj
(apply println args)))
(def sleep-millis 500)
(defn wait-and-print
[print-fn id]
(Thread/sleep sleep-millis)
(print-fn (format "wait-and-print %s is complete" id)))
(defn start-threads
[print-fn count]
(println "-----------------------------------------------------------------------------")
(let [futures (forv [i (range count)]
(future (wait-and-print print-fn i)))]
(doseq [future futures]
; block until future is complete
(deref future))))
(dotest
(start-threads do-println 10)
(start-threads do-println-locking 10))
Typical result:
--------------------------------------
Clojure 1.10.2-alpha1 Java 15
--------------------------------------
Testing tst.demo.core
-----------------------------------------------------------------------------
wait-and-print 4 is completewait-and-print 3 is completewait-and-print 2 is complete
wait-and-print 8 is completewait-and-print 9 is complete
wait-and-print 6 is completewait-and-print 1 is complete
wait-and-print 7 is complete
wait-and-print 0 is complete
wait-and-print 5 is complete
-----------------------------------------------------------------------------
wait-and-print 5 is complete
wait-and-print 8 is complete
wait-and-print 7 is complete
wait-and-print 9 is complete
wait-and-print 6 is complete
wait-and-print 3 is complete
wait-and-print 0 is complete
wait-and-print 4 is complete
wait-and-print 2 is complete
wait-and-print 1 is complete
So you can see that the output without the serialization provided by locking is jumbled, while each println in the second case is allowed to complete one at a time (even though the order is still random).
If println printed one char at a time instead of one string at a time, the results in the unsynchronized case would be even more jumbled. Modify the output functions to print each character separately:
(defn do-println
[& args]
(doseq [ch (str/join args)]
(print ch))
(newline))
(def lock-obj (Object.))
(defn do-println-locking
[& args]
(locking lock-obj
(apply do-println args)))
with typical result:
--------------------------------------
Clojure 1.10.2-alpha1 Java 15
--------------------------------------
Testing tst.demo.core
-----------------------------------------------------------------------------
wwwwwaaawwiiiattti--taaa--nnaiddnaa--dwpp-irrptaiir-niiantnttn -dw2ta- ani96ipds trn- i-pcndrota-impn nrpd4itl- n eipt5tr s e7i
incisots mc0cpo olmmieppstll ee
etctteo
e-
amnidps-l pectroeai
intt- a1n di-sip rcsio nmctmpo plm3lew etaiei
spt t-lceeatone
d
m-pplreitnet
8 is complete
-----------------------------------------------------------------------------
wait-and-print 3 is complete
wait-and-print 9 is complete
wait-and-print 8 is complete
wait-and-print 4 is complete
wait-and-print 6 is complete
wait-and-print 7 is complete
wait-and-print 0 is complete
wait-and-print 1 is complete
wait-and-print 5 is complete
wait-and-print 2 is complete
but we see that locking serializes the function calls so that the active call must complete before the next can begin.
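Coming back to the question's track-progress, a possible restructuring along these lines (a sketch, not from the original answer; it uses swap-vals!, available since Clojure 1.9) keeps the swap! update function pure and does the println afterwards, outside swap!, serialized with locking:
;; Sketch: pure update fn inside swap-vals!, side effect (println) done once
;; per call, outside the swap, and serialized with a lock object.
(def print-lock (Object.))

(defn track-progress [total progress]
  (let [step (max 1 (quot total 20))
        [[old-time _] [new-time new-count]]
        (swap-vals! progress
                    (fn [[time count]]
                      (let [c (inc count)]
                        (if (zero? (mod c step))
                          [(System/currentTimeMillis) c]
                          [time c]))))]
    (when (zero? (mod new-count step))
      (locking print-lock
        (println (str "Progress " new-count "/" total
                      " | " (- new-time old-time) "ms"))))))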
I'm new to F#, and functional languages. So this might be stupid question, or duplicated with this Recursive objects in F#?, but I don't know.
Here is a simple Fibonacci function:
let rec fib n =
match n with
| 0 -> 1
| 1 -> 1
| _ -> fib (n - 1) + fib (n - 2)
Its signature is int -> int.
It can be rewritten as:
let rec fib =
fun n ->
match n with
| 0 -> 1
| 1 -> 1
| _ -> fib (n - 1) + fib (n - 2)
Its signature is (int -> int) (in Visual Studio for Mac).
So what's the difference with the previous one?
If I add one more line like this:
let rec fib =
printfn "fib" // <-- this line
fun n ->
match n with
| 0 -> 1
| 1 -> 1
| _ -> fib (n - 1) + fib (n - 2)
The IDE gives me a warning:
warning FS0040: This and other recursive references to the object(s) being defined will be checked for initialization-soundness at runtime through the use of a delayed reference. This is because you are defining one or more recursive objects, rather than recursive functions. This warning may be suppressed by using '#nowarn "40"' or '--nowarn:40'.
How does this line affect the initialization?
What does "recursive object" mean? I can't find it in the documentation.
Update
Thanks for your replies, really nice explanation.
After reading your answers, I have some ideas about the Recursive Object.
First, I made a mistake about the signature. The first two code snippets above have the same signature, int -> int, but the last one has the signature (int -> int) (note: the signatures are displayed differently in VS Code with the Ionide extension).
I think the difference between the two signatures is that the first one means it's just a function, while the other means it's a reference to a function, that is, an object.
And every let rec something without a parameter list is an object rather than a function (see the function definition), while the second snippet is an exception, possibly optimized by the compiler into a function.
One example:
let rec x = (fun () -> x + 1)() // same warning, saying `x` is a recursive object
The only reason I can think of is that the compiler is not smart enough; it throws a warning just because it's a recursive object, as the warning indicates,
This is because you are defining one or more recursive objects, rather than recursive functions
even though this pattern would never cause a problem.
let rec fib =
// do something here; if fib were invoked here directly, it would definitely be an error, not a warning.
fun n ->
match n with
| 0 -> 1
| 1 -> 1
| _ -> fib (n - 1) + fib (n - 2)
What do you think about this?
"Recursive objects" are just like recursive functions, except they are, well, objects. Not functions.
A recursive function is a function that references itself, e.g.:
let rec f x = f (x-1) + 1
A recursive object is similar, in that it references itself, except it's not a function, e.g.:
let rec x = x + 1
The above will actually not compile. The F# compiler is able to correctly determine the problem and issue an error: The value 'x' will be evaluated as part of its own definition. Clearly, such a definition is nonsensical: in order to calculate x, you need to already know x. Does not compute.
But let's see if we can be more clever. How about if I close x in a lambda expression?
let rec x = (fun() -> x + 1) ()
Here, I wrap the x in a function, and immediately call that function. This compiles, but with a warning - the same warning that you're getting, something about "checking for initialization-soundness at runtime".
So let's go to runtime:
> let rec x = (fun() -> x + 1) ()
System.InvalidOperationException: ValueFactory attempted to access the Value property of this instance.
Not surprisingly, we get an error: turns out, in this definition, you still need to know x in order to calculate x - same as with let rec x = x + 1.
But if this is the case, why does it compile at all? Well, it just so happens that, in general, it is impossible to strictly prove that x will or will not access itself during initialization. The compiler is just smart enough to notice that it might happen (and this is why it issues the warning), but not smart enough to prove that it will definitely happen.
So in cases like this, in addition to issuing a warning, the compiler will install a runtime guard, which will check whether x has already been initialized when it's being accessed. The compiled code with such guard might look something like this:
let mutable x_initialized = false
let rec x =
let x_temp =
(fun() ->
if not x_initialized then failwith "Not good!"
else x + 1
) ()
x_initialized <- true
x_temp
(the actual compiled code looks different, of course; use ILSpy to look if you're curious)
In certain special cases, the compiler can prove one way or another. In other cases it can't, so it installs runtime protection:
// Definitely bad => compile-time error
let rec x = x + 1
// Definitely good => no errors, no warnings
let rec x = fun() -> x() + 1
// Might be bad => compile-time warning + runtime guard
let rec x = (fun() -> x+1) ()
// Also might be bad: no way to tell what the `printfn` call will do
let rec x =
printfn "a"
fun() -> x() + 1
There's a major difference between the last two versions. Notice adding a printfn call to the first version generates no warning, and "fib" will be printed each time the function recurses:
let rec fib n =
printfn "fib"
match n with
| 0 -> 1
| 1 -> 1
| _ -> fib (n - 1) + fib (n - 2)
> fib 10;;
fib
fib
fib
...
val it : int = 89
The printfn call is part of the recursive function's body. But the third/final version prints "fib" only once, when the function is defined, and then never again.
What's the difference? In the third version you're not defining just a recursive function: there are other expressions creating a closure over the lambda, resulting in a recursive object. Consider this version:
let rec fib3 =
let x = 1
let y = 2
fun n ->
match n with
| 0 -> x
| 1 -> x
| _ -> fib3 (n - x) + fib3 (n - y)
fib3 is not a plain recursive function; there's a closure over the function capturing x and y (the same is true for the printfn version, although there it's just a side effect). This closure is the "recursive object" referred to in the warning. x and y are not redefined on each recursive call; they're part of the root-level closure/recursive object.
From the linked question/answer:
because [the compiler] cannot guarantee that the reference won't be accessed before it is initialized
Although it doesn't apply in your particular example, it's impossible for the compiler to know whether you're doing something harmless, or potentially referencing/invoking the lambda in the fib3 definition before fib3 has a value/has been initialized. Here's another good answer explaining the same thing.
I'd like to run a code like
(->> input
(partition-all 5)
(map a-side-effect)
dorun)
asynchronously, decoupling the input from the output (a-side-effect).
So I've written the experimental code below.
;; using boot-clj
(set-env! :dependencies '[[org.clojure/core.async "0.2.374"]])
(require '[clojure.core.async :as async :refer [<! <!! >! >!!]])
(let [input (range 18)
c (async/chan 1 (comp (partition-all 5)
(map prn)))]
(async/onto-chan c input false)
(async/close! c))
Explanation of this code:
In reality the elements of input and their quantity are not known before the code runs, and the elements of input can come in groups of anywhere from 0 to 10 at a time.
async/onto-chan is used to put a seq of elements (a fragment of input) onto the channel c; it will be called many times, which is why the 3rd argument is false.
prn is a substitute for a-side-effect.
I expected the code above to print
[0 1 2 3 4]
[5 6 7 8 9]
[10 11 12 13 14]
[15 16 17]
in the REPL; however, it prints nothing.
Then I added a wait, like this:
(let [c (async/chan 1 (comp (partition-all 5)
(map prn)))]
(async/onto-chan c (range 18) false)
(Thread/sleep 1000) ;wait
(async/close! c))
This code gave the expected output above.
Then I inspected core.async/onto-chan.
Here is what I think happened:
the channel c was core.async/close!ed in my code;
each item of the argument to core.async/onto-chan was then put (core.async/>!) in vain inside onto-chan's go-loop, because the channel c was already closed.
Is there a reliable way to put all the items before close!ing?
Should I write a synchronous version of onto-chan that doesn't use a go-loop?
Or is my idea wrong?
Your second example with Thread/sleep only ‘works’ by accident.
The reason it works is that every transformed result value that comes out of c’s transducer is nil, and since nils are not allowed in channels, an exception is thrown and no value is put into c. This is what allows the producer onto-chan to keep putting into the channel rather than blocking. If you paste your second example into the REPL you’ll see four stack traces – one for each partition.
The nils are of course due to mapping over prn, which is a side-effecting function that returns nil for all inputs.
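You can see both effects with a minimal check (independent of your code; the exact error reporting, a thrown exception versus a stack trace printed by the channel's default ex-handler, can differ between core.async versions):
;; Sketch: nil is not a legal channel value, and a transducer step that
;; produces nil makes the put fail instead of parking the producer.
(require '[clojure.core.async :as async])

;; 1. Putting nil directly is rejected:
;;    (async/>!! (async/chan) nil)   ; throws IllegalArgumentException

;; 2. A transducer that yields nil, like (map prn), breaks every put the same way:
(let [c (async/chan 1 (map prn))]
  (async/>!! c :anything)                  ; prn prints :anything, but nothing lands in c
  (async/close! c)
  (println "taken from c:" (async/<!! c))) ; => nil, the channel was empty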
If I understand your design correctly, your goal is to do something like this:
(defn go-run! [ch proc]
(async/go-loop []
(when-let [value (<! ch)]
(proc value)
(recur))))
(let [input (range 18)
c (async/chan 1 (partition-all 5))]
(async/onto-chan c input)
(<!! (go-run! c prn)))
You really do need a producer and a consumer, else your program will block. I’ve introduced a go-loop consumer.
Very generally speaking, map and side-effects don’t go together well, so I’ve extracted the side-effecting prn into the consumer.
onto-chan cannot be called ‘many times’ (at least in the code shown) so it doesn’t need the false argument.
Taking megakorre's idea:
(let [c (async/chan 1 (comp (partition-all 5)
(map prn)))
put-ch (async/onto-chan c (range 18) false)]
(async/alts!! [put-ch])
(async/close! c))
I'm reading Concepts, Techniques, and Models of Computer Programming, and there's some code at the beginning that I just cannot understand, no matter how hard I try.
declare Pascal AddList ShiftLeft ShiftRight
fun {Pascal N}
if N==1 then [1]
else
L in
L = {Pascal N-1} % Recursion
{AddList {ShiftLeft L}
{ShiftRight L}}
end
end
fun {ShiftLeft L}
case L of H|T then
H|{ShiftLeft T} % Recursion
else [0]
end
end
fun {ShiftRight L}
0 | L
end
fun {AddList L1 L2}
case L1 of H1|T1 then
case L2 of H2|T2
then
H1+H2|{AddList T1 T2} % Recursion
end
else nil
end
end
I kind of get the language constructs (this is the introduction to it), but the thing that really stands in my way is the recursion.
I'm trying to put a label on each recursion call that will abstractly say what goes in here, but I just can't figure it out.
What I ask for is a clear and easy explanations of how these functions work.
Start with N == 1: This is simple. The result is just [1].
Now check for N == 2:
First we calculate L = {Pascal N-1} = {Pascal 2-1} = {Pascal 1} = [1]
Now shifted to the left: [1 0]
Shifted to the right: [0 1]
AddList just adds elementwise. So the result for {Pascal 2} is [1 1].
Now for N == 3:
{Pascal 2} = [1 1]
Shifted left: [1 1 0]
Shifted right: [0 1 1]
Added: [1 2 1]
Of course the program works the other way around: It starts with some larger N. But at the beginning of the Pascal function the program recurses repeatedly until the parameter N has become 1. Something like this:
{Pascal 3}
{Pascal 2}
{Pascal 1}
[1]
[1 1]
[1 2 1]
Edit: There are actually two kinds of recursion in the program. The first one, in Pascal, starts with some integer N and recurses down to 1.
The other (in the helper functions) starts with a list consisting of a head and a tail, and stops as soon as the list is empty, i.e. cannot be split any more. (This uses so-called cons lists, an intrinsically recursive data type.)
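(An aside, not from the book: if the Oz syntax itself is getting in the way, here is one possible Clojure transliteration of the same four functions; the recursion structure is identical to the Oz version.)
;; Rough Clojure sketch of the book's functions.
(defn shift-left [l]
  (if (seq l)
    (cons (first l) (shift-left (rest l)))  ; keep the head, recurse on the tail
    [0]))                                   ; empty list => append a trailing 0

(defn shift-right [l]
  (cons 0 l))                               ; prepend a leading 0

(defn add-list [l1 l2]
  (when (seq l1)
    (cons (+ (first l1) (first l2))         ; add element-wise
          (add-list (rest l1) (rest l2)))))

(defn pascal [n]
  (if (= n 1)
    [1]
    (let [l (pascal (dec n))]               ; recursion: previous row
      (add-list (shift-left l) (shift-right l)))))

;; (pascal 3) ;=> (1 2 1)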
wmeyer's explanation is very nice. I just want to add a possibly helpful 'visualization' -->
First of all, I'm using the original version of the book (PDF), I believe, and the functions look like this -->
declare Pascal AddList ShiftLeft ShiftRight
fun {Pascal N}
if N==1 then [1]
else
{AddList {ShiftLeft {Pascal N-1}} {ShiftRight {Pascal N-1}}}
end
end
fun {ShiftLeft L}
case L of H|T then
H|{ShiftLeft T}
else [0] end
end
fun {ShiftRight L} 0|L end
fun {AddList L1 L2}
case L1 of H1|T1 then
case L2 of H2|T2 then
H1+H2|{AddList T1 T2}
end
else nil end
end
Imagine that you want to see row eight of Pascal's triangle. You are going to enter:
{Browse {Pascal 8}}
i.e. you want to display the result of feeding 8 to the function Pascal as defined in the book/here.
First the function tests whether the value it was just passed is 1. This will not be true until the LAST level of the recursion (the final recursive call(s)), at which point that [1] (from if N==1) is returned as the output of THAT call of Pascal and passed back up the 'chain' of executions (of Pascal) to the most recent caller, where it goes through the matching ShiftLeft or ShiftRight and into AddList; THAT result is then sent back up the chain, and so on, until it reaches the first call (Pascal 8). So the calls go 8 levels deep, then the answers are passed back up those levels until you get the final answer... but I've jumped ahead.
Ok, since you fed it an 8, the test N==1 fails, so instead of being able to shift 'the lists' and add them together in the else clause right away, the function, unable to do that with undefined terms in the 'equations', says "I'll try N - 1! Maybe THAT will be the final answer!!" (once for ShiftLeft AND once for ShiftRight, so this branching occurs each time the recursion happens).
So, the function waits for that answer from Pascal N-1 inside ShiftLeft and ShiftRight... waiting, waiting...
Well, {Pascal 7} won't satisfy N==1 either, so the newer calls (the 2nd AND 3rd calls, left and right!) of Pascal will BOTH also ask "What is Pascal N - 1?" (7-1 this time), and they will both wait for the answer...
This goes on and on and on and on.... oh wait, until N==1!
Then [1], a list, is returned BACK UP THE CHAIN... so each successive waiting function call, most recent first (keep in mind these calls multiply on the way down to this 'bottom' where N==1, since each call branches into ShiftLeft and ShiftRight), can finally make its AddList calculation with the answers it has been waiting on from its own personal, private calls to ShiftLeft and ShiftRight.
Everything goes all the way down to the bottom, splitting into more and more function calls; then we come back up to the top and finally get an answer returned. That final answer comes from the else clause of the first call to the Pascal function, {Pascal 8}, which at that point (since {Pascal 7} is [1 6 15 20 15 6 1]) will look like:
{AddList [1 6 15 20 15 6 1 0] [0 1 6 15 20 15 6 1]} <-- the shifted copies of {Pascal 7} that get added at the top level
Which, once added, is the single list returned as the final answer and displayed: [1 7 21 35 35 21 7 1]
(defmacro nif [expr pos zer neg]
'(condp = (Integer/signum ~expr)
-1 ~neg
0 ~zer
1 ~pos))
I get this error.
1:1 user=> #<Namespace Chapter7Macros>
1:2 Chapter7Macros=> (nif 1 (+ 2 2) (- 2 2) (- 3 2))
1:3 Chapter7Macros=> java.lang.Exception: Unable to resolve symbol: expr in this context (repl-1:57)
Replace the quote (') with a backtick (`) to enable syntax-quoting.
In general, using (macroexpand-1 '(nif 1 ... )) will help a lot by showing you the code your macro actually expands into.
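With that one change the macro expands as intended; a sketch of the fixed version and (approximately) what macroexpand-1 shows for it:
;; Same macro with syntax-quote: expr, pos, zer and neg are unquoted into the
;; expansion instead of appearing as unresolved symbols at expansion time.
(defmacro nif [expr pos zer neg]
  `(condp = (Integer/signum ~expr)
     -1 ~neg
     0  ~zer
     1  ~pos))

;; Checking the expansion (output shown approximately):
;; (macroexpand-1 '(nif 1 (+ 2 2) (- 2 2) (- 3 2)))
;; => (clojure.core/condp clojure.core/= (java.lang.Integer/signum 1)
;;      -1 (- 3 2)
;;      0  (- 2 2)
;;      1  (+ 2 2))

(nif 1 (+ 2 2) (- 2 2) (- 3 2))   ; => 4, the pos branch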