How to @everywhere include(iter) from a for loop - julia

say I have a function like this
function paths(loc)
    if loc[end] == '/'   # loc[end] is a Char, so compare against a Char
        loc = loc[1:end-1]
    end
    [string(loc, script) for script in
        ["/s1.jl", "/s2.jl", "/s3.jl"]]
end
and I want to do this:
for p in paths("/some/path")
    @everywhere include(p)
end
I get the error:
ERROR: UndefVarError: p not defined
eval(::Module, ::Any) at ./boot.jl:235
eval_ew_expr at ./distributed/macros.jl:116 [inlined]
(::Base.Distributed.##135#136{Base.Distributed.#eval_ew_expr,Tuple{Expr},Array{Any,1}})() at ./distributed/remotecall.jl:314
run_work_thunk(::Base.Distributed.##135#136{Base.Distributed.#eval_ew_expr,Tuple{Expr},Array{Any,1}}, ::Bool) at ./distributed/process_messages.jl:56
#remotecall_fetch#140(::Array{Any,1}, ::Function, ::Function, ::Base.Distributed.LocalProcess, ::Expr, ::Vararg{Expr,N} where N) at ./distributed/remotecall.jl:339
remotecall_fetch(::Function, ::Base.Distributed.LocalProcess, ::Expr, ::Vararg{Expr,N} where N) at ./distributed/remotecall.jl:339
#remotecall_fetch#144(::Array{Any,1}, ::Function, ::Function, ::Int64, ::Expr, ::Vararg{Expr,N} where N) at ./distributed/remotecall.jl:367
remotecall_fetch(::Function, ::Int64, ::Expr, ::Vararg{Expr,N} where N) at ./distributed/remotecall.jl:367
(::##189#191)() at ./distributed/macros.jl:102
#remotecall_fetch#140(::Array{Any,1}, ::Function, ::Function, ::Base.Distributed.LocalProcess, ::Expr, ::Vararg{Expr,N} where N) at ./distributed/remotecall.jl:340
remotecall_fetch(::Function, ::Base.Distributed.LocalProcess, ::Expr, ::Vararg{Expr,N} where N) at ./distributed/remotecall.jl:339
#remotecall_fetch#144(::Array{Any,1}, ::Function, ::Function, ::Int64, ::Expr, ::Vararg{Expr,N} where N) at ./distributed/remotecall.jl:367
remotecall_fetch(::Function, ::Int64, ::Expr, ::Vararg{Expr,N} where N) at ./distributed/remotecall.jl:367
(::##189#191)() at ./distributed/macros.jl:102
Stacktrace:
[1] sync_end() at ./task.jl:287
[2] macro expansion at ./distributed/macros.jl:112 [inlined]
[3] macro expansion at ./REPL[33]:2 [inlined]
[4] anonymous at ./<missing>:?
The question is, if this is possible, how do I get this to work?

Replace @everywhere include(p) with @everywhere include($p). The $ is interpolation: it substitutes the symbol p with its value in the expression argument to @everywhere.
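A minimal, self-contained sketch of the fix (no workers are added here, so @everywhere only runs on the local process; on Julia ≥ 0.7 you also need using Distributed, while the 0.6 version shown in the trace has it in Base):

```julia
using Distributed  # needed on Julia >= 0.7; the question's trace shows Base.Distributed (0.6)

x = 42
# @everywhere evaluates a quoted *expression* on every process, where the local
# name x is undefined; $x splices in the value 42 before the expression is shipped.
@everywhere y = $x

# The same idea fixes the loop from the question:
# for p in paths("/some/path")
#     @everywhere include($p)
# end
```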

Related

Isabelle order type-class on lambda expressions

I found this expression somewhere in Isabelle's standard library and tried to see what value does with it:
value "(λ x::bool . ¬x) ≤ (λ x . x)"
It outputs False. What is the meaning of ≤ here? Ideally, where can I find the exact instantiation of it? When I Ctrl+Click on the lambda symbol, jEdit doesn't take me anywhere. Is λ part of meta logic then? Where is it defined?
This and many other things are defined in the Lattices.thy theory of the Main library:
https://isabelle.in.tum.de/library/HOL/HOL/Lattices.html
under the following section.
subsection ‹Lattice on \<^typ>‹_ ⇒ _››
instantiation "fun" :: (type, semilattice_sup) semilattice_sup
begin
definition "f ⊔ g = (λx. f x ⊔ g x)"
lemma sup_apply [simp, code]: "(f ⊔ g) x = f x ⊔ g x"
  by (simp add: sup_fun_def)
instance
  by standard (simp_all add: le_fun_def)
end
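Note that this section defines the ⊔ operation; the ≤ on functions itself comes from the pointwise order instantiation of "fun" (unfolded by le_fun_def, which the instance proof above uses). From memory, so treat the exact form as an approximation of the HOL sources, it looks like this:

```isabelle
(* Sketch of the pointwise order on functions, as in the HOL sources. *)
instantiation "fun" :: (type, ord) ord
begin
definition le_fun_def: "f ≤ g ⟷ (∀x. f x ≤ g x)"
definition "(f :: 'a ⇒ 'b) < g ⟷ f ≤ g ∧ ¬ (g ≤ f)"
instance ..
end
```

On bool, ≤ is implication, so the value query checks (¬x ⟶ x) for both booleans and fails at x = False, which is why it prints False.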

Function termination proof in Isabelle

I have a datatype stack_op which consists of several (~20) cases. I'm trying to write a function that skips some of those cases in a list:
function (sequential) skip_expr :: "stack_op list ⇒ stack_op list" where
  "skip_expr [] = []"
| "skip_expr ((stack_op.Unary _)#other) = (skip_expr other)"
| "skip_expr ((stack_op.Binary _)#other) = skip_expr (skip_expr other)"
| "skip_expr ((stack_op.Value _)#other) = other"
| "skip_expr other = other"
  by pat_completeness auto
termination by lexicographic_order
which seems always to terminate, but trying lexicographic_order leaves unresolved cases:
Calls:
c) stack_op.Binary uv_ # other ~> skip_expr other
Measures:
1) size_list size
2) length
Result matrix:
1 2
c: ? ?
(size_change also doesn't work.)
I've read https://isabelle.in.tum.de/dist/Isabelle2021/doc/functions.pdf, but it didn't help. (Maybe there are more complex examples of termination proofs somewhere?)
I tried to rewrite the function, adding another parameter:
function (sequential) skip_expr :: "stack_op list ⇒ nat ⇒ stack_op list" where
  "skip_expr l 0 = l"
| "skip_expr [] _ = []"
| "skip_expr ((stack_op.Unary _)#other) depth = (skip_expr other (depth - 1))"
| "skip_expr ((stack_op.Binary _)#other) depth =
    (let buff1 = (skip_expr other (depth - 1))
     in (skip_expr buff1 (length buff1)))"
| "skip_expr ((stack_op.Value _)#other) _ = other"
| "skip_expr other _ = other"
  by pat_completeness auto
termination by (relation "measure (λ(_,dep). dep)") auto
which generates unresolved subgoal:
1. ⋀other v. skip_expr_dom (other, v) ⟹ length (skip_expr other v) < Suc v
which I also don't know how to prove.
Could anyone explain how such cases are solved (as far as I can tell, the problem lies in the two-level recursive call on the right-hand side of the stack_op.Binary case)? Or maybe there is another way to implement such a skip?
Thanks in advance
The lexicographic_order method simply tries to solve the arising goals with the simplifier, so if the simplifier gets stuck you end up with unresolved termination subgoals.
In this case, as you identified correctly, the problem is that you have a nested recursive call skip_expr (skip_expr other). This is always problematic because at this stage, the simplifier knows nothing about what skip_expr does to the input list. For all we know, it might just return the list unmodified, or even a longer list, and then it surely would not terminate.
Confronting the issue head on
The solution is to show something about length (skip_expr …) and make that information available to the simplifier. Because we have not yet shown termination of the function, we have to use the skip_expr.psimps rules and the partial induction rule skip_expr.pinduct, i.e. every statement we make about skip_expr xs always has as a precondition that skip_expr actually terminates on the input xs. For this, there is the predicate skip_expr_dom.
Putting it all together, it looks like this:
lemma length_skip_expr [termination_simp]:
  "skip_expr_dom xs ⟹ length (skip_expr xs) ≤ length xs"
  by (induction xs rule: skip_expr.pinduct) (auto simp: skip_expr.psimps)
termination skip_expr by lexicographic_order
Circumventing the issue
Sometimes it can also be easier to circumvent the issue entirely. In your case, you could e.g. define a more general function skip_exprs that skips not just one instruction but n instructions. You can define this without nested recursion:
fun skip_exprs :: "nat ⇒ stack_op list ⇒ stack_op list" where
  "skip_exprs 0 xs = xs"
| "skip_exprs (Suc n) [] = []"
| "skip_exprs (Suc n) (Unary _ # other) = skip_exprs (Suc n) other"
| "skip_exprs (Suc n) (Binary _ # other) = skip_exprs (Suc (Suc n)) other"
| "skip_exprs (Suc n) (Value _ # other) = skip_exprs n other"
| "skip_exprs (Suc n) xs = xs"
Equivalence to your skip_expr is then straightforward to prove:
lemma skip_exprs_conv_skip_expr: "skip_exprs n xs = (skip_expr ^^ n) xs"
proof -
  have [simp]: "(skip_expr ^^ n) [] = []" for n
    by (induction n) auto
  have [simp]: "(skip_expr ^^ n) (Other # xs) = Other # xs" for xs n
    by (induction n) auto
  show ?thesis
    by (induction n xs rule: skip_exprs.induct)
       (auto simp del: funpow.simps simp: funpow_Suc_right)
qed
lemma skip_expr_Suc_0 [simp]: "skip_exprs (Suc 0) xs = skip_expr xs"
  by (simp add: skip_exprs_conv_skip_expr)
In your case, I don't think it actually makes sense to do this because figuring out the termination is fairly easy, but it may be good to keep in mind.

Reasoning about overlapping inductive definitions in Isabelle

I would like to prove the following lemma in Isabelle:
lemma "T (Open # xs) ⟹ ¬ S (Open # xs) ⟹ count xs Close ≤ count xs Open"
Please find the definitions below:
datatype paren = Open | Close
inductive S where
  S_empty: "S []" |
  S_append: "S xs ⟹ S ys ⟹ S (xs @ ys)" |
  S_paren: "S xs ⟹ S (Open # xs @ [Close])"
inductive T where
  T_S: "T []" |
  T_append: "T xs ⟹ T ys ⟹ T (xs @ ys)" |
  T_paren: "T xs ⟹ T (Open # xs @ [Close])" |
  T_left: "T xs ⟹ T (Open # xs)"
The lemma states that removing an Open bracket from an unbalanced parenthesis structure results in a possibly unbalanced structure.
I've been trying the techniques described in the book "Isabelle/HOL: A Proof Assistant for Higher-Order Logic", but so far none of them have worked. In particular, I tried rule inversion, rule induction, sledgehammer, and other techniques.
One of the problems is that I haven't yet learned about Isar proofs, which thus complicates the proof. I would prefer if you can orient me with plain apply commands.
Please find a proof below. It is not unlikely that it can be improved: I tried to follow the simplest route towards the proof and relied on sledgehammer to fill in the details.
theory so_raoidii
imports Complex_Main
begin
datatype paren = Open | Close
inductive S where
  S_empty: "S []" |
  S_append: "S xs ⟹ S ys ⟹ S (xs @ ys)" |
  S_paren: "S xs ⟹ S (Open # xs @ [Close])"
inductive T where
  T_S: "T []" |
  T_append: "T xs ⟹ T ys ⟹ T (xs @ ys)" |
  T_paren: "T xs ⟹ T (Open # xs @ [Close])" |
  T_left: "T xs ⟹ T (Open # xs)"
lemma count_list_lem:
  "count_list xsa a = n ⟹
   count_list ysa a = m ⟹
   count_list (xsa @ ysa) a = n + m"
  apply(induction xsa arbitrary: ysa n m)
  apply auto
  done
lemma T_to_count: "T xs ⟹ count_list xs Close ≤ count_list xs Open"
  apply(induction rule: T.induct)
  by (simp add: count_list_lem)+
lemma T_to_S_count: "T xs ⟹ count_list xs Close = count_list xs Open ⟹ S xs"
  apply(induction rule: T.induct)
  apply(auto)
  apply(simp add: S_empty)
  apply(metis S_append T_to_count add.commute add_le_cancel_right count_list_lem
        dual_order.antisym)
  apply(simp add: count_list_lem S_paren)
  using T_to_count by fastforce
lemma "T (Open # xs) ⟹
       ¬ S (Open # xs) ⟹
       count_list xs Close ≤ count_list xs Open"
  apply(cases "T xs")
  apply(simp add: T_to_count)
  using T_to_S_count T_to_count by fastforce
end

set integrable with functions multiplication

I'm trying to prove this lemma:
lemma set_integral_mult:
  fixes f g :: "_ ⇒ _ :: {banach, second_countable_topology}"
  assumes "set_integrable M A (λx. f x)" "set_integrable M A (λx. g x)"
  shows "set_integrable M A (λx. f x * g x)"
and
lemma set_integral_mult1:
  fixes f :: "_ ⇒ _ :: {banach, second_countable_topology}"
  assumes "set_integrable M A (λx. f x)"
  shows "set_integrable M A (λx. f x * f x)"
but I couldn't. I've seen that it is proved for addition and subtraction:
lemma set_integral_add [simp, intro]:
  fixes f g :: "_ ⇒ _ :: {banach, second_countable_topology}"
  assumes "set_integrable M A f" "set_integrable M A g"
  shows "set_integrable M A (λx. f x + g x)"
    and "LINT x:A|M. f x + g x = (LINT x:A|M. f x) + (LINT x:A|M. g x)"
  using assms by (simp_all add: scaleR_add_right)
lemma set_integral_diff [simp, intro]:
  assumes "set_integrable M A f" "set_integrable M A g"
  shows "set_integrable M A (λx. f x - g x)"
    and "LINT x:A|M. f x - g x = (LINT x:A|M. f x) - (LINT x:A|M. g x)"
  using assms by (simp_all add: scaleR_diff_right)
or even for scalar multiplication, but not for the multiplication of two functions?
The problem is that it is quite simply not true. The function f(x) = 1/sqrt(x) is integrable on the set (0,1], and the integral has the value 2. Its square f(x)² = 1/x, on the other hand, is not integrable on (0,1]: the integral diverges.
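A quick numeric sanity check of this counterexample, using the closed-form antiderivatives (plain Python, nothing from the question assumed): the integral of x^(-1/2) over [eps, 1] equals 2 - 2*sqrt(eps) and stays bounded as eps goes to 0, while the integral of x^(-1) over [eps, 1] equals -ln(eps) and grows without bound.

```python
from math import log, sqrt

def integral_inv_sqrt(eps):
    """Exact value of the integral of 1/sqrt(x) over [eps, 1]: 2 - 2*sqrt(eps)."""
    return 2.0 - 2.0 * sqrt(eps)

def integral_inv(eps):
    """Exact value of the integral of 1/x over [eps, 1]: -log(eps)."""
    return -log(eps)

for eps in (1e-2, 1e-6, 1e-12):
    print(f"eps={eps:g}  1/sqrt: {integral_inv_sqrt(eps):.6f}  1/x: {integral_inv(eps):.2f}")
# The first column converges to 2; the second keeps growing, i.e. 1/x is not
# integrable on (0,1] even though 1/sqrt(x) is.
```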

Clojure: update-in, but with wildcards and path tracking

I'm trying to update values in a structure consisting of nested maps and sequences, but update-in won't work because I want to allow wildcards. My manual approach led me to ugly, big, nested for and into {} calls. I ended up making a function that takes the structure, a selector-like sequence, and an update function.
(defn update-each-in
  ([o [head & tail :as path] f]
   (update-each-in o path f []))
  ([o [head & tail :as path] f current-path]
   (cond
     (empty? path) (f o current-path)
     (identical? * head)
     (cond
       (map? o)
       (into {} (for [[k v] o]
                  [k (update-each-in v tail f (conj current-path k))]))
       :else (for [[i v] (map-indexed vector o)]
               (update-each-in v tail f (conj current-path i))))
     :else (assoc o head
                  (update-each-in (get o head) tail f (conj current-path head))))))
This allows me to simplify my updates to the following
(def sample {"TR" [{:geometry {:ID12 {:buffer 22}}}
{:geometry {:ID13 {:buffer 33}
:ID14 {:buffer 55}}}
{:geometry {:ID13 {:buffer 44}}}]
"BR" [{:geometry {:ID13 {:buffer 22}
:ID18 {:buffer 11}}}
{:geometry {:ID13 {:buffer 33}}}
{:geometry {:ID13 {:buffer 44}}}]})
(update-each-in sample [* * :geometry * :buffer]
(fn [buf path] (inc buf)))
Obviously this has a stack overflow problem with deeply nested structures; although I'm far from hitting that one, it'd be nice to have a robust solution. Can anyone suggest a simpler/faster/more elegant solution? Could this be done with reducers/transducers?
UPDATE It's a requirement that the updating function also gets the full path to the value it's updating.
update-in has exactly the same signature as the function you created, and it does almost exactly the same thing. There are two differences: it doesn't allow wildcards in the "path," and it doesn't pass intermediary paths to the update function.
Adding wildcards to update-in
I've adapted this from the source code for update-in.
(defn update-in-*
  [m [k & ks] f & args]
  (if (identical? k *)
    (let [idx (if (map? m) (keys m) (range (count m)))]
      (if ks
        (reduce #(assoc % %2 (apply update-in-* (get % %2) ks f args))
                m idx)
        (reduce #(assoc % %2 (apply f (get % %2) args))
                m idx)))
    (if ks
      (assoc m k (apply update-in-* (get m k) ks f args))
      (assoc m k (apply f (get m k) args)))))
Now these two lines produce the same result:
(update-in-* sample [* * :geometry * :buffer] (fn [buf] (inc buf)))
(update-each-in sample [* * :geometry * :buffer] (fn [buf path] (inc buf)))
The change I made to update-in is just by branching on a check for the wildcard. If the wildcard is encountered, then every child-node at that level must be modified. I used reduce to keep the cumulative updates to the collection.
One more remark, in the interest of robustness: I'd use something other than * for the wildcard, since it could itself occur as a key in a map.
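For example (a sketch; ::wildcard is a hypothetical marker, not part of clojure.core): a namespaced keyword is interned, compares cheaply with identical?, and is far less likely to collide with real map keys than the clojure.core/* var.

```clojure
;; Hypothetical wildcard marker: a namespaced keyword instead of the var *.
(def wildcard ::wildcard)

;; The dispatch in update-in-* would become (identical? k wildcard), and the
;; call sites would read:
;;   (update-in-* sample [wildcard wildcard :geometry wildcard :buffer] inc)
```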
Adding path-tracking to update-in
If it is required that the updating function receive the full path, then I would just modify update-in one more time. The function signature gains a path parameter and the current key gets conj'd onto it at each step, but that's about it.
(declare update-in-*-with-path)  ; forward declaration, since the helper is defined below

(defn update-in-*
  [m ks f & args]
  (apply update-in-*-with-path [] m ks f args))

(defn- update-in-*-with-path
  [p m [k & ks] f & args]
  (if (identical? k *)
    (let [idx (if (map? m) (keys m) (range (count m)))]
      ;; at a wildcard, record the concrete key %2 (not *) in the path
      (if ks
        (reduce #(assoc % %2 (apply update-in-*-with-path (conj p %2) (get % %2) ks f args))
                m idx)
        (reduce #(assoc % %2 (apply f (conj p %2) (get % %2) args))
                m idx)))
    (if ks
      (assoc m k (apply update-in-*-with-path (conj p k) (get m k) ks f args))
      (assoc m k (apply f (conj p k) (get m k) args)))))
Now these two lines produce the same result:
(update-in-* sample [* * :geometry * :buffer] (fn [path val] (inc val)))
(update-each-in sample [* * :geometry * :buffer] (fn [buf path] (inc buf)))
Is this better than your original solution? I don't know. I like it because it is modeled after update-in, and other people have probably put more careful thought into update-in than I care to myself.
