Pseudocode: understanding constraints for let ... in expressions

I'm reading a book about constraint-based static code analysis (Principles of Program Analysis). In it I found this code snippet:
let f = fn x => x 7
    g = fn y => y
    h = fn z => 3
in f g + f (g h)
It will be interpreted as:
f g   +   f (g h)
 |           |
 v           v
g 7         f h
             |
             v
            h 7
I understand why f g will be g 7.
But why is f (g h) interpreted as f h? It should be g h 3, shouldn't it?

In the expression f (g h), g h is first reduced to h (since g is the identity function), and then f is applied to the result. That gives f h, which in turn reduces to h 7 = 3.
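To see the evaluation order concretely, here is a rough transcription of the book's ML-like snippet into Racket (this is only an illustration of the reduction steps, not the book's constraint analysis):

;; f applies its argument to 7, g is the identity, h always returns 3
(define f (lambda (x) (x 7)))
(define g (lambda (y) y))
(define h (lambda (z) 3))

(+ (f g) (f (g h)))
;; (f g)     => (g 7)  => 7
;; (f (g h)) => (f h)  => (h 7) => 3
;; so the whole expression evaluates to 10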


Commutativity of matrices in Isabelle/HOL

I'm trying to prove the following lemma:
lemma
  fixes A B C D :: "((real, 3) vec, 3) vec"
    and v m :: "(real, 3) vec"
  assumes "∃ A. m = D ** A ** B *v v"
  shows "∃ A. m = D ** B ** A *v v"
but I couldn't, which is strange since ∃ A. D ** A ** B *v v = D ** B ** A *v v can be proved straightforwardly. Probably it is because of the ∃ A. Could anyone explain why Isabelle/HOL can't prove this?
Thanks

Set integrability of a product of functions

I'm trying to prove these two lemmas:
lemma set_integral_mult:
  fixes f g :: "_ ⇒ _ :: {banach, second_countable_topology}"
  assumes "set_integrable M A (λx. f x)" "set_integrable M A (λx. g x)"
  shows "set_integrable M A (λx. f x * g x)"
and
lemma set_integral_mult1:
  fixes f :: "_ ⇒ _ :: {banach, second_countable_topology}"
  assumes "set_integrable M A (λx. f x)"
  shows "set_integrable M A (λx. f x * f x)"
but I couldn't. I've seen that it is proved for addition and subtraction:
lemma set_integral_add [simp, intro]:
  fixes f g :: "_ ⇒ _ :: {banach, second_countable_topology}"
  assumes "set_integrable M A f" "set_integrable M A g"
  shows "set_integrable M A (λx. f x + g x)"
    and "LINT x:A|M. f x + g x = (LINT x:A|M. f x) + (LINT x:A|M. g x)"
  using assms by (simp_all add: scaleR_add_right)

lemma set_integral_diff [simp, intro]:
  assumes "set_integrable M A f" "set_integrable M A g"
  shows "set_integrable M A (λx. f x - g x)"
    and "LINT x:A|M. f x - g x = (LINT x:A|M. f x) - (LINT x:A|M. g x)"
  using assms by (simp_all add: scaleR_diff_right)
and even for scalar multiplication, but not for the multiplication of two functions. Why is that?
The problem is that it is quite simply not true. The function f(x) = 1/sqrt(x) is integrable on the set (0,1], and its integral has the value 2. Its square f(x)² = 1/x, on the other hand, is not integrable on (0,1]: the integral diverges.
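Spelled out as a worked computation (standard improper integrals on (0,1]):

\int_0^1 \frac{dx}{\sqrt{x}} = \lim_{\varepsilon \to 0^+} \Big[ 2\sqrt{x} \Big]_\varepsilon^1 = 2,
\qquad
\int_0^1 \frac{dx}{x} = \lim_{\varepsilon \to 0^+} \Big[ \ln x \Big]_\varepsilon^1 = \lim_{\varepsilon \to 0^+} \big( -\ln \varepsilon \big) = \infty .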

How to "de-nest" some nested lists

I have a recursive function that gives me this answer:
'((()) (((((a c d f e d))))) (((((a c d e f d))))))
Besides the fact that I need to look through the function in order to clean it up, this raises the question: how do I clean this answer up? How do I "de-nest" those lists in order to return this:
'((a c d f e d) (a c d e f d))
I need some strategy or appropriate function in Racket or in Common Lisp.
Thanks in advance!
The behavior you want is a little bit unclear—if you just want to flatten a list, in Racket, you would just use the flatten function:
> (flatten '((()) (((((a c d f e d))))) (((((a c d e f d)))))))
'(a c d f e d a c d e f d)
However, it looks like you want to flatten each sublist, in which case you would want to just use map paired with flatten:
> (map flatten '((()) (((((a c d f e d))))) (((((a c d e f d)))))))
'(() (a c d f e d) (a c d e f d))
However, this still leaves the first empty list, which, based on your question, it looks like you would like to remove. In that case, I would just add an additional filter step after flattening:
> (filter (negate empty?) (map flatten '((()) (((((a c d f e d))))) (((((a c d e f d))))))))
'((a c d f e d) (a c d e f d))
You could wrap this into a simple function that has the behavior you want:
(define (flatten-non-empty-sublists lst)
  (filter (negate empty?) (map flatten lst)))
In Common Lisp, you can get a flatten function from the alexandria library, which is available through Quicklisp.

Isabelle: understanding the use of quantifiers

I have found that I can prove the following lemma, which seems false to me.
lemma assumes "∀a b. f a > f b ∧ a ≠ b"
  shows "∀a b. f b > f a"
  using assms by auto
How can the lemma above be true? Is Isabelle substituting values because I have used the ∀ quantifier? If so, how would I state that for all values of a and b, f(a) is greater than f(b)?
Why does it seem false? You are stating that for ANY a and b, f a > f b and a ≠ b. This means that if, say, a = 0 and b = 1, then f 0 > f 1; but when a = 1 and b = 0, it also means that f 1 > f 0.
Furthermore, you assume ∀a b. f a > f b ∧ a ≠ b is true. This means you ASSUME that for any a and b, f a > f b AND a is different from b. This is generally false, as you cannot have ∀a b. a ≠ b.
Maybe what you meant to say was ∀a b. (a ≠ b ==> f a > f b), i.e. for any a and b, if a ≠ b then f a > f b? Note that this still implies f b > f a, as per the example above, so it really isn't saying anything meaningful.
The lemma you state is trivially true. Almost a direct instance of "A ==> A". From your assumption it may trivially be concluded that ∀a b. f a > f b. Then by renaming bound variables appropriately we obtain ∀b a. f b > f a. Moreover all-quantifiers may be reordered to obtain ∀a b. f b > f a.
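In symbols, the chain of steps in that argument is (just a restatement of the reasoning above, not extra Isabelle machinery):

\forall a\,b.\ f\,a > f\,b \wedge a \neq b
  \;\Longrightarrow\; \forall a\,b.\ f\,a > f\,b   \quad\text{(drop the conjunct)}
  \;\Longrightarrow\; \forall b\,a.\ f\,b > f\,a   \quad\text{(rename bound variables)}
  \;\Longrightarrow\; \forall a\,b.\ f\,b > f\,a   \quad\text{(reorder the quantifiers)}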

Proving language properties

I am taking a course on the formal foundations of programming. One of the things we have covered is proving certain properties of languages. I have done most of the work, but I am stuck on these two questions, as I have no idea how to prove them.
They are as follows:
A ^ (B ^ C) = (A ^ B) ^ C (which I believe is the associative rule)
A ^ (B U C) = (A ^ B) U ( A ^ C) (Distribution rule)
In these examples I have used ^ to mean concatenation.
First:
A^B is the set of all words x such that there is a v in A and a w in B with x = vw.
Let's prove that A^(B^C) is included in (A^B)^C.
A^(B^C) is the set of all words x such that there is a v in A and a w in B^C with x = vw.
In turn, w = lm where l is in B and m is in C, so x = vlm.
Since x = (vl)m = v(lm), and vl is in A^B and m is in C, x is in (A^B)^C.
Hence A^(B^C) is included in (A^B)^C.
The same proof works for the reverse inclusion,
so A^(B^C) = (A^B)^C.
Second:
x is in B U C if and only if x is in B or x is in C.
First inclusion:
If x is in A ^ (B U C),
then x = vw where v is in A and w is in B or in C.
Then x is in A^B or in A^C,
so x is in (A ^ B) U (A ^ C).
Second inclusion:
If x is in (A ^ B) U (A ^ C),
then x = vw with v in A and w in B, or x = vw with v in A and w in C.
Since v is in A in both cases,
x = vw where v is in A and w is in B or in C,
so x is in A ^ (B U C).
Therefore A ^ (B U C) = (A ^ B) U (A ^ C).
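As a quick sanity check of both identities on small finite languages, here is a sketch in Racket. The helpers lang-concat, lang-union and lang-equal? are ad-hoc names defined here (not standard library functions), languages are represented as lists of strings, and checking a couple of examples is of course not a proof.

(define (lang-concat A B)
  ;; all words vw with v in A and w in B
  (remove-duplicates
   (for*/list ([v (in-list A)] [w (in-list B)])
     (string-append v w))))

(define (lang-union A B)
  (remove-duplicates (append A B)))

(define (lang-equal? A B)
  ;; equal as sets of words
  (and (null? (remove* B A)) (null? (remove* A B))))

(define A '("a" "ab"))
(define B '("b" ""))
(define C '("c"))

;; associativity: A^(B^C) = (A^B)^C
(lang-equal? (lang-concat A (lang-concat B C))
             (lang-concat (lang-concat A B) C))                ; => #t

;; distributivity: A^(B U C) = (A^B) U (A^C)
(lang-equal? (lang-concat A (lang-union B C))
             (lang-union (lang-concat A B) (lang-concat A C))) ; => #t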
