pyeda method "to abstract syntax tree" - constraints

What I'm really trying to do is convert boolean expressions into integer linear programming constraints. My plan is to first convert the expressions to CNF (using pyeda) and then form the constraints from the CNF (since that part is pretty straightforward). However, I'm having trouble understanding the abstract syntax tree that the .to_ast() function outputs. As an example, when running .to_ast() on the expression (~C1 | ~P1 | ~O1) & (~C1 | ~P1 | ~O2) the output is
('and', ('or', ('lit', -1), ('lit', -2), ('lit', -3)), ('or', ('lit', -1), ('lit', -2), ('lit', -4)))
It is pretty obvious that the minus sign is the negation and that the integer represents one of the variables. Does anyone know if there is a mapping from the integers to the variables? Long description for a short question...

Yes, the integer you are looking at is the 'uniqid' attribute on literals.
>>> from pyeda.inter import *
>>> C1, P1, O1, O2 = map(exprvar, "C1 P1 O1 O2".split())
>>> f = (~C1 | ~P1 | ~O1) & (~C1 | ~P1 | ~O2)
>>> f.to_ast()
('and',
('or', ('or', ('lit', -1), ('lit', -2)), ('lit', -3)),
('or', ('or', ('lit', -1), ('lit', -2)), ('lit', -4)))
>>> C1.uniqid, P1.uniqid, O1.uniqid, O2.uniqid
(1, 2, 3, 4)
>>> (~C1).uniqid, (~P1).uniqid, (~O1).uniqid, (~O2).uniqid
(-1, -2, -3, -4)
You can access the internal mapping directly if you want, but it requires some special knowledge:
>>> from pyeda.boolalg.expr import _LITS
>>> _LITS
{1: C1, 2: P1, 3: O1, 4: O2, -1: ~C1, -2: ~P1, -3: ~O1, -4: ~O2}
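If the end goal is ILP constraints from the CNF, here is a rough sketch of how the AST plus the uniqid mapping could be turned into one ">= 1" constraint per clause. It leans on the internal _LITS dict shown above, so treat it as illustrative rather than a stable API; clause_to_constraint and cnf_ast_to_constraints are just names invented for this example.
from pyeda.inter import exprvar
from pyeda.boolalg.expr import _LITS  # internal mapping from uniqid to literal

def clause_to_constraint(node):
    # Flatten one CNF clause into the names of its positive and negative
    # literals.  With 0/1 ILP variables, the clause is satisfied iff
    #     sum(x for positive vars) + sum(1 - x for negative vars) >= 1
    pos, neg = [], []
    stack = [node]
    while stack:
        n = stack.pop()
        if n[0] == 'or':
            stack.extend(n[1:])
        elif n[0] == 'lit':
            name = str(_LITS[abs(n[1])])
            (pos if n[1] > 0 else neg).append(name)
        else:
            raise ValueError("unexpected node in CNF clause: %r" % (n,))
    return pos, neg

def cnf_ast_to_constraints(ast):
    # One constraint per clause of the top-level 'and'.
    clauses = ast[1:] if ast[0] == 'and' else [ast]
    return [clause_to_constraint(c) for c in clauses]

C1, P1, O1, O2 = map(exprvar, "C1 P1 O1 O2".split())
f = (~C1 | ~P1 | ~O1) & (~C1 | ~P1 | ~O2)
for pos, neg in cnf_ast_to_constraints(f.to_cnf().to_ast()):
    # e.g. the first clause prints as: (1 - C1) + (1 - P1) + (1 - O1) >= 1
    print(" + ".join(pos + ["(1 - %s)" % v for v in neg]), ">= 1")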

Related

Mapping array with Dataframe in Julia

I'd like to write code to convert an array.
The base array is a kind of permutation array whose components are themselves arrays.
el = {[0, 0, 0], [1, 0, 0], [0, 0, 1] ... [4, 4, 3], [4, 3, 4], [3, 4, 4], [4, 4, 4]}
125-element Array{Array{Int64,1},1}:
em = {"[0, 0, 0]", "[1, 0, 0]", "[0, 0, 1]" ... "[4, 4, 3]", "[4, 3, 4]", "[3, 4, 4]", "[4, 4, 4]"}
125-element Array{String,1}:
It represents how many steps forward each group has taken.
And there is a sort of dataframe, a 3×5 Named Array{String,2}; I used NamedArrays to label its rows.
OrderNA =
|row name |Start | 1st | 2nd | 3rd |Finish|
|:-------- |:----:|:----:|:----:|:----:| ----:|
| G1 | "Stt"| "W1" | "W2" | "W3" | "Fin"|
| G2 | "Stt"| "W2" | "W3" | "W1" | "Fin"|
| G3 | "Stt"| "W3" | "W1" | "W2" | "Fin"|
As you can see, it's an order table.
As mentioned, el's components represent the locations of the groups, and they should be converted using the order table.
e.g. [0,0,0] is the location of [G1, G2, G3] => ["Stt","Stt","Stt"]
[1,0,0] => ["W1", "Stt", "Stt"]
[1,2,4] => ["W1", "W3", "Fin" ]
So I've struggled to convert it as described below, but I failed.
function trans(em)
    for i in 1:length(em)
        for j in 1:length(em[i])
            @show em[i][j]
            if em[i][3j-1] == '0'
                replace(em[i][3j-1]) = "Stt"
            elseif em[i][3j-1] == '1'
                replace(em[i][j]) = OrderNA[j, 1]
            elseif em[i][3j-1] == '2'
                replace(em[i][3j-1]) = OrderNA[j, 2]
            elseif em[i][3j-1] == '3'
                replace(em[i][3j-1]) = OrderNA[j, 3]
            else em[i][3j-1] == '4'
                replace(em[i][3j-1]) = "Fin"
            end
        end
    end
end
syntax: ""em[i][((3 * j) - 1)]" is not a valid function argument name around In[90]:9
I can't figure out what is wrong. How can I solve this? Thank you in advance!
You could do:
el_new = []
for (a, b, c) in el
    push!(el_new, [OrderNA[1, a+1], OrderNA[2, b+1], OrderNA[3, c+1]])
end
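If el always holds 3-element vectors, the same mapping can also be written as a comprehension (just a compact sketch of the loop above, using the same OrderNA indexing):
# column v[g] + 1 is used because the step counts start at 0 while Julia arrays are 1-based
el_new = [[OrderNA[g, v[g] + 1] for g in 1:3] for v in el]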

Julia: Referencing struct composed of other struct definitions

I have a user-defined structure:
using Parameters
@with_kw struct TypeSingle
    id::Int
    x::Union{Int32, Missing} = missing
    flag::Bool = true
end

@with_kw struct TypeAll
    A = TypeSingle(id=01, x=0.1, flag=false)
    B = TypeSingle(id=02)
    # this continues on until
    Z = TypeSingle(id=26, x=1.3)
end
I have some questions regarding operations that I would like to perform with TypeAll:
I'd like to refer to each entry, A.id, B.id, etc., in the composite TypeAll in a loop that runs from the lowest id to the highest.
Is there a way to extract the size of this type, i.e. how many A, B, ..., Z there are in total?
Would this be better suited to a vector of TypeSingle? In my actual code TypeAll isn't only composed of TypeSingle, but also includes TypeB, TypeC, etc.
As long as your TypeAll is not going to be mutable, it looks a lot like a named tuple (NamedTuple) so why not use one instead of a TypeAll? e.g.
julia> t = (A = (01, 0.1, false), B = (02, missing, true), C = (26, 1.3, true))
(A = (1, 0.1, false), B = (2, missing, true), C = (26, 1.3, true))
julia> t[1]
(1, 0.1, false)
julia> length(t)
3
julia> sort(collect(t), lt = (x, y) -> x[1] < y[1])
3-element Vector{Tuple{Int64, Any, Bool}}:
(1, 0.1, 0)
(2, missing, 1)
(26, 1.3, 1)
If you want to have TypeAll mutable, I would use a vector of TypeSingle, instead of a named tuple.
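If TypeAll does need to be mutable or to hold many entries, a minimal sketch of that vector-of-TypeSingle approach could look like this (note that x is declared Union{Float64, Missing} here so that values such as 0.1 fit, which differs from the Int32 field in the question):
using Parameters

@with_kw struct TypeSingle
    id::Int
    x::Union{Float64, Missing} = missing
    flag::Bool = true
end

# A plain vector is iterable and gives you length() for free.
entries = [TypeSingle(id = 1, x = 0.1, flag = false),
           TypeSingle(id = 2),
           TypeSingle(id = 26, x = 1.3)]

length(entries)                          # how many entries there are in total

for t in sort(entries, by = t -> t.id)   # loop from lowest id to highest
    println(t.id, " ", t.x, " ", t.flag)
end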

SQLite how to convert BLOB byte to INT with built-in functions?

I have an SQLite database that contains a column with BLOB data.
These BLOB values are 4 bytes wide. I want to split the 4 bytes apart and convert each part to an integer value so I can calculate with it.
I found out that I can use SUBSTR(val, start, length) to take the BLOB value apart; the result is still of type BLOB.
But how can I convert the BLOB bytes to an integer value?
Is there a built-in function that can convert BLOB byte values to an integer?
Or is there a way to convert a hex string value into an integer value, so I could play with HEX(val) or QUOTE(val)?
CREATE TEMP TABLE IF NOT EXISTS test AS SELECT x'cafe1a7e' AS val;
SELECT (val)
, TYPEOF(val)
, HEX(val)
, QUOTE(val)
, TYPEOF(HEX(val))
, TYPEOF(QUOTE(val))
, CAST(val AS INT)
, CAST(HEX(val) AS INT)
, CAST(QUOTE(val) AS INT)
, SUBSTR(val, 1, 1)
, TYPEOF(SUBSTR(val, 1, 1))
, HEX(SUBSTR(val, 1, 1))
, HEX(SUBSTR(val, 2, 1))
, HEX(SUBSTR(val, 3, 2))
, val + val
, SUBSTR(val, 1, 1) + 1
, CAST(SUBSTR(val, 1, 1) AS INT)
FROM test;
DROP TABLE test;
You can convert one hex digit at a time using instr:
SELECT hex(b), n, printf("%04X", n)
FROM (SELECT b,
(instr("123456789ABCDEF", substr(hex(b), -1, 1)) << 0) |
(instr("123456789ABCDEF", substr(hex(b), -2, 1)) << 4) |
(instr("123456789ABCDEF", substr(hex(b), -3, 1)) << 8) |
(instr("123456789ABCDEF", substr(hex(b), -4, 1)) << 12) |
(instr("123456789ABCDEF", substr(hex(b), -5, 1)) << 16) |
(instr("123456789ABCDEF", substr(hex(b), -6, 1)) << 20) |
(instr("123456789ABCDEF", substr(hex(b), -7, 1)) << 24) |
(instr("123456789ABCDEF", substr(hex(b), -8, 1)) << 28) AS n
FROM (SELECT randomblob(4) AS b))
Example output:
D91F8E91|3642723985|D91F8E91
(Simplification of idea from [1].)
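For instance, against the test table from the question, the same expression should decode x'cafe1a7e' as a big-endian unsigned integer (a quick sketch built on the answer above):
SELECT (instr('123456789ABCDEF', substr(hex(val), -1, 1)) << 0) |
       (instr('123456789ABCDEF', substr(hex(val), -2, 1)) << 4) |
       (instr('123456789ABCDEF', substr(hex(val), -3, 1)) << 8) |
       (instr('123456789ABCDEF', substr(hex(val), -4, 1)) << 12) |
       (instr('123456789ABCDEF', substr(hex(val), -5, 1)) << 16) |
       (instr('123456789ABCDEF', substr(hex(val), -6, 1)) << 20) |
       (instr('123456789ABCDEF', substr(hex(val), -7, 1)) << 24) |
       (instr('123456789ABCDEF', substr(hex(val), -8, 1)) << 28) AS n
FROM test;
-- expected: 3405650558, i.e. 0xCAFE1A7E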
There is no built-in function that I know of, so this is how I do it, if you know how many bytes you want to convert:
-- creates table h2i with numbers 0 to 255 in hex and int
CREATE TEMP TABLE bits (bit INTEGER PRIMARY KEY);
INSERT INTO bits VALUES (0);
INSERT INTO bits VALUES (1);
CREATE TEMP TABLE h2i (h TEXT, i INT);
INSERT INTO h2i (h, i)
SELECT printf('%02X', num), num
FROM (SELECT b7.bit * 128 + b6.bit * 64 + b5.bit * 32 + b4.bit * 16 + b3.bit * 8 + b2.bit * 4 + b1.bit * 2 + b0.bit AS num
      FROM bits b7, bits b6, bits b5, bits b4, bits b3, bits b2, bits b1, bits b0) AS nums;
SELECT
HEX(SUBSTR(val, 1, 1)),h2i0.i
,HEX(SUBSTR(val, 2, 1)),h2i1.i
,HEX(SUBSTR(val, 3, 2)),h2i2.i*256+h2i3.i
,HEX(SUBSTR(val, 1, 4)),h2i0.i*16777216+h2i1.i*65536+h2i2.i*256+h2i3.i
FROM test
JOIN h2i h2i0 ON h2i0.h=HEX(SUBSTR(val, 1, 1))
JOIN h2i h2i1 ON h2i1.h=HEX(SUBSTR(val, 2, 1))
JOIN h2i h2i2 ON h2i2.h=HEX(SUBSTR(val, 3, 1))
JOIN h2i h2i3 ON h2i3.h=HEX(SUBSTR(val, 4, 1))
;
@rayzinnz, thank you for the hint.
In the meantime I gave up.
I puzzled together a kind of solution, but I never got it to work with the initial x'cafe1a7e' value set from outside the WITH RECURSIVE construction.
WITH RECURSIVE fx(val_hex, val_int, iter) AS (
VALUES(HEX(x'cafe1a7e'), 0, 0)
UNION ALL
SELECT
SUBSTR(val_hex, 1, LENGTH(val_hex) - 1),
val_int + (
CASE SUBSTR(val_hex, -1)
WHEN '0' THEN 0
WHEN '1' THEN 1
WHEN '2' THEN 2
WHEN '3' THEN 3
WHEN '4' THEN 4
WHEN '5' THEN 5
WHEN '6' THEN 6
WHEN '7' THEN 7
WHEN '8' THEN 8
WHEN '9' THEN 9
WHEN 'A' THEN 10
WHEN 'B' THEN 11
WHEN 'C' THEN 12
WHEN 'D' THEN 13
WHEN 'E' THEN 14
WHEN 'F' THEN 15
ELSE 0
END << (iter * 4)
),
iter + 1
FROM fx
WHERE val_hex != ''
LIMIT 9
)
--SELECT * FROM fx
SELECT val_int FROM fx WHERE val_hex == ''
;
The BLOB value there is hardcoded.
Maybe you can find a way.

Does slice or index of chainer.Variable to get item in chainer has backward ability?

Does a chainer.Variable in the following code still hold the graph and support backward (gradient flow) after a slice (a[0, 1]) or an index (a[0])?
>>> a = chainer.Variable(np.array([[1,2,3],[10,11,12]]))
>>> a
variable([[ 1, 2, 3],
[10, 11, 12]])
>>> a[0]
variable([1, 2, 3])
>>> a[0, 1]
variable([1])
Yes. Indexing of chainer.Variable supports backprop.
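A minimal way to check this yourself (a sketch; note that the data must have a float dtype for gradients, unlike the int array in the question):
import numpy as np
import chainer

a = chainer.Variable(np.array([[1., 2., 3.], [10., 11., 12.]], dtype=np.float32))
b = a[0, 1]                  # indexing creates a new Variable still connected to the graph
loss = chainer.functions.sum(b)
loss.backward()
print(a.grad)                # gradient of 1 at position (0, 1), zeros elsewhere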

List of Nth prime numbers Prolog

I'm trying to learn Prolog, and now I'm trying to print the list of prime numbers up to N:
primes(N, N).
primes(N, F) :-
    prime(F),
    write(F), nl,
    NewF is F + 1,
    primes(N, NewF).

primes(N) :-
    primes(N, 2).
prime/1 checks whether the given number is prime.
The output of primes(10) is 2, 3 when it should be 2, 3, 5, 7, because the NewF after 3 (which is 4) is not prime, so neither write(F) nor the recursive call is executed. I wondered how I could fix this so that F is not written when it is not prime, but the rest is still executed. Thanks in advance!
You could simply add the clause:
primes(N, F) :-
    \+ prime(F), nl,
    NewF is F + 1,
    primes(N, NewF).
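Alternatively, the test and the recursion can be combined in a single clause with an if-then-else (just a sketch, assuming prime/1 behaves as in the question):
primes(N, N).
primes(N, F) :-
    F < N,
    ( prime(F) -> write(F), nl ; true ),
    NewF is F + 1,
    primes(N, NewF).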
I know that this answer doesn't exactly respond to the OP's question (my getPrimeList(N, L) creates a list L with all prime numbers from zero to N; the OP asks for the first N prime numbers) but... just for fun... I've tried to implement the Sieve of Eratosthenes.
getListDisp(Top, Val, []) :-
    Val > Top.
getListDisp(Top, V0, [V0 | Tail]) :-
    V0 =< Top,
    V1 is V0+2,
    getListDisp(Top, V1, Tail).

reduceList(_, _, [], []).
reduceList(Step, Exclude, [Exclude | Ti], Lo) :-
    NextE is Exclude+Step,
    reduceList(Step, NextE, Ti, Lo).
reduceList(Step, Exclude, [H | Ti], [H | To]) :-
    Exclude > H,
    reduceList(Step, Exclude, Ti, To).
reduceList(Step, Exclude, [H | Ti], [H | To]) :-
    Exclude < H,
    NextE is Exclude+Step,
    reduceList(Step, NextE, Ti, To).

eratSieve([], []).
eratSieve([Prime | Ti], [Prime | To]) :-
    Step is 2*Prime,
    Exclude is Prime+Step,
    reduceList(Step, Exclude, Ti, Lo),
    eratSieve(Lo, To).

getPrimeList(Top, []) :-
    Top < 2.
getPrimeList(Top, [2 | L]) :-
    Top >= 2,
    getListDisp(Top, 3, Ld),
    eratSieve(Ld, L).
I repeat: not really an answer; just for fun (like the OP, I'm trying to learn Prolog).
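For reference, a quick check of the sieve (assuming the code above is loaded unchanged) should look something like:
?- getPrimeList(20, L).
L = [2, 3, 5, 7, 11, 13, 17, 19]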
