I was looking for a solution to the question below; any help provided will be highly appreciated.
Generate a gray-scale image of size 256 x 256 in an array by using random integers in [0, 15]. Use 4 x 4 dither matrix to display the dithered image using binary intensity (i.e. Zero intensity or maximum intensity). Display the original image (the one generated using random numbers) and the dithered image.
The following Forth program for Gforth creates a random grayscale image with 15 levels, dithers it with a 4×4 index matrix into a bilevel image, copies both side by side into a 512×256 byte array, and writes that as a PGM image to standard output so it can be piped into display from the ImageMagick package:
include random.fs
256 constant image-size
15 constant max-gray-value
create grayscale-image image-size dup * chars allot
create result-image image-size dup * 2* chars allot
create index-matrix 0 , 12 , 3 , 15 ,
8 , 4 , 11 , 7 ,
2 , 14 , 1 , 13 ,
10 , 6 , 9 , 5 ,
: randomly-fill-grayscale-image ( -- )
image-size dup * 0
?do
max-gray-value random grayscale-image i chars + c!
loop
;
: copy-grayscale-to-result-image { offset -- }
image-size 0
?do
grayscale-image i image-size * chars +
result-image i image-size 2* * offset + chars +
image-size
move
loop
;
: dither-grayscale-image ( -- )
image-size 0
?do
image-size 0
?do
grayscale-image j image-size * i + chars + dup c@
index-matrix j 4 mod 4 * i 4 mod + cells + @
< if 0 else max-gray-value then
swap c!
loop
loop
;
: print-pgm ( -- )
." P5" cr
image-size dup 2* 2dup . . max-gray-value . cr
* result-image swap type
;
: main ( -- )
utime drop seed !
randomly-fill-grayscale-image
0 copy-grayscale-to-result-image
dither-grayscale-image
image-size copy-grayscale-to-result-image
print-pgm
;
main bye
This can be called as:
$ gforth dither.fs | display
If display is not available the image can be redirected into a file and viewed with another image viewer or editor:
$ gforth dither.fs > dither_image.pgm
The resulting image shows the random grayscale noise on the left and the dithered bilevel version on the right.
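For readers who don't know Forth, here is a rough Python sketch of the same ordered-dithering idea; it is not a translation of the program above, and the size, names and ASCII preview are only illustrative.

import random

SIZE = 16          # small for demonstration; the Forth program uses 256
MAX_GRAY = 15

# the same 4x4 index (Bayer) matrix as in the Forth program
INDEX_MATRIX = [
    [ 0, 12,  3, 15],
    [ 8,  4, 11,  7],
    [ 2, 14,  1, 13],
    [10,  6,  9,  5],
]

# random grayscale image; like the Forth code, values are in [0, MAX_GRAY - 1]
image = [[random.randrange(MAX_GRAY) for _ in range(SIZE)] for _ in range(SIZE)]

# ordered dithering: a pixel goes black if it is below the threshold taken
# from the tiled 4x4 matrix, otherwise it gets full intensity
dithered = [[0 if image[y][x] < INDEX_MATRIX[y % 4][x % 4] else MAX_GRAY
             for x in range(SIZE)]
            for y in range(SIZE)]

# crude ASCII preview: '#' for full intensity, '.' for black
for row in dithered:
    print("".join("#" if v else "." for v in row))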
I'm having trouble converting an index number into its respective column/row. The table goes like this
The graph scales in each dimension. Each square is surrounded by one blank space. I need to turn the number of the square into the x/y coordinates
I've figured out the column, but the row is still evading me.
This is what I have now:
#define IDtoX(n, w) ((2*(n%w))+1)
#define IDtoY(n, h) ((2*(n/h))+1)
IDtoX works as intended. IDtoY does not.
Outputs should be as follows:
grid of width 7 and height 5:
n y
0 3
1 3
2 3
3 1
4 1
5 1
grid of width 9 and height 7:
n y
0 5
1 5
2 5
3 5
4 3
5 3
6 3
7 3
8 1
9 1
10 1
11 1
The main reason your function IDtoY(n, h) fails is that the result also depends on the value of w, so you must change your signature to something like IDtoY(n, w, h). To see this, try drawing more arrays with the same h but varying w, and you will see that the y for a given n changes as well. You were fooled by your successful IDtoX, which indeed depends only on n and w, not on h. If your ids began at zero at the top row they would not depend on w, but numbered from the bottom as you drew the array, they do.
I found multiple formulae that work, but none of them are pretty. Here are two; if you do not like them, you could find some equivalent formulae.
#define IDtoY(n, w, h) (h - 2 - 2 * n // (w - 1) * 2)
or perhaps
#define IDtoY(n, w, h) (h - 2 - n // (w // 2) * 2)
where the // operator denotes integer division. You do not state which computing environment you are using; in C the plain / operator already truncates for ints (and note that // would start a comment there), so the simple / operator may work for you.
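As a quick check (this is just a verification sketch, not the macros above), both formulas can be tested in Python, where // really is integer division, against the sample outputs from the question:

# candidate formulas from the answer, written as Python functions
def id_to_y_a(n, w, h):
    return h - 2 - 2 * n // (w - 1) * 2

def id_to_y_b(n, w, h):
    return h - 2 - n // (w // 2) * 2

# expected y values listed in the question
assert [id_to_y_a(n, 7, 5) for n in range(6)] == [3, 3, 3, 1, 1, 1]
assert [id_to_y_b(n, 7, 5) for n in range(6)] == [3, 3, 3, 1, 1, 1]
assert [id_to_y_a(n, 9, 7) for n in range(12)] == [5, 5, 5, 5, 3, 3, 3, 3, 1, 1, 1, 1]
assert [id_to_y_b(n, 9, 7) for n in range(12)] == [5, 5, 5, 5, 3, 3, 3, 3, 1, 1, 1, 1]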
I am trying to fit 3 numbers inside 1 number. But the numbers will only be between 0 and 11, so their base is 12. For example, I have the numbers 7, 5, 2. I came up with something like this:
Three numbers into One number :
7x12=84
84x5=420
420+2=422
Now getting back Three numbers from One number :
422 MOD 12 = 2 (the third number)
422 - 2 = 420
420 / 12 = 35
And I understand that 35 is the product of the first and the second number (i.e. 7 and 5).
And now I can't get the 7 and 5 back; does anyone know how I could?
(I started typing this answer before the other one got posted, but this one is more specific to Arduino than the other one, so I'm leaving it.)
The code
You can use bit shifting to get multiple small numbers into one big number, in code it would look like this:
int a, b, c;
//putting them together
int big = (a << 8) + (b << 4) + c;
//separating them again
a = (big >> 8) & 15;
b = (big >> 4) & 15;
c = big & 15;
This code only works when a, b and c are all in the range [0, 15], which appears to be enough for your case.
How it works
The >> and << operators are the bitshift operators; in short, a << n shifts every bit in a by n places to the left, which is equivalent to multiplying by 2^n. Similarly, a >> n shifts to the right. An example:
11 << 3 == 88 //0000 1011 -> 0101 1000
The & operator performs a bitwise and on the two operands:
6 & 5 == 4 // 0110
// & 0101
//-> 0100
These two operators are combined to "pack" and "unpack" the three numbers. For the packing every small number is shifted a bit to the left and they are all added together. This is how the bits of big now look (there are 16 of them because ints in Arduino are 16 bits wide):
0000aaaabbbbcccc
When unpacking, the bits are shifted to the right again, and they are bitwise anded together with 15 to filter out any excess bits. This is what that last operation looks like to get b out again:
00000000aaaabbbb //big shifted 4 bits to the right
& 0000000000001111 //anded together with 15
-> 000000000000bbbb //gives the original number b
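As a side note (this is not Arduino code, just a quick way to try the same packing and unpacking in Python; the values 7, 5 and 2 are sample data):

a, b, c = 7, 5, 2                 # each value must fit in 4 bits, i.e. 0..15

big = (a << 8) + (b << 4) + c     # bit layout: 0000aaaabbbbcccc

print(big)                        # 1874
print((big >> 8) & 15)            # 7
print((big >> 4) & 15)            # 5
print(big & 15)                   # 2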
Everything works exactly like in base 10 (or 16). Here is your corrected example.
Three numbers into One number :
7*12^2=1008
5*12^1=60
2*12^0=2
1008+60+2=1070
Now getting back Three numbers from One number :
1070 MOD 12 = 2 (the third number)
1070/12 = 89 (integer division) => 89 MOD 12 = 5
89 / 12 = 7
Note also that the maximum value will be 11*12*12+11*12+11=1727.
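For illustration only (not part of this answer's code), the same base-12 arithmetic in a few lines of Python:

# pack 7, 5, 2 in base 12, matching the worked example above
n = 7 * 12**2 + 5 * 12 + 2        # 1070

# unpack with repeated modulo / integer division
third = n % 12                    # 2
n = n // 12                       # 89
second = n % 12                   # 5
first = n // 12                   # 7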
If this is really programming related, you will be using 16 bits instead of 3*8 bits, so you save one byte. An easier method, not using base 12, would be to fit each number into half a byte (better code efficiency and same transmission length):
7<<(4+4) + 5<<4 + 2 = 1874
1874 & 0x000F = 2
1874>>4 & 0x000F = 5
1874>>8 & 0x0F = 7
This is preferable because MOD 12 and division by 12 are much less efficient than working with powers of 2.
You can use the principle of positional notation to convert to and from any base.
Treat your numbers (n0, n1, ..., nm) as the digits of a big number in the base B of your choosing, so the new number is
N = n0*B^0 + n1*B^1 + ... + nm*B^m
Reversing the process is also simple: while your number is greater than 0, take its modulo with respect to the base to get the current digit, then subtract that digit and divide by the base; repeat until finished, saving each digit along the way.
digit_list = []
while N > 0 do:
    d = N mod B
    N = (N - d) / B
    digit_list.append( d )
Then, if N is n0*B^0 + n1*B^1 + ... + nm*B^m, doing N mod B gives you n0; subtracting it leaves n1*B^1 + ... + nm*B^m, and dividing by B reduces the exponent of every B, giving the new N = n1*B^0 + ... + nm*B^(m-1). Repeating that gives you all the digits you started with.
Here is a working example in Python:
def compact_num( num_list, base=12 ):
    return sum( n*pow(base,i) for i,n in enumerate(num_list) )

def decompact_num( n, base=12):
    if n==0:
        return [0]
    result = []
    while n:
        n,d = divmod(n,base)
        result.append(d)
    return result
Example:
>>> compact_num([2,5,7])
1070
>>> decompact_num(1070)
[2, 5, 7]
>>> compact_num([10,2],16)
42
>>> decompact_num(42,16)
[10, 2]
>>>
I want to write the Taylor series for the cosine function in Maple. Here's my code:
better_cos := proc (x) options operator, arrow; sum((-1)^n*x^(2*n)/factorial(2*n), n = 0 .. 20) end proc;
better_cos(0) returns 0 instead of 1 (cos(0) == 1). It's probably because x^(2*n) always returns 0 instead of 1. For example:
fun_sum := proc (x) options operator, arrow; sum(x^(2*n), n = 0 .. 0) end proc
returns 0 for x == 0.
It's weird because 0^0 returns 1. Do you have any idea how I can correctly implement the Taylor series for cosine?
You should be able to get what you want by using add instead of sum in your better_cos operator.
Using add is often more appropriate for adding up a finite number of terms of a numeric sequence, and also note that add has Maple's so-called special evaluation rules.
If you intend to take the sum of a fixed number of terms (ie. n from 0 to 20) then you should not write a procedure that computes the factorials for each input argument (ie. for each value of x). Instead, produce the truncated series just once, and then use unapply to produce an operator. This approach also happens to deal with your original problem, since the x^0 term becomes 1 because the symbol x is used.
You could also rearrange the polynomial (truncated series) so that it is in Horner form, to try and minimize arithmetic steps when evaluating subsequently at various numeric values of x.
For example, using 5 terms for brevity instead of 20 as you had it,
convert(add((-1)^n*x^(2*n)/factorial(2*n), n = 0 .. 5),horner);
1 + (-1/2 + (1/24 + (-1/720 + (1/40320 - 1/3628800*x^2)*x^2)*x^2)*x^2)*x^2
bc := unapply(%,x):
You can now apply the procedure bc as you wish, either with symbolic or numeric arguments.
expand(bc(x));
1 - 1/2*x^2 + 1/24*x^4 - 1/720*x^6 + 1/40320*x^8 - 1/3628800*x^10
bc(0);
1
bc(1.2);
0.3623577360
If you prefer to have your procedure better_cos take a pair of arguments, so that the number of terms is variable, then you could still consider using add to deal with your original problem. eg,
bc := (x,N)->add((-1)^n*x^(2*n)/(2*n)!, n = 0 .. N):
I suppose that this is a homework assignment, and that you realize that you could also use the existing system commands taylor or series to get the same results, ie.
convert(series(cos(x),x=0,10),polynom);
1 - 1/2*x^2 + 1/24*x^4 - 1/720*x^6 + 1/40320*x^8
convert(taylor(cos(x),x=0,10),polynom);
1 - 1/2*x^2 + 1/24*x^4 - 1/720*x^6 + 1/40320*x^8
Here's the Taylor series definition: cos(x) = sum over n >= 0 of (-1)^n * x^(2*n) / (2*n)!
Don't start the loop with the n = 0 term; initialize the sum with 1 and start the loop at the x^2 term.
Computing the factorial from scratch for every term is inefficient, too; each term can be obtained from the previous one.
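To make that hint concrete, here is a rough sketch in Python rather than Maple (the function name and the term count are only illustrative): each term is the previous one multiplied by -x^2 / ((2n-1)(2n)), so neither a factorial nor 0^0 is ever evaluated.

def cos_series(x, terms=20):
    total = 1.0          # the n = 0 term of sum (-1)^n * x^(2n) / (2n)!
    term = 1.0
    for n in range(1, terms):
        term *= -x * x / ((2 * n - 1) * (2 * n))   # next term from the previous one
        total += term
    return total

print(cos_series(0))      # 1.0
print(cos_series(1.2))    # ~0.3623577545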
I was having trouble understanding linear recursion, so I thought I'd practice on sorting algorithms, and quick sort was where I had trouble with the recursion. So I decided to work with a simpler example, a binary sum that I found online. I understand that recursive calls, like all function calls, are executed one at a time and not at the same time (which is what multi-threading does, but that is not my concern when tracing). So I need to execute all of recursive call A BEFORE recursive call B, but I get lost in the mix. Would anyone mind tracing it completely? The example I have used is of size n = 9 where the elements are all 1's to keep it simple.
/**
* Sums an integer array using binary recursion.
* @param arr an integer array
* @param i starting index
* @param n size of the array
* floor(x) is largest integer <= x
* ceil(x) is smallest integer >= x
*/
public int binarySum(int arr[], int i, int n) {
    if (n == 1)
        return arr[i];
    return binarySum(arr, i, ceil(n/2)) + binarySum(arr, i + ceil(n/2), floor(n/2));
}
What I personally do is start with an array of size 2. There are two elements.
return binarySum(arr, i, ceil(n/2)) + binarySum(arr, i + ceil(n/2), floor(n/2)) will do nothing but split the array into two and add the two elements (case 1).
Now, this trivial starting point will be the lowest level of the recursion for the higher cases.
Now increase n to 4. The array is split into two: indices 0-2 and 2-4.
Now the 2 elements at indices 0 to 2 are added as in case 1, and so are the 2 elements at indices 2-4.
Now these two results are added in this case.
Now we are able to make more sense of the recursion technique, some times understanding bottom up is easier as in this case!
Now to your question consider an array of 9 elements : 1 2 3 4 5 6 7 8 9
n = 9 => ceil(9/2) = 5, floor(9/2) = 4
Now first call (top call) of binarySum(array, 0, 9)
Now n (the size) is not 1,
hence the recursive call....
return binarySum(array, 0, 5) + binarySum(array, 5, 4)
Now the first binarySum(array, 0, 5) operates on the first 5 elements of the array and the second binarySum(array, 5, 4) operates on the last 4 elements of the array,
hence the array division can be seen like this: 1 2 3 4 5 | 6 7 8 9
The first function finds the sum of the elements: 1 2 3 4 5
and the second function finds the sum of the elements 6 7 8 9
and these two are added together and returned as the answer to the top call!
Now how does this 1+2+3+4+5 and 6+7+8+9 work? We recurse again....
so the tracing will look like
1 2 3 4 5 | 6 7 8 9
1 2 3 | 4 5 6 7 | 8 9
1 2 | 3 4 | 5 6 | 7 8 | 9
[1 | 2]___[3]___[4 5]___[6 7]___[8 9]
Till this point we are fine... we are just calling the functions recursively.
But now, we hit the base case!
if (n == 1)
return arr[i];
[1 + 2]____[3]____[4 + 5]____[6 + 7]____[8 + 9]
[3 + 3] ____ [9] ____[13 + 17]
[6 + 9] [30]
[15 + 30]
[45]
which is the sum.
So for understanding see what is done to the major instance of the problem and you can be sure that the same thing is going to happen to the minor instance of the problem.
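If it helps, here is a small Python sketch (not the textbook's code; the function name is mine) that prints every call, so the whole trace can be seen for any input:

from math import ceil, floor

def binary_sum(arr, i, n, depth=0):
    # print the slice this call is responsible for, indented by recursion depth
    print("  " * depth + f"binarySum(i={i}, n={n}) on {arr[i:i + n]}")
    if n == 1:
        return arr[i]
    half = ceil(n / 2)
    return (binary_sum(arr, i, half, depth + 1)
            + binary_sum(arr, i + half, floor(n / 2), depth + 1))

print(binary_sum([1, 2, 3, 4, 5, 6, 7, 8, 9], 0, 9))   # prints the call tree, then 45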
This example explains binary sum with a trace, in Java.
The trace is based on array indices, where 0 is your starting index and 8 is the index of the last element of the array.
int sum(int[] arr, int p, int k) {
    if (p == k)
        return arr[k];
    int s = (p + k) / 2;
    return sum(arr, p, s) + sum(arr, s + 1, k);
}
I'm trying to understand the binary operators in C# or in general, in particular ^ - exclusive or.
For example:
Given an array of positive integers. All numbers occur an even number of times except one number, which occurs an odd number of times. Find the number in O(n) time and constant space.
This can be done with ^ as follows: Do bitwise XOR of all the elements. Finally we get the number which has odd occurrences.
How does it work?
When I do:
int res = 2 ^ 3;   // res == 1
int res = 2 ^ 5;   // res == 7
int res = 2 ^ 10;  // res == 8
What's actually happening? What are the other bit magics? Any reference I can look up and learn more about them?
I know this is a rather old post, but I wanted to simplify the answer since I stumbled upon it while looking for something else.
XOR (eXclusive OR / either or) can be translated simply as toggle on/off: it will either exclude (if present) or include (if absent) the specified bits.
Using 4 bits (1111) we get 16 possible results from 0-15:
decimal | binary | bits (expanded)
0 | 0000 | 0
1 | 0001 | 1
2 | 0010 | 2
3 | 0011 | (1+2)
4 | 0100 | 4
5 | 0101 | (1+4)
6 | 0110 | (2+4)
7 | 0111 | (1+2+4)
8 | 1000 | 8
9 | 1001 | (1+8)
10 | 1010 | (2+8)
11 | 1011 | (1+2+8)
12 | 1100 | (4+8)
13 | 1101 | (1+4+8)
14 | 1110 | (2+4+8)
15 | 1111 | (1+2+4+8)
The decimal value to the left of the binary value, is the numeric value used in XOR and other bitwise operations, that represents the total value of associated bits. See Computer Number Format and Binary Number - Decimal for more details.
For example: 0011 has bits 1 and 2 on, leaving bits 4 and 8 off. This is represented as the decimal value 3, to signify the bits that are on, and displayed in expanded form as 1+2.
As for what's going on with the logic behind XOR here are some examples
From the original post
2^3 = 1
2 is a member of 1+2 (3) remove 2 = 1
2^5 = 7
2 is not a member of 1+4 (5) add 2 = 1+2+4 (7)
2^10 = 8
2 is a member of 2+8 (10) remove 2 = 8
Further examples
1^3 = 2
1 is a member of 1+2 (3) remove 1 = 2
4^5 = 1
4 is a member of 1+4 (5) remove 4 = 1
4^4 = 0
4 is a member of itself remove 4 = 0
1^2^3 = 0 Logic: ((1^2)^(1+2))
(1^2) 1 is not a member of 2 add 2 = 1+2 (3)
(3^3) 1 and 2 are members of 1+2 (3) remove 1+2 (3) = 0
1^1^0^1 = 1 Logic: (((1^1)^0)^1)
(1^1) 1 is a member of 1 remove 1 = 0
(0^0) 0 is a member of 0 remove 0 = 0
(0^1) 0 is not a member of 1 add 1 = 1
1^8^4 = 13 Logic: ((1^8)^4)
(1^8) 1 is not a member of 8 add 1 = 1+8 (9)
(9^4) 1 and 8 are not members of 4 add 1+8 = 1+4+8 (13)
4^13^10 = 3 Logic: ((4^(1+4+8))^(2+8))
(4^13) 4 is a member of 1+4+8 (13) remove 4 = 1+8 (9)
(9^10) 8 is a member of 2+8 (10) remove 8 = 2
1 is not a member of 2+8 (10) add 1 = 1+2 (3)
4^10^13 = 3 Logic: ((4^(2+8))^(1+4+8))
(4^10) 4 is not a member of 2+8 (10) add 4 = 2+4+8 (14)
(14^13) 4 and 8 are members of 1+4+8 (13) remove 4+8 = 1
2 is not a member of 1+4+8 (13) add 2 = 1+2 (3)
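These are easy to verify interactively; for instance in Python, which gives the same results as C#'s ^ on integers:

print(2 ^ 3, 2 ^ 5, 2 ^ 10)       # 1 7 8
print(1 ^ 3, 4 ^ 5, 4 ^ 4)        # 2 1 0
print(1 ^ 2 ^ 3, 1 ^ 1 ^ 0 ^ 1)   # 0 1
print(1 ^ 8 ^ 4, 4 ^ 13 ^ 10)     # 13 3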
To see how it works, first you need to write both operands in binary, because bitwise operations work on individual bits.
Then you can apply the truth table for your particular operator. It acts on each pair of bits having the same position in the two operands (the same place value). So the leftmost bit (MSB) of A is combined with the MSB of B to produce the MSB of the result.
Example: 2^10:
    0010    2
XOR 1010    8 + 2
    ----
    1       xor(0, 1)
    0       xor(0, 0)
    0       xor(1, 1)
    0       xor(0, 0)
    ----
  = 1000    8
And the result is 8.
The other way to show this is to use the algebra of XOR; you do not need to know anything about individual bits.
For any numbers x, y, z:
XOR is commutative: x ^ y == y ^ x
XOR is associative: x ^ (y ^ z) == (x ^ y) ^ z
The identity is 0: x ^ 0 == x
Every element is its own inverse: x ^ x == 0
Given this, it is easy to prove the result stated. Consider a sequence:
a ^ b ^ c ^ d ...
Since XOR is commutative and associative, the order does not matter. So sort the elements.
Now any adjacent identical elements x ^ x can be replaced with 0 (self-inverse property). And any 0 can be removed (because it is the identity).
Repeat as long as possible. Any number that appears an even number of times has an integral number of pairs, so they all become 0 and disappear.
Eventually you are left with just one element: the one appearing an odd number of times. Each pair of occurrences cancels out, so only a single occurrence of that element survives.
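This argument translates directly into code; a minimal sketch (the function name and sample data are mine, not from the question):

from functools import reduce
from operator import xor

def odd_one_out(nums):
    # pairs cancel to 0; the element with an odd count survives
    return reduce(xor, nums, 0)

print(odd_one_out([2, 3, 2, 4, 4]))   # 3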
[update]
Note that this proof only requires certain assumptions about the operation. Specifically, suppose a set S with an operator . has the following properties:
Associativity: x . (y . z) = (x . y) . z for any x, y, and z in S.
Identity: There exists a single element e such that e . x = x . e = x for all x in S.
Closure: For any x and y in S, x . y is also in S.
Self-inverse: For any x in S, x . x = e
As it turns out, we need not assume commutativity; we can prove it:
(x . y) . (x . y) = e (by self-inverse)
x . (y . x) . y = e (by associativity)
x . x . (y . x) . y . y = x . e . y (multiply both sides by x on the left and y on the right)
y . x = x . y (because x . x = y . y = e and the e's go away)
Now, I said that "you do not need to know anything about individual bits". I was thinking that any group satisfying these properties would be enough, and that such a group need not necessarily be isomorphic to the integers under XOR.
But @Steve Jessup proved me wrong in the comments. If you define scalar multiplication by {0,1} as:
0 * x = 0
1 * x = x
...then this structure satisfies all of the axioms of a vector space over the integers mod 2.
Thus any such structure is isomorphic to a set of vectors of bits under component-wise XOR.
This is based on the simple fact that XOR of a number with itself results in zero, and XOR of a number with 0 results in the number itself.
So, if we have an array = {5,8,12,5,12}.
5 occurs 2 times.
8 occurs 1 time.
12 occurs 2 times.
We have to find the number occurring odd number of times. Clearly, 8 is the number.
We start with res=0 and XOR with all the elements of the array.
int res = 0;
for (int i : array)
    res = res ^ i;
1st Iteration: res = 0^5 = 5
2nd Iteration: res = 5^8
3rd Iteration: res = 5^8^12
4th Iteration: res = 5^8^12^5 = 0^8^12 = 8^12
5th Iteration: res = 8^12^12 = 8^0 = 8
The bitwise operators treat the bits inside an integer value as a tiny array of bits. Each of those bits is like a tiny bool value. When you use the bitwise exclusive or operator, one interpretation of what the operator does is:
for each bit in the first value, toggle the bit if the corresponding bit in the second value is set
The net effect is that a single bit starts out false and if the total number of "toggles" is even, it will still be false at the end. If the total number of "toggles" is odd, it will be true at the end.
Just think "tiny array of boolean values" and it will start to make sense.
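For example (a tiny illustration, not from the answer above), toggling one of those "boolean values" twice brings the number back to its original state:

x = 0b1010          # bit with value 2 is currently set
x ^= 0b0010         # toggle it off -> 0b1000 (8)
x ^= 0b0010         # toggle it back on -> 0b1010 (10)
print(x)            # 10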
The definition of the XOR (exclusive OR) operator, over bits, is that:
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0
One of the ways to imagine it, is to say that the "1" on the right side changes the bit from the left side, and 0 on the right side doesn't change the bit on the left side. However, XOR is commutative, so the same is true if the sides are reversed.
As any number can be represented in binary form, any two numbers can be XOR-ed together.
To prove it being commutative, you can simply look at its definition, and see that for every combination of bits on either side, the result is the same if the sides are changed. To prove it being associative, you can simply run through all possible combinations of having 3 bits being XOR-ed to each other, and the result will stay the same no matter what the order is.
Now, as we proved the above, let's see what happens if we XOR the same number at itself. Since the operation works on individual bits, we can test it on just two numbers: 0 and 1.
0 XOR 0 = 0
1 XOR 1 = 0
So, if you XOR a number onto itself, you always get 0 (believe it or not, but that property of XOR has been used by compilers, when a 0 needs to be loaded into a CPU register. It's faster to perform a bit operation than to explicitly push 0 into a register. The compiler will just produce assembly code to XOR a register onto itself).
Now, X XOR X is 0 and XOR is associative, and you need to find out which number hasn't been repeated in a sequence of numbers where all other numbers have been repeated twice (or any other even number of times). If we group the repeating numbers together, each pair XORs to 0. Anything that is XOR-ed with 0 remains itself. So, after XOR-ing the whole sequence, you are left with the one number that appears an odd number of times.
This has a lot of samples of various functionalities done by bit fiddling. Some of can be quite complex so beware.
What you need to do to understand the bit operations is, at least, this:
the input data, in binary form
a truth table that tells you how to "mix" the inputs to form the result
For XOR, the truth table is simple:
1^1 = 0
1^0 = 1
0^1 = 1
0^0 = 0
To obtain bit n in the result you apply the rule to bits n in the first and second inputs.
If you try to calculate 1^1^0^1 or any other combination, you will discover that the result is 1 if there is an odd number of 1's and 0 otherwise. You will also discover that any number XOR'ed with itself is 0 and that it doesn't matter in what order you do the calculations, e.g. 1^1^(0^1) = 1^(1^0)^1.
This means that when you XOR all the numbers in your list, the ones which are duplicates (or present an even number of times) will XOR to 0 and you will be left with just the one which is present an odd number of times.
As is obvious from the name (bitwise), it operates on individual bits.
Let's see how it works,
for example, we have two numbers a = 3 and b = 4;
the binary representation of 3 is 011 and of 4 is 100. XOR of equal bits is 0, and of opposite bits it is 1.
So in the given example, 3^4 (where "^" is the XOR symbol) gives us 111, whose decimal value is 7.
For another example, suppose you're given an array in which every element occurs twice except one element, and you have to find that element.
How can you do that? Simply XOR all of the elements together: XOR of two equal numbers is always 0, and the number which occurs exactly once will be your output, because XOR-ing any number with 0 gives back that same number.