Design an OR gate only using Demultiplexers - digital-design

Show the OR gate operation by only using de-multiplexers.
I know it is quite an impractical implementation, but these types of questions are asked in placement tests.
http://i.stack.imgur.com/mQAZD.png
Check out the OR gate truth table from the above link if you want.

Here is an OR gate implementation using demuxes.
Take a 1-to-2 demux: data input as 1, selection input as A.
Then the 0th output of the demux is Not(A).1 = ABar, so the demux acts as an inverter.
In a similar way BBar can be made from B.
Now take a 1-to-4 demux: data input as 1, selection inputs A and B.
Then the 0th output of that demux is ABar.BBar.1 = Not(A+B), i.e. NOR(A, B).
Finally feed that NOR output into one more 1-to-2 demux used as an inverter (data input 1, selection input Not(A+B)); its 0th output is Not(Not(A+B)) = A+B.
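To sanity-check that arrangement (a 1-to-4 demux producing NOR, followed by a 1-to-2 demux acting as an inverter), here is a minimal Python sketch; the helper names demux1to2 and demux1to4 are just illustrative, not standard part names.

def demux1to2(data, sel):
    # Route data to output number sel; the other output stays 0.
    return (data if sel == 0 else 0, data if sel == 1 else 0)

def demux1to4(data, s1, s0):
    # Route data to output number 2*s1 + s0; all other outputs stay 0.
    outputs = [0, 0, 0, 0]
    outputs[2 * s1 + s0] = data
    return tuple(outputs)

def or_from_demux(a, b):
    nor_ab = demux1to4(1, a, b)[0]   # output 0 is 1 only when a = b = 0, i.e. NOR(a, b)
    return demux1to2(1, nor_ab)[0]   # output 0 is 1 only when nor_ab = 0, i.e. a OR b

for a in (0, 1):
    for b in (0, 1):
        assert or_from_demux(a, b) == (a | b)
        print(a, b, or_from_demux(a, b))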

If you feed A & B as the select inputs to a 1-4 demux whose data input is tied to 1, then the output that is 1 when A=0 & B=0 will be the negation of your desired OR; feed that into a 1-2 demux as its select (again with data input 1), and then the output that is the opposite of that select will be your OR. (Sorry I can't be more precise; not sure what naming convention you are using for your demuxes.)

Related

How to create a logic circuit that takes 3 inputs and returns 1 only when all the inputs are equal?

I'm struggling with creating a logic circuit that implements the following function:
(A'B'C') + (ABC)
You might be interested in checking out Karnaugh maps.
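As a quick sanity check (a small Python sketch, not part of the original answer), you can enumerate the truth table and confirm that A'B'C' + ABC is 1 exactly when all three inputs are equal:

from itertools import product

for a, b, c in product((0, 1), repeat=3):
    expr = (1 - a) * (1 - b) * (1 - c) + a * b * c   # A'B'C' + ABC
    assert expr == int(a == b == c)
    print(a, b, c, expr)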

Perl 6 calculate the average of an int array at once using reduce

I'm trying to calculate the average of an integer array using the reduce function in one step. I can't do this:
say (reduce {($^a + $^b)}, <1 2 3>) / <1 2 3>.elems;
because it calculates the average in 2 separate pieces.
I need to do it like:
say reduce {($^a + $^b) / .elems}, <1 2 3>;
but it doesn't work of course.
How to do it in one step? (Using map or some other function is welcomed.)
TL;DR This answer starts with an idiomatic way to write equivalent code before discussing P6 flavored "tacit" programming and increasing brevity. I've also added "bonus" footnotes about the hyperoperation Håkon++ used in their first comment on your question.5
Perhaps not what you want, but an initial idiomatic solution
We'll start with a simple solution.1
P6 has built in routines2 that do what you're asking. Here's a way to do it using built in subs:
say { sum($_) / elems($_) }(<1 2 3>); # 2
And here it is using corresponding3 methods:
say { .sum / .elems }(<1 2 3>); # 2
What about "functional programming"?
First, let's replace .sum with an explicit reduction:
.reduce(&[+]) / .elems
When & is used at the start of an expression in P6 you know the expression refers to a Callable as a first class citizen.
A longhand way to refer to the infix + operator as a function value is &infix:<+>. The shorthand way is &[+].
As you clearly know, the reduce routine takes a binary operation as an argument and applies it to a list of values. In method form (invocant.reduce) the "invocant" is the list.
The above code calls two methods -- .reduce and .elems -- that have no explicit invocant. This is a form of "tacit" programming; methods written this way implicitly (or "tacitly") use $_ (aka "the topic" or simply "it") as their invocant.
Topicalizing (explicitly establishing what "it" is)
given binds a single value to $_ (aka "it") for a single statement or block.
(That's all given does. Many other keywords also topicalize but do something else too. For example, for binds a series of values to $_, not just one.)
Thus you could write:
say .reduce(&[+]) / .elems given <1 2 3>; # 2
Or:
$_ = <1 2 3>;
say .reduce(&[+]) / .elems; # 2
But given that your focus is FP, there's another way that you should know.
Blocks of code and "it"
First, wrap the code in a block:
{ .reduce(&[+]) / .elems }
The above is a Block, and thus a lambda. Lambdas without a signature get a default signature that accepts one optional argument.
Now we could again use given, for example:
say do { .reduce(&[+]) / .elems } given <1 2 3>; # 2
But we can also just use ordinary function call syntax:
say { .reduce(&[+]) / .elems }(<1 2 3>)
Because a postfix (...) calls the Callable on its left, and because in the above case one argument is passed in the parens to a block that expects one argument, the net result is the same as the do4 and the given in the prior line of code.
Brevity with built ins
Here's another way to write it:
<1 2 3>.&{.sum/.elems}.say; #2
This calls a block as if it were a method. Imo that's still eminently readable, especially if you know P6 basics.
Or you can start to get silly:
<1 2 3>.&{.sum/$_}.say; #2
This is still readable if you know P6. The / is a numeric (division) operator. Numeric operators coerce their operands to be numeric. In the above $_ is bound to <1 2 3> which is a list. And in Perls, a collection in numeric context is its number of elements.
Changing P6 to suit you
So far I've stuck with standard P6.
You can of course write subs or methods and name them using any Unicode letters. If you want single letter aliases for sum and elems then go ahead:
my (&s, &e) = &sum, &elems;
But you can also extend or change the language as you see fit. For example, you can create user defined operators:
#| LHS ⊛ RHS.
#| LHS is an arbitrary list of input values.
#| RHS is a list of reducer function, then functions to be reduced.
sub infix:<⊛> (@lhs, *@rhs (&reducer, *@fns where *.all ~~ Callable)) {
  reduce &reducer, @fns».(@lhs)
}
say <1 2 3> ⊛ (&[/], &sum, &elems); # 2
I won't bother to explain this for now. (Feel free to ask questions in the comments.) My point is simply to highlight that you can introduce arbitrary (prefix, infix, circumfix, etc.) operators.
And if custom operators aren't enough you can change any of the rest of the syntax. cf "braid".
Footnotes
1 This is how I would normally write code to do the computation asked for in the question. @timotimo++'s comment nudged me to alter my presentation to start with that, and only then shift gears to focus on a more FPish solution.
2 In P6 all built in functions are referred to by the generic term "routine" and are instances of a sub-class of Routine -- typically a Sub or Method.
3 Not all built in subroutines have correspondingly named method routines, and vice-versa. Also, sometimes there are correspondingly named routines but they don't work exactly the same way (the most common difference being whether or not the first argument to the sub is the same as the "invocant" in the method form). In addition, you can call a subroutine as if it were a method using the syntax .&foo for a named Sub or .&{ ... } for an anonymous Block, or call a method foo in a way that looks rather like a subroutine call using the syntax foo invocant: or foo invocant: arg2, arg3 if it has arguments beyond the invocant.
4 If a block is used where it should obviously be invoked then it is. If it's not invoked then you can use an explicit do statement prefix to invoke it.
5 Håkon's first comment on your question used "hyperoperation". With just one easy-to-recognize-and-remember "metaop" (for unary operations) or a pair of them (for binary operations), hyperoperations distribute an operation to all the "leaves"6 of a data structure (for a unary operation) or create a new data structure based on pairing up the "leaves" of a pair of data structures (for binary operations). NB. Hyperoperations are done in parallel7.
6 What is a "leaf" for a hyperoperation is determined by a combination of the operation being applied (see the is nodal trait) and whether a particular element is Iterable.
7 Hyperoperation is applied in parallel, at least semantically. Hyperoperation assumes8 that the operations on the "leaves" have no mutually interfering side-effects -- that is to say, that any side effect when applying the operation to one "leaf" can safely be ignored with respect to applying the operation to any other "leaf".
8 By using a hyperoperation the developer is declaring that the assumption of no meaningful side-effects is correct. The compiler will act on the basis it is, but will not check that it is true. In the safety sense it's like a loop with a condition. The compiler will follow the dev's instructions, even if the result is an infinite loop.
Here is an example using given and the reduction meta operator:
given <1 2 3> { say ([+] $_)/$_.elems } ;

How to represent Graham's number in programming language (specifically python)?

I'm new to programming, and I would like to know how to represent Graham's number in python (the language I decided to learn as my first). I can show someone in real life on paper how to somewhat get to Graham's number, but for anyone who doesn't know what it is, here it is.
So imagine you have a 3. Now I can represent powers of 3 using ^ (up arrow). So 3^3 = 27. 3^^3 = 3^3^3 = 3^(3^3) = 3^27 = 7625597484987. 3^^^3 = 3^7625597484987 = scary big number. Now we can see that every time you add an up arrow, the number gets massively big.
Now imagine 3^^^^3 = 3^(3^7625597484987)... this number is stupid big.
So now that we have 3^(3^7625597484987), we will call this G1. That's 3^^^^3 (4 arrows in between the 3's).
Now G2 is basically 3 with G1 number of arrows in between them. Whatever number 3^(3^7625597484987) is, is the number of arrows in between the 2 3's of G2. So this number has 3^(3^7625597484987) number of arrows.
Now G3 has G2 number of arrows in between the 2 3's. We can see that this number (G3) is just huge. Stupidly huge. So each G has the number of arrows represented by the G before the current G.
Do this over and over again, placing the previous G's number of arrows into the next number. Do this until G64; THAT'S Graham's number.
Now here is my question. How do you represent "the number of a certain thing (in this case arrows) is the number of arrows in the next G"? How do you represent, in programming, that the number of something "goes into" the next term? If I can't do it in Python, please post which languages this would be possible in. Thanks for any responses.
This is an example of a recursively defined number, which can be expressed using a base case and a recursive case. These two cases capture the idea of "every output is calculated from a previous output following the rule '____,' except for the first one which is ____."
A simple canonical one would be the factorial, which uses the base case 1! = 1 (or sometimes 0! = 1) and the recursive case n! = n * (n-1)! (when n>1). Or, in a more code-like fashion, f(1) = 1 and f(n) = n * f(n-1). For more information on how to write a function that behaves like this, find a good resource to learn about recursion.
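For instance, here is the factorial written as a recursive Python function, mirroring the base case and recursive case just described (a minimal sketch):

def factorial(n):
    # Base case: f(1) = 1.
    if n == 1:
        return 1
    # Recursive case: f(n) = n * f(n-1).
    return n * factorial(n - 1)

print(factorial(5))  # 120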
The G1, G2, G3, etc. in the definition of Graham's number are like these calls to the previous results. You need a function to represent an "arrow" so you can call an arbitrary number of them. So to translate your mental model into code:
"[...] we will call this [3^^^^3] G1. [...] 'The number of a certain thing (in this case arrows) is the number of arrows in the next G' [...]"
Let the aforementioned function be arrow(n) = 3^^^...^3 (where there are n arrows). Then to get Graham's number, your base case will be G(1) = arrow(4), and your recursive case will be G(n) = arrow(G(n-1)) (when n>1).
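In Python the outer recursion might be sketched like this; arrow() is deliberately left as a stub, since translating its definition into code is the exercise mentioned below, and the names G and arrow simply follow this answer:

def arrow(n):
    # 3^^...^3 with n arrows -- left as the exercise described below.
    raise NotImplementedError

def G(n):
    # Base case: G(1) = arrow(4), i.e. 3^^^^3.
    if n == 1:
        return arrow(4)
    # Recursive case: G(n) = arrow(G(n-1)).
    return arrow(G(n - 1))

# Graham's number would be G(64) -- far too large to ever actually compute.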
Note that arrow(n) will also have a recursive definition (see Knuth's up-arrow notation), as you showed but didn't describe, by calculating each output from the previous (emphasis added):
3^3 = 27.
3^^3 = 3^3^3 = 3^(3^3) = 3^27 = 7625597484987.
3^^^3 = 3^7625597484987 = scary big number.
You might describe this as "every output [arrow(n)] is 3 to the power of the previous output, except for the first one [arrow(1)] which is just 3." I'll leave the translation from description to definition as an exercise.
(Also, I hope you didn't actually want to represent Graham's number, but rather just its calculation, since it's too big even for Python or any computer for that matter.)
class GrahamsNumber(object):
    pass

G = GrahamsNumber()
For most purposes, this is as good of a representation of Graham's number as any other one. Sure, you can't do anything useful with G, but for the most part you can't do anything useful with Graham's number either, so it accurately reflects realistic use cases.
If you do have some specific use cases (and they're feasible), then a representation can be tailored to allow those use cases; but we can't guess what you want to be able to do with G, you have to spell it out explicitly.
For fun, I implemented hyperoperators in python as the module hyperop and one of my examples is Graham's number:
from hyperop import hyperop

def GrahamsNumber():
    # This may take a while...
    g = 4
    for n in range(1, 64 + 1):
        g = hyperop(g + 2)(3, 3)
    return g
The magic is in the recursion, which loosely stated looks like H[n](a, b) = reduce(lambda x, y: H[n-1](y, x), [a] * b).
To answer your question, this function (in theory) will calculate the number in question. Since there is no way this program will ever finish before the heat death of the Universe, you gain more of an understanding from the "function" itself than the actual value.
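To make that loosely stated recursion concrete, here is a rough pure-Python sketch of the same idea; it is not the hyperop module itself, and the function name hyper plus the n == 1 base case are choices made here purely for illustration:

from functools import reduce

def hyper(n, a, b):
    # Level-n hyperoperation: hyper(1,a,b) = a+b, hyper(2,a,b) = a*b,
    # hyper(3,a,b) = a**b, hyper(4,a,b) = a^^b (tetration), and so on.
    if n == 1:
        return a + b
    # Fold b copies of a with the next-lower operation, right-associatively.
    return reduce(lambda x, y: hyper(n - 1, y, x), [a] * b)

print(hyper(3, 3, 3))  # 27, i.e. 3^3
print(hyper(4, 2, 3))  # 16, i.e. 2^^3 = 2^(2^2)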

How does is_integer(X) procedure work?

I was reading about Prolog in the Clocksin and Mellish book and got to this:
is_integer(0).
is_integer(X) :- is_integer(Y), X is Y + 1.
With the query ?- is_integer(X). the zero output is easy, but how does it get 1, 2, 3, 4, ...?
I know it is not easy to explain in writing only, but I will appreciate any attempt.
After the first result X = 0 I hit ;. Does the query then become is_integer(0), or is it still is_integer(X)?
I have been searching a long time for a good explanation of this issue. Thanks in advance.
This strikes to the heart of what makes Prolog so interesting and difficult. You're definitely not stupid, it's just extremely different.
There are two rules here. The existence of alternatives causes choice points to be created. You can think of the choice point as a moment when Prolog saw an alternate way of proceeding with the computation. Prolog always tries rules in the order they appear in the database, which will correspond to the order they appear in the source file. So when you issue the query is_integer(X), the first rule matches and unifies X with 0. This is your first result.
When you press ';' you are telling Prolog to fail, that this answer is not acceptable, which triggers backtracking. The only thing for Prolog to do is try entering the other rule, whose body begins with is_integer(Y). Y is a new variable; it may or may not wind up instantiated to the same value as X; so far you haven't seen any reason why that wouldn't be the case.
This call, is_integer(Y), essentially duplicates the computation that's been attempted so far. It will enter the first rule, is_integer(0), and try that. This rule will succeed, Y will be unified with 0, then X will be unified with Y+1, and the rule will exit having unified X with 1.
When you press ';' again, Prolog will back up to the nearest choice point. This time, the nearest choice point is the call is_integer(Y) within the second rule for is_integer/1. So the depth of the call stack is greater, but we haven't left the second rule. Indeed, each subsequent result is obtained by backtracking from the first rule to the second rule at this position, inside the previous activation of the second rule. I doubt very seriously that a verbal explanation like the preceding is going to help on its own, so please look at this rough ASCII art of how the call tree evolves:
1    2       2
    /         \
   1           2
              /
             1

^    ^       ^
|    |       |
0    |       |
    1+0      |
          1+(1+0)
where the numbers indicate which rule is activated and the level indicates the depth of the call stack. The next several steps evolve like this:
2             2
 \             \
  2             2
   \             \
    2             2
   /               \
  1                 2
                   /
                  1

^                 ^
|                 |
1+(1+(1+0))       |
    = 3      1+(1+(1+(1+0)))
                 = 4
Notice that we always produce a value by increasing the stack depth by 1 and reaching the first rule.
The answer of Daniel is very good, I just want to offer another way to look at it.
Take this trivial Prolog definition of natural numbers based on TNT (so 0 is 0, 1 is s(0), 2 is s(s(0)) etc):
n(0). % (1)
n(s(N)) :- n(N). % (2)
The declarative meaning is very clear. (1) says that 0 is a number. (2) says that s(N) is a number if N is a number. When called with a free variable:
?- n(X).
it gives you the expected X = 0 (from (1)), then looks at (2), and goes into a "new" invocation of n/1. In this new invocation, (1) succeeds, the recursive call to n/1 succeeds, and (2) succeeds with X = s(0). Then it looks at (2) of the new invocation, and so on, and so on.
This works by unification in the head of the second clause. Nothing stops you, however, from saying:
n_(0).
n_(S) :- n_(N), S = s(N).
This simply delays the unification of S with s(N) until after n_(N) is evaluated. As nothing happens between evaluating n_(N) and the unification, the result, when called with a free variable, is identical.
Do you see how this is isomorphic to your is_integer/1 predicate?
A word of warning. As pointed out in the comments, this query:
?- n_(0).
as well as the corresponding
?- is_integer(0).
have the annoying property of not terminating (you can call them with any natural number, not only 0). This is because after the first clause succeeds, the second clause still gets evaluated on backtracking; its recursive call keeps generating ever-larger candidates, and at that point you are already "past" the end-of-recursion of the first clause.
n/1 defined above does not suffer from this, as Prolog can recognize by looking at the two clause heads that only one of them can succeed (in other words, the two clauses are mutually exclusive).
I attempted to put @daniel's great answer into a graphic. I found his answer enlightening and could not have figured out what was going on here without his help. I hope that this image helps someone the way that @daniel's answer helped me!

Symbolic block matrix calculation in Maple

In Maple,
restart; with(LinearAlgebra);
E := Matrix([[A, B]]);
E . Transpose(E);
yields
A^2 + B^2
However, I would like Maple to treat A and B as block matrices and yield
A.Transpose(A) + B.Transpose(B)
Is this possible?
You'll want to use the Maple assume() command for this (link). Scroll down that link, or ctrl-f and find the part where they show how to assume that a variable is a "SquareMatrix" type. Basically, Maple is treating your variables like they are real numbers, and you need to tell it not to do that. Once you get the assume statement right, it should print out the matrix-based solution.
If you get a lot of crufty extra symbols, this might be because Maple usually flags variables for which the assume() function was used (so the user remembers they are making an assumption about that variable). For example, it often displays a as a~ if you issue an assume() regarding a. You can turn this off with the command interface(showassumed=0).
