I simulated a 4-bit ripple-carry adder built from four full adders in Verilog, and I'm trying to understand what is happening with Cout, the carry output. I can't explain how the values E and F in cout were obtained.
This is ripple_adder.v
module full_adder( A, B, CIN, Q, COUT );
input A, B, CIN;
output Q, COUT;
assign Q = A ^ B ^ CIN;
assign COUT = (A & B) | (B & CIN) | (CIN & A);
endmodule
module adder_ripple( a, b, q );
input [3:0] a, b;
output [3:0] q;
wire [3:0] cout;
full_adder add0 ( .Q(q[0]), .COUT(cout[0]),
.A(a[0]), .B(b[0]), .CIN( 1'b0) );
full_adder add1 ( .Q(q[1]), .COUT(cout[1]),
.A(a[1]), .B(b[1]), .CIN(cout[0]) );
full_adder add2 ( .Q(q[2]), .COUT(cout[2]),
.A(a[2]), .B(b[2]), .CIN(cout[1]) );
full_adder add3 ( .Q(q[3]), .COUT(cout[3]),
.A(a[3]), .B(b[3]), .CIN(cout[2]) );
endmodule
This is test bench for ripple_adder.v
`timescale 1ps/1ps
module adder_ripple_tp;
reg [3:0] a, b; // reg declaration for input
wire [3:0] q; // wire declaration for output
parameter STEP = 100000;
adder_ripple adder_ripple( a, b, q );
initial begin
$dumpfile("adder_ripple.vcd");
$dumpvars(0, adder_ripple_tp);
a = 4'h0; b = 4'h0;
#STEP a = 4'h5; b = 4'ha;
#STEP a = 4'h7; b = 4'ha;
#STEP a = 4'h1; b = 4'hf;
#STEP a = 4'hf; b = 4'hf;
#STEP $finish;
end
initial $monitor( $stime, " a=%h b=%h q=%h", a, b, q );
endmodule
The wave looks like this:
Can someone help me understand it?
The value of cout[3] represents 2^4 = 16 when it is asserted, and 0 when it is de-asserted.
For the vector where a=7 and b=0xa=10, the answer is 17, which is given by the value of q=1 plus the value of cout[3], which is 16.
cout is equal to 0xe = 4'b1110 for this vector, indicating that the sum in the least significant bit position did not carry out, while the sum in each of the other bit positions did.
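Tracing that vector bit by bit makes the 0xE visible directly. With a=7=4'b0111 and b=0xa=4'b1010, each stage's CIN is the previous stage's COUT (stage 0 has CIN tied to 0):
bit 0: a=1, b=0, cin=0 -> q[0]=1, cout[0]=0
bit 1: a=1, b=1, cin=0 -> q[1]=0, cout[1]=1
bit 2: a=1, b=0, cin=1 -> q[2]=0, cout[2]=1
bit 3: a=0, b=1, cin=1 -> q[3]=0, cout[3]=1
So q = 4'b0001 = 1 and cout = 4'b1110 = 0xE, consistent with 7 + 10 = 17 = 16 + 1.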
For the vector where a=1 and b=0xf=15, the answer is 16, which is given by the value of q=0 plus the value of cout[3], which is 16.
cout is equal to 0xf = 4'b1111 for this vector, indicating that the sum in every bit position carried out.
For the vector where a=0xf=15 and b=0xf=15, the answer is 30, which is given by the value of q=0xe=14 plus the value of cout[3], which is 16.
cout is equal to 0xf = 4'b1111 for this vector, indicating that the sum in every bit position carried out.
I am very new to Fortran and I am stuck with the following program to find the roots of a quadratic equation.
It is showing the following error:
d = sqrt(bsq – ac4)
1
Error: Syntax error in argument list at (1)
program quadratic
implicit none
real :: a, b, c, root1, root2
real :: bsq, ac4, d
print *, 'Please enter the coefficients a, b, and c as real numbers'
read *, a, b, c
bsq = b*b
ac4 = 4*a*c
if ( bsq < ac4) then
d = sqrt(bsq – ac4)
root1 = (-b+d)/(2*a)
root2 = (-b+d)/(2*a)
print *, 'The real roots are ', root1, root2
else if ( root1==root2) then
root1 = root2
print *, 'There is one real root which is ', root1
else
print *, 'There are no real roots'
end if
end program quadratic
You need a minus sign between bsq and ac4, not a dash (your source contains an en dash, which the compiler does not recognize as an operator). Look closely:
Minus sign: - (ASCII 0x2D)
Dash: – (U+2013)
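For reference, once the en dash is replaced with an ordinary ASCII minus, the offending line should read (nothing else is needed to clear this particular compile error):
d = sqrt(bsq - ac4)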
Problem Statement: The Fibonacci word sequence of bit strings is defined as:
F(0) = 0, F(1) = 1
F(n) = F(n − 1) + F(n − 2) (concatenation of bit strings) if n ≥ 2
For example : F(2) = F(1) + F(0) = 10, F(3) = F(2) + F(1) = 101, etc.
Given a bit pattern p and a number n, how often does p occur in F(n)?
Input:
The first line of each test case contains the integer n (0 ≤ n ≤ 100). The second line contains the bit
pattern p. The pattern p is nonempty and has a length of at most 100 000 characters.
Output:
For each test case, display its case number followed by the number of occurrences of the bit pattern p in
F(n). Occurrences may overlap. The number of occurrences will be less than 2^63.
Sample input:
6
10
Sample output:
Case 1: 5
I implemented a divide and conquer algorithm to solve this problem, based on the hints that I found on the internet: We can think of the process of going from F(n-1) to F(n) as a string replacement rule: every '1' becomes '10' and '0' becomes '1'. Here is my code:
#include <string>
#include <iostream>
using namespace std;
#define LL long long int
LL count = 0;
string F[40];
void find(LL n, char ch1,char ch2 ){//Find occurrences of either "11", "01" or "10" in F[n]
LL n1 = F[n].length();
for (int i = 0;i+1 <n1;++i){
if (F[n].at(i)==ch1&&F[n].at(i+1)==ch2) ++ count;
}
}
void find(char ch, LL n){
LL n1 = F[n].length();
for (int i = 0;i<n1;++i){
if (F[n].at(i)==ch) ++count;
}
}
void solve(string p, LL n){//Recursion
// cout << p << endl;
LL n1 = p.length();
if (n<=1&&n1>=2) return;//return if string pattern p's size is larger than F(n)
//When p's size is reduced to 2 or 1, it's small enough now that we can search for p directly in F(n)
if (n1<=2){
if (n1 == 2){
if (p=="00") return;//Return since there can't be two subsequent '0' in F(n) for any n
else find(n,p.at(0),p.at(1));
return;
}
if (n1 == 1){
if (p=="1") find('1',n);
else find('0',n);
return;
}
}
string p1, p2;//if the last character in p is 1, we can replace it with either '1' or '0'
//p1 stores the substring ending in '1' and p2 stores the substring ending in '0'
for (LL i = 0;i<n1;++i){//We replace every "10" with 1, "1" with 0.
if (p[i]=='1'){
if (p[i+1]=='0'&&(i+1)!= n1){
if (p[i+2]=='0'&&(i+2)!= n1) return;//Return if there are two subsequent '0'
p1.append("1");//Replace "10" with "1"
++i;
}
else {
p1.append("0");//Replace "1" with "0"
}
}
else {
if (p[i+1]=='0'&&(i+1)!= n1){//Return if there are two subsequent '0'
return;
}
p1.append("1");
}
}
solve(p1,n-1);
if (p[n1-1]=='1'){
p2 = p1;
p2.back() = '1';
solve(p2,n-1);
}
}
int main(){
F[0] = "0";F[1] = "1";
for (int i = 2;i<38;++i){
F[i].append(F[i-1]);
F[i].append(F[i-2]);
}//precalculate F(0) to F(37)
LL t = 0;//NumofTestcases
int n; string p;
while (cin >> n >> p) {
count = 0;
solve(p,n);
cout << "Case " << ++t << ": " << count << endl;
}
}
The above program works fine, but only with small inputs. When I submitted it to Codeforces I got a wrong answer, because although I shortened the pattern string p and reduced n to n', the size of F[n'] is still very large (n' >= 50). How can I modify my code to make it work in this case, or is there another approach (such as dynamic programming)? Many thanks for any advice.
More details about the problem can be found here: https://codeforces.com/group/Ir5CI6f3FD/contest/273369/problem/B
I don't have time now to try to code this up myself, but I have a suggested approach.
First, I should note that while the hint you used is certainly accurate, I don't see any straightforward way to turn it into a solution. Perhaps the correct follow-up to it would be simpler than what I'm suggesting.
My approach:
Find the first two values of n such that length(F(n)) >= length(pattern). Calculating these is a simple recursion. The important insight is that every subsequent value will start with one of these two values and will also end with one of them. (This is true for all adjacent values: for any m > n, F(m) will begin either with F(n) or with F(n - 1). It's not hard to see why.)
Calculate and cache the number of occurrences of the pattern in these two Fs, by whatever index-shifting technique makes sense.
For F(n+1) (and all subsequent values), calculate the count by adding together
The count for F(n)
The count for F(n - 1)
The count for those spanning both F(n) and F(n - 1). We can achieve that by testing every breakdown of pattern into (nonempty) prefix and suffix values (i.e., splitting at every internal index) and counting those where F(n) ends in prefix and F(n - 1) starts with suffix. But we don't have to have all of F(n) and F(n - 1) to do this. We just need the tail of F(n) and the head of F(n - 1) of the length of the pattern. So we don't need to calculate all of F(n). We just need to know which of those two initial values our current one ends with. But the start is always the predecessor, and the end oscillates between the previous two. It should be easy to keep track.
The time complexity then should be proportional to the product of n and the length of the pattern.
If I find time tomorrow, I'll see if I can code this up. But it won't be in C -- those years were short and long gone.
Collecting the list of prefix/suffix pairs can be done once, ahead of time.
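Here is a rough C++ sketch of that bookkeeping (my own code, not a patch to the poster's program; countOcc, firstChars and lastChars are names I made up). Instead of testing every prefix/suffix split explicitly, it counts matches inside the last |p|-1 characters of F(n) glued to the first |p|-1 characters of F(n-1), which is equivalent because any match of length |p| in that glued string must cross the seam. For the worst-case 100000-character patterns the naive std::string::find counter should be swapped for a linear-time matcher such as KMP.
#include <algorithm>
#include <iostream>
#include <string>
using namespace std;
typedef long long LL;

// Count (possibly overlapping) occurrences of p in s.
static LL countOcc(const string& s, const string& p) {
    LL c = 0;
    for (size_t pos = s.find(p); pos != string::npos; pos = s.find(p, pos + 1)) ++c;
    return c;
}

static string firstChars(const string& s, size_t k) { return s.substr(0, min(s.size(), k)); }
static string lastChars(const string& s, size_t k)  { return s.substr(s.size() > k ? s.size() - k : 0); }

int main() {
    int n; string p; int tc = 0;
    while (cin >> n >> p) {
        const size_t keep = p.size() - 1;   // only this many chars per side can take part in a seam match
        // State for F(k-1) ("prev") and F(k) ("cur"), starting at k = 1.
        string headPrev = "0", tailPrev = "0", headCur = "1", tailCur = "1";
        LL cntPrev = countOcc("0", p), cntCur = countOcc("1", p);
        for (int k = 2; k <= n; ++k) {
            // F(k) = F(k-1) + F(k-2): add both counts plus the matches crossing the seam.
            LL cntNext = cntCur + cntPrev
                       + countOcc(lastChars(tailCur, keep) + firstChars(headPrev, keep), p);
            string headNext = firstChars(headCur + headPrev, keep);
            string tailNext = lastChars(tailCur + tailPrev, keep);
            headPrev = headCur; tailPrev = tailCur; cntPrev = cntCur;
            headCur = headNext; tailCur = tailNext; cntCur = cntNext;
        }
        cout << "Case " << ++tc << ": " << (n == 0 ? cntPrev : cntCur) << endl;
    }
    return 0;
}
For the sample (n=6, p=10) this prints Case 1: 5; the per-test cost is about O(n * |p|^2) with the naive counter and O(n * |p|) if the counting is done with KMP.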
I am trying to write a program which prints Pythagorean triples (a^2 + b^2 = c^2) for a given range N where a<=b<=c<=N.
#include <stdio.h>
int main()
{
int a = 0, b = 0, c = 0, N, T,c2;
scanf("%d", &T);
while(T--)
{
int counter = 0;
scanf("%d", &N);
{
for (c = 0; c <=N; c++)
{
for (b = 0; b < c; b++)
{
for (a = 0; a < b; a++)
{
c2 = c*c;
if (a*a + b*b == c2 )
//if(sqrt (pow(a,2) + pow(b,2)) == c)
{
++counter;
printf("\n %d , %d, %d \n", a, b, c);
}
}
}
}
}
printf("%d\n", counter);
}
return 0;
}
This works well for N < 1000. For higher N, say 10000, it takes a lot of time.
Is there any better way to optimize this program, or a better algorithm than brute force, so that it takes less time to compute for higher N?
By number theory, primitive Pythagorean triples are parametrized by (2pq, p^2 - q^2, p^2 + q^2) with p > q coprime and of opposite parity; every other triple is an integer multiple of a primitive one. You can enumerate over these and just abort whenever c > N. This is essentially optimal, since you do about as many computations as there are triples...
First, you can compute c2 as soon as a new value of c is available:
for (c = 0; c <=N; c++)
{
/* compute c2 here */
This saves the time to compute it over and over for each b and a.
The same goes for b: it is possible to compute the square of b as soon as b is available, instead of for each value of a. Your compiler may automatically apply these optimizations, but not necessarily.
Lastly, there is only one value of a that can make the equation true, and this value is sqrt(c2 - b2). For large values of b, it is much faster to compute this expression and to check whether it makes the equation true than to test all values between 0 and b. If you use double-precision computation for sqrt(c2 - b2), then floating-point approximations will not be an issue until N is about 2^26.
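A rough sketch of those points combined (not the poster's code; the multiple-test-case loop is dropped for brevity). The exact integer comparison a*a == rest guards against any rounding in sqrt:
#include <cmath>
#include <cstdio>

int main() {
    long long N;
    if (std::scanf("%lld", &N) != 1) return 1;
    long long counter = 0;
    for (long long c = 1; c <= N; ++c) {
        long long c2 = c * c;                               // hoisted: computed once per c
        for (long long b = 1; b < c; ++b) {
            long long rest = c2 - b * b;                    // a*a must equal this
            long long a = std::llround(std::sqrt((double)rest));
            if (a >= 1 && a < b && a * a == rest) {         // exact check repairs any float error
                ++counter;
                std::printf("%lld, %lld, %lld\n", a, b, c);
            }
        }
    }
    std::printf("%lld\n", counter);
    return 0;
}
This drops the cost from O(N^3) to O(N^2), roughly 10^8 inner iterations for N = 10000, which is already manageable.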
You can reduce one variable by the following calculations. If we assume that
a = m^2 - n^2, b = 2mn, c = m^2 + n^2
Let's check that it satisfies a^2 + b^2 = c^2:
a^2+b^2 = (m^2 - n^2)^2 + (2mn)^2 = (m^2 + n^2)^2 = c^2.
Now, we can iterate through all possible m and n and generate corresponding a, b, c.
It's the fastest method I ever used. However, I don't know if there exist any O(1) or O(log(n)) mathematical solutions.
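A minimal sketch of that enumeration (my own code, assuming C++17 for std::gcd). Restricting (m, n) to coprime pairs of opposite parity makes each primitive triple appear exactly once; scaling by k then produces the non-primitive ones, so nothing is printed twice:
#include <algorithm>
#include <cstdio>
#include <numeric>   // std::gcd (C++17)

int main() {
    long long N;
    if (std::scanf("%lld", &N) != 1) return 1;
    long long counter = 0;
    for (long long m = 2; m * m + 1 <= N; ++m) {
        for (long long n = 1; n < m && m * m + n * n <= N; ++n) {
            if ((m - n) % 2 == 1 && std::gcd(m, n) == 1) {   // (m, n) yields a primitive triple
                long long a = m * m - n * n, b = 2 * m * n, c = m * m + n * n;
                for (long long k = 1; k * c <= N; ++k) {     // its multiples give the rest
                    ++counter;
                    std::printf("%lld, %lld, %lld\n", k * std::min(a, b), k * std::max(a, b), k * c);
                }
            }
        }
    }
    std::printf("%lld\n", counter);
    return 0;
}
The outer loops only examine on the order of N/2 candidate (m, n) pairs, which is tiny compared with the brute-force triple loop.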
Can anyone explain to me in detail how this log2 function works:
inline float fast_log2 (float val)
{
int * const exp_ptr = reinterpret_cast <int *> (&val);
int x = *exp_ptr;
const int log_2 = ((x >> 23) & 255) - 128;
x &= ~(255 << 23);
x += 127 << 23;
*exp_ptr = x;
val = ((-1.0f/3) * val + 2) * val - 2.0f/3; // (1)
return (val + log_2);
}
IEEE floats internally have an exponent E and a mantissa M, each represented as binary integers. The actual value is basically
2^E * M
Basic logarithmic math says:
log2(2^E * M)
= log2(2^E) + log2(M)
= E + log2(M)
The first part of your code separates E and M. The line commented (1) computes log2(M) by using a polynomial approximation. The final line adds E and the result of the approximation.
It's an approximation. It first takes log2 of the exponent directly (trivial to do), then uses an approximation formula for log2 of the mantissa. It then adds these two log2 components to give the final result.
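A quick sanity check of that E + log2(M) decomposition (using std::frexp, which normalizes the mantissa to [0.5, 1) instead of the [1, 2) range the bit trick produces; the identity log2(v) = E + log2(M) is the same either way):
#include <cmath>
#include <cstdio>

int main() {
    const float vals[] = {0.1f, 1.0f, 3.5f, 1024.0f};
    for (float v : vals) {
        int e;
        float m = std::frexp(v, &e);              // v = m * 2^e, with m in [0.5, 1)
        float recombined = e + std::log2(m);      // exponent part + mantissa part
        std::printf("v=%g  log2(v)=%f  e+log2(m)=%f\n", v, std::log2(v), recombined);
    }
    return 0;
}
The two columns agree to float precision; fast_log2 differs only in that it extracts E and M with bit operations and replaces log2(M) with the quadratic polynomial on the line marked (1).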
Design a function f such that:
f(f(x)) == 1/x
Where x is a 32 bit float
Or how about
Given a function f, find a function g
such that
f(x) == g(g(x))
See Also
Interview question: f(f(n)) == -n
For the first part: this one is more trivial than f(f(x)) = -x, IMO:
float f(float x)
{
return x >= 0 ? -1.0/x : -x;
}
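A quick numerical check of that definition (the same function, just wrapped in a tiny test harness):
#include <cstdio>

static float f(float x) { return x >= 0 ? -1.0f / x : -x; }

int main() {
    const float xs[] = {4.0f, 0.5f, -3.0f, -0.25f};
    for (float x : xs)
        std::printf("x=%g  f(f(x))=%g  1/x=%g\n", x, f(f(x)), 1.0f / x);
    return 0;
}
For x > 0 the first call produces -1/x, which is negative, and the second call negates it back to 1/x; for x < 0 the two branches are used in the opposite order.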
The second part is an interesting question and an obvious generalization of the original question that this question was based on. There are two basic approaches:
a numerical method, such that x ≠ f(x) ≠ f(f(x)), which I believe was more in the spirit of the original question, but I don't think is possible in the general case
a method that involves g(g(x)) invoking f exactly once
Well, here's the C quick hack:
extern double f(double x);
double g(double x)
{
static int parity = 0;
parity ^= 1;
return (parity ? x : f(x));
}
However, this breaks down if you do:
a = g(4.0); // => a = 4.0, parity = 1
b = g(2.0); // => b = f(2.0), parity = 0
c = g(a); // => c = 4.0, parity = 1
d = g(b); // => d = f(f(2.0)), parity = 0
In general, if f is a bijection f : D → D, what you need is a function σ that partitions the domain D into A and B such that:
1. D = A ∪ B (the partition is total),
2. ∅ = A ∩ B (the partition is disjoint),
3. σ(a) ∈ B, f(a) ∈ A ∀ a ∈ A,
4. σ(b) ∈ A, f(b) ∈ B ∀ b ∈ B,
5. σ has an inverse σ⁻¹ s.t. σ(σ⁻¹(d)) = σ⁻¹(σ(d)) = d ∀ d ∈ D,
6. σ(f(d)) = f(σ(d)) ∀ d ∈ D.
Then, you can define g thusly:
g(a) = σ(f(a)) ∀ a ∈ A
g(b) = σ⁻¹(b) ∀ b ∈ B
This works b/c
∀ a ∈ A, g(g(a)) = g(σ(f(a))). By (3), f(a) ∈ A, so σ(f(a)) ∈ B, so g(σ(f(a))) = σ⁻¹(σ(f(a))) = f(a).
∀ b ∈ B, g(g(b)) = g(σ⁻¹(b)). By (4), σ⁻¹(b) ∈ A, so g(σ⁻¹(b)) = σ(f(σ⁻¹(b))) = f(σ(σ⁻¹(b))) = f(b).
You can see from Miles' answer that, if we ignore 0, the operation σ(x) = -x works for f(x) = 1/x. You can check conditions 1-6 yourself (for D = the nonzero reals), with A being the positive numbers and B being the negative numbers. With the double-precision standard, there's a +0, a -0, a +inf, and a -inf, and these can be used to make the domain total (apply to all double-precision numbers, not just the nonzero ones).
The same method can be applied to the f(f(n)) == -n problem: the accepted solution there partitions the space by the remainder mod 2, using σ(x) = (x - 1), handling the zero case specially.
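A small sketch of that construction for f(x) = 1/x, with σ(x) = -x, A the positive numbers and B the negative numbers (zero and the infinities are ignored here for brevity; σ is its own inverse, so σ⁻¹ = σ):
#include <cstdio>

static double f(double x)     { return 1.0 / x; }
static double sigma(double x) { return -x; }

static double g(double x) {
    if (x > 0) return sigma(f(x));   // a in A: g(a) = σ(f(a))
    else       return sigma(x);      // b in B: g(b) = σ⁻¹(b) = -b
}

int main() {
    const double xs[] = {2.0, 0.5, -3.0, -0.125};
    for (double x : xs)
        std::printf("x=%g  g(g(x))=%g  f(x)=1/x=%g\n", x, g(g(x)), f(x));
    return 0;
}
This is, of course, just the x >= 0 ? -1.0/x : -x answer above restated in the σ vocabulary.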
I like the javascript/lambda suggestion from the earlier thread:
function f(x)
{
if (typeof x == "function")
return x();
else
return function () {return 1/x;}
}
The other solutions hint at needing extra state. Here's a more mathematical justification of that:
let f(x) = 1/(x^i) = x^(-i)
(where ^ denotes exponentiation, and i is the imaginary constant sqrt(-1))
f(f(x)) = (x^(-i))^(-i) = x^((-i)*(-i)) = x^(-1) = 1/x
So a solution exists for complex numbers. I don't know if there is a general solution sticking strictly to Real numbers.
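As a numerical check of the x^(-i) idea with std::complex: because std::pow uses the principal branch of the logarithm, this only behaves for x with |ln x| < π (roughly 0.04 < x < 23), so treat it as a toy rather than a real-number answer.
#include <complex>
#include <cstdio>

int main() {
    const std::complex<double> minus_i(0.0, -1.0);
    const double xs[] = {2.0, 0.5, 10.0};
    for (double xr : xs) {
        std::complex<double> fx  = std::pow(std::complex<double>(xr, 0.0), minus_i);  // f(x) = x^(-i)
        std::complex<double> ffx = std::pow(fx, minus_i);                             // f(f(x))
        std::printf("x=%g  f(f(x)) = %g%+gi   (1/x = %g)\n", xr, ffx.real(), ffx.imag(), 1.0 / xr);
    }
    return 0;
}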
If f(x) == g(g(x)), then g is known as the functional square root of f. I don't think there's a closed form in general, even if you allow x to be complex (you may want to go to mathoverflow to discuss :) ).
Again, it's specified as a 32-bit number. Make the return have more bits, use them to carry your state information between calls.
Const
Flag = $100000000;
Function F(X : 32bit) : 64bit;
Begin
If (64BitInt(X) And Flag) > 0 then
Result := g(32bit(X))
Else
Result := 32BitInt(X) Or Flag;
End;
for any function g and any 32-bit datatype 32bit.
There is another way to solve this and it uses the concept of fractional linear transformations. These are functions that send x->(ax+b)/(cx+d) where a,b,c,d are real numbers.
For example, you can prove using some algebra that if f is defined by f(x) = (ax+1)/(-x+d) where a^2 = d^2 = 1 and a+d <> 0, then f(f(x)) = 1/x for all real x. Choosing a = 1, d = 1, this gives a solution to the problem in C++:
float f(float x)
{
return (x+1)/(-x+1);
}
The proof is f(f(x))=f((x+1)/(-x+1))=((x+1)/(-x+1)+1)/(-(x+1)/(-x+1)+1)
= (2/(1-x))/(2x/(1-x))=1/x on cancelling (1-x).
This doesn't work for x=1 or x=0 unless we allow an "infinite" value to be defined that satisfies 1/inf = 0, 1/0 = inf.
a C++ solution for g(g(x)) == f(x):
double f(double x); // the given f, assumed to be defined elsewhere
struct X{
double val;
};
X g(double x){
X ret = {x};
return ret;
}
double g(X x){
return f(x.val);
}
Here is a bit shorter version (I like this one better :-) ):
struct X{
X(double){}
bool operator==(double) const{
return true;
}
};
X g(X x){
return X(0); // X has no default constructor; the value is irrelevant since operator== ignores it
}
Based on this answer, a solution to the generalized version (as a Perl one-liner):
sub g { $_[0] > 0 ? -f($_[0]) : -$_[0] }
Should always flip the variable's sign (a.k.a. state) twice, and should always call f() only once. For those languages not fortunate enough to have Perl's implicit returns, just pop a return in front of the expression and you're good.
This solution works as long as f() does not change the variable's sign. In that case, it returns the original result (for negative numbers) or the result of f(f()) (for positive numbers). An alternative could store the variable's state in even/odd like the answers to the previous question, but then it breaks if f() changes (or can change) the variable's value. A better answer, as has been said, is the lambda solution. Here is a similar but different solution in Perl (uses references, but same concept):
sub g {
if(ref $_[0]) {
return ${$_[0]};
} else {
local $var = f($_[0]);
return \$var;
}
}
Note: This is tested, and does not work. It always returns a reference to a scalar (and it's always the same reference). I've tried a few things, but this code shows the general idea, and though my implementation is wrong and the approach may even be flawed, it's a step in the right direction. With a few tricks, you could even use a string:
use String::Util qw(looks_like_number);
sub g {
return "s" . f($_[0]) if looks_like_number $_[0];
return substr $_[0], 1;
}