Passing a pointer by reference in C

void foo(structT* P){
    P = P->next;
    return;
}

void func(structT* P){
    foo(P);
    return;
}
func() calls foo(P), passing a pointer, and inside foo() P gets updated. Now, how do I get the updated value back in func()? How do I use pass-by-reference in this case in C?

Without judging whether this is a good idea for your real-world design (a function named foo has, by definition, an unspecified purpose; just note that mutating input parameters is often not a good idea), here is a quick solution.
You cannot pass by reference in C, but you can pass a pointer to a pointer:
void foo(structT** P){
    *P = (**P).next; // or "(*P)->next"
    return;
}

void func(structT* P){
    foo(&P);
    return;
}
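For illustration, here is a minimal, self-contained sketch of this version in use (the structT definition and main() are assumptions added here, not part of the original question):

#include <stdio.h>

typedef struct structT {
    int value;
    struct structT* next;
} structT;

void foo(structT** P) {
    *P = (*P)->next;              // advances the caller's pointer
}

int main(void) {
    structT second = { 2, NULL };
    structT first  = { 1, &second };
    structT* P = &first;

    foo(&P);                      // pass the address of the pointer itself
    printf("%d\n", P->value);     // prints 2: P now points to the second node
    return 0;
}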
Anecdote and warning from http://c2.com/cgi/wiki?ThreeStarProgrammer:
Three Star Programmer
A rating system for C-programmers. The more indirect your pointers are (i.e. the more "*" before your variables), the higher your reputation will be. No-star C-programmers are virtually non-existent, as virtually all non-trivial programs require use of pointers. Most are one-star programmers. In the old times (well, I'm young, so these look like old times to me at least), one would occasionally find a piece of code done by a three-star programmer and shiver with awe.
Some people even claimed they'd seen three-star code with function pointers involved, on more than one level of indirection. Sounded as real as UFOs to me.
Just to be clear: Being called a ThreeStarProgrammer is usually not a compliment.

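An alternative to the extra level of indirection is to return the advanced pointer and let the caller store it: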
structT* foo(structT* P)
{
    if ( P != NULL )
        return P->next;
    else
        return NULL;
}

void func(structT* P){
    structT* P1 = foo(P);
}

Related

Is visibility also a reflection of reordering?

Regarding volatile, I think visibility and reordering are essentially the same thing.
For example:
#include <atomic>

std::atomic<int> g_payLoad{0};
std::atomic<int> g_guard{0};

// thread 0
void foo1()
{
    g_payLoad.store(42, std::memory_order_relaxed);
    g_guard.store(1, std::memory_order_relaxed);
}

// thread 1
void foo2()
{
    int r = g_guard.load(std::memory_order_relaxed);
    if (r)
    {
        r = g_payLoad.load(std::memory_order_relaxed);
    }
}
Since g_guard and g_payLoad are both read and written with relaxed ordering, we might think that the g_guard.load() and g_payLoad.load() in foo2() can be reordered (they load from different addresses, the CPU may speculate and prefetch, etc.). But from another point of view, the load of g_payLoad being reordered before the load of g_guard (and likewise for the other kinds of reordering) is equivalent to foo2 not seeing the change foo1 made to g_payLoad: two different perspectives on the same thing, that's all.
Do I understand it right?
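For comparison, here is a minimal sketch (not part of the original question) of the usual release/acquire pairing, under which seeing g_guard == 1 in foo2() does guarantee that the store to g_payLoad is visible:

#include <atomic>

std::atomic<int> g_payLoad{0};
std::atomic<int> g_guard{0};

// thread 0
void foo1()
{
    g_payLoad.store(42, std::memory_order_relaxed);
    g_guard.store(1, std::memory_order_release);       // publishes the payload
}

// thread 1
void foo2()
{
    if (g_guard.load(std::memory_order_acquire))       // synchronizes-with the release store
    {
        int r = g_payLoad.load(std::memory_order_relaxed); // guaranteed to read 42
        (void)r;
    }
}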

Calculating the number of nodes in a BST using recursion in C++

I'm trying to find the number of nodes in a BST using recursion. Here is my code
struct Node{
    int key;
    struct Node* left;
    struct Node* right;
    Node(){
        int key = 0;
        struct Node* left = nullptr;
        struct Node* right = nullptr;
    }
};
src_root is the address of the root node of the tree.
int BST::countNodes(Node* src_root, int sum){
    if((src_root==root && src_root==nullptr) || src_root==nullptr)
        return 0;
    else if(src_root->left==nullptr || src_root->right==nullptr)
        return sum;
    return countNodes(src_root->left, sum + 1) + countNodes(src_root->right, sum + 1) + 1;
}
However, my code only seems to work if there are 3 nodes. Anything greater than 3 gives a wrong answer. Please help me find out what's wrong with it. Thanks!
It has been a long time since I made anything in C/C++, so there might be some syntax errors.
int BST::countNodes(Node *scr_root)
{
    if (scr_root == nullptr) return 0;
    return 1 + countNodes(scr_root->left) + countNodes(scr_root->right);
}
I think that will do the job.
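A quick way to convince yourself it works for more than three nodes (a sketch; the tree construction below is an assumption, written as a free function since the BST class itself is not shown in the question):

#include <iostream>

struct Node {
    int key;
    Node* left;
    Node* right;
};

int countNodes(Node* root)
{
    if (root == nullptr) return 0;
    return 1 + countNodes(root->left) + countNodes(root->right);
}

int main()
{
    // A five-node tree:
    //        4
    //       / \
    //      2   6
    //     / \
    //    1   3
    Node n1{1, nullptr, nullptr}, n3{3, nullptr, nullptr}, n6{6, nullptr, nullptr};
    Node n2{2, &n1, &n3};
    Node n4{4, &n2, &n6};

    std::cout << countNodes(&n4) << '\n';   // prints 5
    return 0;
}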
You have several logical and structural problems in your implementation. Casperah gave you the "clean" answer that I assume you already found on the web (if you haven't already done that research, you shouldn't have posted your question). Thus, what you're looking for is not someone else's solution, but how to fix your own.
Why do you pass sum down the tree? Lower nodes shouldn't care what the previous count is; it's the parent's job to accumulate the counts from its children. See how that's done in Casperah's answer? Drop the extra parameter from your code; it's merely another source of error.
Your base case has an identically false clause: src_root==root && src_root==nullptr ... if you make a meaningful call, src_root cannot be both root and nullptr.
Why are you comparing against a global value, root? Each call simply gets its own job done and returns. When your call tree crawls back to the original invocation, the one that was called with the root, it simply does its job and returns to the calling program. This should not be a special case.
Your else clause is wrong: it says that if either child is null, you ignore counting the other child altogether and return only the count so far. This guarantees a wrong answer for anything bigger than a root with two leaf children, which is exactly the behaviour you observed.
Fix those items in whatever order you find instructive; the idea is to learn. Note, however, that your final code should look a lot like the answer Casperah provided.

Practical uses for Recursion

Are there any times when it is better to use recursion for a task instead of other methods? By recursion I am referring to:
int factorial(int value)
{
    if (value == 0)
    {
        return 1;
    }
    else
    {
        return value * factorial(value - 1);
    }
}
Well, there are a few reasons I can think of.
Recursion is often easier to understand than a purely iterative solution, for example in the case of recursive-descent parsers.
In compilers with support for tail call optimization, there's no additional overhead to using recursion over iteration, and it often results in fewer lines of code (and, as a result, fewer bugs).
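As an illustration of the second point, the factorial from the question can be rewritten in tail-recursive form (a sketch; whether the call is actually eliminated depends on the compiler and optimization level):

// The recursive call is the last operation, so a compiler that performs
// tail call optimization can turn this into a simple loop (no stack growth).
int factorial_helper(int value, int acc)
{
    if (value == 0)
        return acc;
    return factorial_helper(value - 1, acc * value);
}

int factorial(int value)
{
    return factorial_helper(value, 1);
}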
First of all, your example doesn't make any sense.
The way you wrote it would just lead to an endless loop without ever producing a result.
A "real" function would look more like this:
int factorial(int value)
{
    if (value == 0)
        return 1;
    else
        return value * factorial(value - 1);
}
Of course you could accomplish the same thing with a loop (which might even be better, especially if the function call incurs the penalty of a stack frame). Usually, when people use recursion they do so because it's easier to read (for certain problem domains).
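For example, the same computation as a loop:

// Iterative version: same result, constant stack usage.
int factorial(int value)
{
    int result = 1;
    for (int i = 2; i <= value; ++i)
        result *= i;
    return result;
}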

Does a recursive method increase cyclomatic complexity?

I do not have any programs installed for measuring cyclomatic code complexity at the moment, but I was wondering: does a recursive method increase the complexity?
e.g.
// just a simple C# example to recursively find an int[]
// within a pile of string[]
private int[] extractInts(string[] s)
{
    foreach (string s1 in s)
    {
        if (s1.ints.length < 0)
        {
            extractInts(s1);
        }
        else
        {
            return ints;
        }
    }
}
Thanks.
As far as I understand, no. There is only one linearly independent path to the recursive method in your example, so it wouldn't increase the cyclomatic complexity.
Loops do increase cyclomatic complexity.
A loop can often be rewritten using recursion plus a guard condition.
Even if the recursive call itself would not count strictly as an increment, the guard condition does. This makes the loop and recursion+guard on par.
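To make that comparison concrete, here is a sketch (in C++ rather than the question's C#) of the same summation written both ways; each form has exactly one decision point, so both score the same increment:

#include <vector>
#include <cstddef>

// Loop version: the loop condition is the single decision point.
int sumLoop(const std::vector<int>& v)
{
    int total = 0;
    for (std::size_t i = 0; i < v.size(); ++i)   // +1 to cyclomatic complexity
        total += v[i];
    return total;
}

// Recursive version: the guard is the single decision point.
int sumRecursive(const std::vector<int>& v, std::size_t i = 0)
{
    if (i == v.size())                           // +1 to cyclomatic complexity
        return 0;
    return v[i] + sumRecursive(v, i + 1);
}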

Reentrancy and recursion

Would it be a true statement to say that every recursive function needs to be reentrant?
If by reentrant you mean that a further call to the function may begin before a previous one has ended, then yes, all recursive functions happen to be reentrant, because recursion implies reentrance in that sense.
However, "reentrant" is sometimes used as a synonym for "thread safe", which is introduces a lot of other requirements, and in that sense, the answer is no. In single-threaded recursion, we have the special case that only one "instance" of the function will be executing at a time, because the "idle" instances on the stack are each waiting for their "child" instance to return.
No. I recall a factorial function that works with static (global) variables. Having static (global) variables goes against being reentrant, and yet the function is still recursive.
int i;   /* global state: this is what makes the function non-reentrant */

int factorial()
{
    if (i == 0)
        return 1;
    i = i - 1;
    return (i + 1) * factorial();   /* multiply by the value i had on entry */
}
This function is recursive and it's non-reentrant.
'Reentrant' normally means that the function can be entered more than once, simultaneously, by two different threads.
To be reentrant, it has to do things like protect/lock access to static state.
A recursive function (on the other hand) doesn't need to protect/lock access to static state, because only one call is actually executing at any given time; the outer calls sit suspended on the stack until their inner calls return.
So: no.
Not at all.
Why shouldn't a recursive function be able to have static data, for example? Should it not be able to lock on critical sections?
Consider:
#include <semaphore.h>

sem_t mutex;     // assumed to be initialized elsewhere, e.g. sem_init(&mutex, 0, 1)
int calls = 0;

int fib(int n)
{
    sem_wait(&mutex);  // lock for critical section - not reentrant per def.
    calls++;           // global variable - not reentrant per def.
    sem_post(&mutex);

    if (n==1 || n==0)
        return 1;
    else
        return fib(n-1) + fib(n-2);
}
This is not to say that writing a recursive, non-reentrant function like this is a good idea, that it is a common pattern, or that it is recommended in any way. But it is possible.
