Recursion in Labview - what is causing it to hang? [closed] - recursion

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
This question was caused by a typo or a problem that can no longer be reproduced. While similar questions may be on-topic here, this one was resolved in a way less likely to help future readers.
Closed 1 year ago.
Background: I've been trying to learn LabVIEW and wanted to translate an English string to Morse code. As per another post's suggestions I've solved this with a for loop, but I was wondering why my recursive approach wasn't working.

My idea was to split the string into its first character and the rest, then check that character against a case selector that maps each letter to the desired Morse code value. The rest of the string then becomes the input for the VI to run again, producing the output that collects all the strings. See the attached code: in the image the code is running in highlight execution mode. The true case of the outer case structure (for when input1 is empty) merely returns the empty string constant. The recursive call (VI) is wired as shown here. Ignore the output; it's what I get when I run the code with nothing as an input (so the true case in the first case structure fires).

I'm just confused as to why my program runs indefinitely, which makes me suspect infinite recursion, but I get no such error (such as maximum depth reached). I'm honestly just curious how I could solve this problem recursively, and I think my true case (when the string is empty) may play some role in it, so for completeness I've included it here. Thanks for any help!

I replicated your code as seen above, and it seemed to work fine, completing its loop without concern.
The only thing I can think of is that this is not actually a code issue, but an operator issue. When you are running your code, are you using 'Run' or 'Run Continuously'?
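LabVIEW aside, the recursive scheme the question describes (split off the first character, look it up in a case selector, recurse on the rest, and stop on the empty string) can be sketched in text form. Here it is in Python, with a deliberately partial Morse lookup table as an illustrative assumption:

```python
# Sketch of the recursive scheme from the question, in Python.
# MORSE is an assumed partial lookup table, not from the original post.
MORSE = {"a": ".-", "b": "-...", "c": "-.-.", "o": "---", "s": "..."}

def to_morse(text):
    # Base case: empty input returns the empty string (the "true case").
    if text == "":
        return ""
    head, rest = text[0], text[1:]          # split first item / remainder
    code = MORSE.get(head.lower(), "?")     # case-selector lookup
    return code + " " + to_morse(rest)      # recursive call on the rest

print(to_morse("sos"))
```

Because the base case returns immediately on the empty string, the recursion terminates after exactly one call per character; if the equivalent LabVIEW VI never terminates, the wiring of the empty-string case (or running with 'Run Continuously') is the place to look.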

Is it possible to modify the code of a predefined learner in mlr3? [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Edit the question to include desired behavior, a specific problem or error, and the shortest code necessary to reproduce the problem. This will help others answer the question.
Closed 1 year ago.
I'm interested in locally removing these two dependencies in the "regr.svm" learner code:
Dependency 1: Cost
Dependency 2: Epsilon
I have tried, without success, with the trace() function:
trace(LearnerRegrSVM.R, edit = TRUE)
I opened a related issue on GitHub a few days ago, but I have not received a response.
{mlr3} learners are wrappers around the original implementations and are not editable by the user.
If something is wrong, you are always welcome to submit a PR in the respective GitHub repo and we'll have a look.
If you want to modify code quickly, you can always fork the repo, make the adjustments yourself, and use your own fork.
(Asking on Stackoverflow should usually include some code, otherwise people will flag to close the question.)

Is recursion a Bad Practice in general? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 6 years ago.
I always thought that things which make your code hard to follow while being avoidable are considered bad practice. Recursion seems to be one of those things (not to mention the problems it can cause, like stack overflows), so I was a bit surprised when we had to learn to work seriously with it during a programming course (since it occasionally results in shorter code). And it seems to me there is disagreement about it among the professors, too...
People have already asked about this for specific languages (like Python, where a comment compared it to a goto statement), and mostly for specific problems, but I am interested in the general case:
Is recursion considered bad practice in modern programming? When should I avoid it, and are there any circumstances in which I can't?
Related questions I found:
Is recursion a feature in and of itself? (does not discuss whether it is good or bad)
Is recursion ever faster than looping? (answer describes that recursion can result in improvements in functional languages, but is expensive in others), also other questions discussing the performance
Recursive programming is not a bad practice. It is a tool in your toolbox, and like any tool, bad things happen when it's the only tool used, or when it's used out of its proper context.
When do you use recursion? It's good when you have a tree dataset that you need to perform the same logic upon. For example: you have a tree of string elements and you wish to calculate the total length of all strings down one branch. You'd define a method to calculate the length of an element, then see if the element has a sibling, and if it does, call the method again, passing the sibling - and the process starts over.
The most common pitfall is a stack overflow, when the tree is so large it can't be handled all at once. So you would implement checks to ensure this never happens, such as breaking the structure down into manageable pieces, or tracking the number of levels you traverse in your function and bailing once that number is exceeded.
You should avoid recursion when your dataset is not tree-like. The only time I can think of when you actually can't avoid recursion is in programming languages that do not support looping (such as Haskell).
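The answer's example (total length of all strings in a tree) together with its depth-guard advice can be sketched as follows; the `Node` shape is an illustrative assumption:

```python
# Toy sketch of the answer's example: total length of strings in a tree,
# with a depth guard against runaway recursion. Node shape is assumed.
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def total_length(node, depth=0, max_depth=1000):
    if depth > max_depth:                 # the bail-out the answer suggests
        raise RecursionError("depth limit exceeded")
    total = len(node.value)
    for child in node.children:           # same logic on each subtree
        total += total_length(child, depth + 1, max_depth)
    return total

tree = Node("root", [Node("ab"), Node("cde", [Node("f")])])
print(total_length(tree))  # 4 + 2 + 3 + 1 = 10
```

The depth parameter threads the current level through each call, so the guard costs one comparison per node rather than a global variable.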

Get Branch and Bound (BAB) tree structure [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 4 years ago.
I want to obtain the B&B tree structure, like the one shown.
I have tried R, MATLAB, and CPLEX, but cannot figure it out.
In C++ you can retrieve the Branch-and-Bound (B&B) information via a callback. In simple terms, a callback is an instruction declared to CPLEX before optimisation; whenever its condition is met during the B&B, CPLEX will stop and enter the callback to execute your code.
As you can see, this is exactly what you need. Most people use callbacks to impose cuts or valid inequalities as a workaround to avoid setting an exponential number of constraints a priori, adding them only on the go. But nothing stops you from declaring a very general condition that is satisfied at every node of the tree, extracting all the information you need there, and constructing the tree from that info. You only have to read the CPLEX documentation to determine which callback is most suitable for your problem and needs.
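To make the per-node bookkeeping concrete without committing to a specific CPLEX callback API, here is a toy branch-and-bound in Python (an illustrative 0/1 knapsack with a deliberately loose bound, not CPLEX code) that records a parent pointer for every node it explores; a node callback would let you collect the same kind of records from CPLEX's own tree:

```python
# Toy branch-and-bound that records its node tree: each record is
# (node_id, parent_id). Problem data and bound are illustrative assumptions.
values, weights, capacity = [6, 5, 4], [3, 2, 3], 5

tree, best = [], [0]                      # node records, incumbent value

def bound(i, value):
    return value + sum(values[i:])        # optimistic: take everything left

def branch(i=0, value=0, weight=0, parent=-1):
    node_id = len(tree)
    tree.append((node_id, parent))        # what a callback would record
    best[0] = max(best[0], value)
    if i == len(values) or bound(i, value) <= best[0]:
        return                            # leaf, or pruned by the bound
    if weight + weights[i] <= capacity:   # branch: take item i
        branch(i + 1, value + values[i], weight + weights[i], node_id)
    branch(i + 1, value, weight, node_id) # branch: skip item i

branch()
print(best[0], len(tree))
```

The `(node_id, parent_id)` pairs are enough to reconstruct and draw the explored tree afterwards; in a real CPLEX run you would additionally store the node's bound and branching decision.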
One is glad to be of service

Optimizing an SBCL Application Program for Speed [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 8 years ago.
I've just finished and tested the core of a common lisp application and want to optimize it for speed now. It works with SBCL and makes use of CLOS.
Could someone outline the way to optimize my code for speed?
Where will I have to start? Will I just have to provide some global declaration or will I have to blow up my code with type information for each binding? Is there a way to find out which parts of my code could be compiled better with further type information?
The program makes heavy use of a single one-dimensional array 0..119 in which it shifts CLOS instances around.
Thank you in advance!
It's not great to optimize in a vacuum, because there's no limit to the ugliness you can introduce to make things some fraction of a percent faster.
If it's not fast enough, it's helpful to define what success means so you know when to stop.
With that in mind, a good first pass is to run your project under the profiler (sb-sprof) to get an idea of where the time is spent. If it's in generic arithmetic, it can help to judiciously use modular arithmetic in inner loops. If it's in CLOS stuff, it might possibly help to switch to structures for key bits of data. Whatever's the most wasteful will direct where to spend your effort in optimization.
I think it could be helpful if, after profiling, you post a followup question along the lines of "A lot of my program's time is spent in <foo>, how do I make it faster?"

What is "large margin optimization"? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
Questions concerning problems with code you've written must describe the specific problem — and include valid code to reproduce it — in the question itself. See SSCCE.org for guidance.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Could someone please explain what large margin optimization is, in a machine learning context? Everything I've found is extremely complicated and I don't know where to start.
Thanks in advance.
In classification, the largest-margin problem is simply a search for the separating boundary (a hyperplane in most cases) that maximizes the margin around it (the minimum distance to the objects of each class).
In the simple case of two-dimensional data, you can think of it as a search for a line that correctly separates elements of one class from the other, and at the same time maximizes the distance to the closest points from both classes. The following image from Wikipedia shows such a separating line found using a Support Vector Machine:
This geometric concept is very important, as it makes the solution unique: if we simply searched for any line that separates our data, there would be infinitely many such solutions and no principled way to choose one. The largest-margin criterion tells us exactly which line we want, and as a result, the optimization performed for this problem is generally repeatable (and, as numerous experiments have shown, very effective).
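The margin itself is easy to compute numerically: for a separating hyperplane w·x + b = 0, the distance of a point to it is |w·x + b| / ||w||, and the margin is the smallest such distance over the data. A minimal sketch, with an assumed toy line and four assumed points (two per class):

```python
# Minimal numeric illustration of the margin: smallest distance of any
# point to the line w.x + b = 0. Line and points are assumed toy data.
import math

w, b = (1.0, 1.0), -3.0                      # assumed line: x + y = 3
points = [(1, 1), (0, 1), (3, 2), (4, 3)]    # assumed samples, two per class

def distance(p):
    # Point-to-hyperplane distance: |w.p + b| / ||w||
    return abs(w[0] * p[0] + w[1] * p[1] + b) / math.hypot(*w)

margin = min(distance(p) for p in points)
print(round(margin, 4))  # 1/sqrt(2) ~= 0.7071
```

An SVM searches over w and b to make this minimum distance as large as possible, which is exactly the "largest margin" being optimized.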