I'm trying to learn Forth directly on an embedded system, using Starting Forth by Leo Brodie as a text. The Forth version I'm using is 328eForth (a port of eForth to the ATmega328), which I've flashed onto an Arduino Uno.
It appears that the DO LOOP words are not implemented in 328eForth, which puts a kink in my learning with Brodie. But looking at the dictionary using WORDS shows that a series of looping words do exist, e.g. BEGIN UNTIL WHILE FOR NEXT AFT EXIT AGAIN REPEAT, amongst others.
My questions are as follows:
Q1 Why was DO LOOP omitted from 328eForth?
Q2 Can DO LOOP be implemented in terms of other existing words? If so, how, and if not, why? (I guess there must be a very good reason for the omission of DO LOOP...)
Q3 Can you give some commented examples of the 328eForth looping words?
Q1: A choice was made for a different loop construct.
Q2: The words FOR and NEXT perform a similar function that just counts down to 0 and runs exactly the specified number of times, including zero.
The ( n2 n1 -- ) DO ... LOOP always runs at least once, which requires additional (mental) bookkeeping. People have been complaining
about that as long back as I can remember.
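For instance, where Brodie writes 10 0 DO I . LOOP to print the digits 0 through 9, FOR ... NEXT can stand in for it. A minimal sketch, assuming the classic eForth behaviour where n FOR ... NEXT runs the body n+1 times with the return-stack index (read with R@) going from n down to 0:

\ Brodie:  10 0 DO I . LOOP   prints 0 1 2 ... 9
: DIGITS-DOWN ( -- )  9 FOR R@ . NEXT ;      \ prints 9 8 ... 1 0
: DIGITS-UP   ( -- )  9 FOR 9 R@ - . NEXT ;  \ subtract to count upward: 0 1 ... 9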
Q3: The 328eForth documentation ForthArduino_1.pdf contains some examples.
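For flavour, here are minimal commented sketches of the other two loop shapes (assuming the usual eForth words 2DUP, 2DROP and > are available):

\ BEGIN ... UNTIL: post-tested, so the body always runs at least once
: COUNT-UP ( n -- )   0 BEGIN DUP . 1+ 2DUP = UNTIL 2DROP ;
5 COUNT-UP    \ prints 0 1 2 3 4  (n must be at least 1 here)

\ BEGIN ... WHILE ... REPEAT: pre-tested, so the body can run zero times
: COUNT-UP2 ( n -- )  0 BEGIN 2DUP > WHILE DUP . 1+ REPEAT 2DROP ;
0 COUNT-UP2   \ prints nothing: the condition fails immediately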
Edit: added some exposition to Q2.
I'm writing R code that calls C++, and the C++ functions use a lot of parallel computing based on OpenMP. This is my first code using OpenMP, and what I see is that even when I set the same C++ random seed, the code never gives the same results.
I read a lot of posts here where it seems that this is an issue with OpenMP, but they are all old (between 12 and 5 years).
I want to know if there are solutions now, and whether there are published articles which explain this problem and/or possible solutions.
Thanks
You need to read up on parallel random number generation. This is not an OpenMP problem, but one that will afflict any use of random numbers in a parallel code.
Start with
"Parallel Random Numbers: As Easy as 1, 2, 3" by Salmon et al.
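A minimal C++ sketch of the underlying idea: key each element's random stream to the loop index, not to the thread, so the result is independent of how iterations are scheduled. (Re-seeding std::mt19937 per element is a crude stand-in for the counter-based generators that paper describes, but it shows the principle.)

#include <cstdio>
#include <random>
#include <vector>

int main() {
    const unsigned base_seed = 42;   // fixed base seed (e.g. passed in from R)
    const int n = 8;
    std::vector<double> out(n);

    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        // Seed from (base_seed, i), never from the thread id or a shared
        // engine, so element i gets the same value however work is scheduled.
        std::mt19937 gen(base_seed + static_cast<unsigned>(i));
        std::uniform_real_distribution<double> dist(0.0, 1.0);
        out[i] = dist(gen);
    }

    for (double x : out) std::printf("%f\n", x);   // identical on every run
    return 0;
}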
Alright, so one of my friends challenged me to get this done, but I just can't get very far....
What he asked me to do is to make a program that shows and counts ALL the possible ways to break an entered number into parts that sum back to it.
Example for 5:
1+1+1+1+1
2+1+1+1
3+1+1
4+1
5
3+2
2+2+1
I would like the program to be written in either C++ or some pseudocode; I wouldn't mind either.
Thanks in advance to you all!
Edit: Not a duplicate. I requested a solution in C++, and the other one is in Python. Also, my question asks for ALL possible groups of parts that, added together, return the initial number.
For compositions with nonzero parts (imagine boolean separators in an array of n ones), there are
2^(n-1)
This count includes both 2 + 3 and 3 + 2 as distinct sums.
If you allow 0 as a part, there are infinitely many.
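If instead each group of parts should appear only once, as in the example list for 5 (seven lines), here is a minimal C++ sketch (names are mine) that prints every partition with its parts in non-increasing order and counts them:

#include <algorithm>
#include <iostream>
#include <vector>

// Print each partition of n with non-increasing parts, so 3+2 and 2+3
// are produced only once; 'count' accumulates the total printed.
void partitions(int n, int max_part, std::vector<int>& parts, int& count) {
    if (n == 0) {                            // nothing left: print this partition
        ++count;
        for (std::size_t i = 0; i < parts.size(); ++i)
            std::cout << (i ? "+" : "") << parts[i];
        std::cout << '\n';
        return;
    }
    for (int p = std::min(n, max_part); p >= 1; --p) {
        parts.push_back(p);
        partitions(n - p, p, parts, count);  // the next part may not exceed p
        parts.pop_back();
    }
}

int main() {
    int n;
    std::cin >> n;                           // e.g. 5
    std::vector<int> parts;
    int count = 0;
    partitions(n, n, parts, count);
    std::cout << count << " partitions\n";   // prints "7 partitions" for n = 5
}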
I have a large data set with more than 1 million entries. If I run scripts on it, it sometimes takes a while until I get an output. Sometimes there seems to be no output whatsoever, even if I let it run for hours. Is there a way to track the progress of the computation (or at least to see that it is not stuck)?
1. Start small
Write your analysis script and then test it using trivially small amounts of data. Gradually scale up and see how the runtime increases. The microbenchmark package is great for this. In the example below, I compare the amount of time it takes to run the same function on three different-sized chunks of data.
library(microbenchmark)

long_running_function <- function(x) {
  for (i in 1:nrow(x)) {
    Sys.sleep(0.01)   # stands in for 0.01 s of work per row
  }
}

microbenchmark(long_running_function(mtcars[1:5, ]),
               long_running_function(mtcars[1:10, ]),
               long_running_function(mtcars[1:15, ]))
2. Look for functions that provide progress bars
I'm not sure what kind of analysis you're performing, but some packages already have this functionality. For example, ranger gives you more progress updates than the equivalent randomForest functions.
3. Write your own progress updates
I regularly add print() or cat() statements to large code blocks to tell me when R has finished running a particular part of my analysis. Functions like txtProgressBar() let you add your own progress bars as well, as sketched below.
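A minimal sketch of that last option, using base R's txtProgressBar() (Sys.sleep() stands in for one chunk of real work):

n <- 100
pb <- txtProgressBar(min = 0, max = n, style = 3)   # style 3: bar plus percent done
for (i in seq_len(n)) {
  Sys.sleep(0.05)             # placeholder for one unit of the real analysis
  setTxtProgressBar(pb, i)    # update the bar after each iteration
}
close(pb)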
My question is rather general and not quite specific to Wikipedia only. I would like to know: is there a way to automate the generation and selection of search results? To give an example of what I intend:
Let's say I'd like to write articles about American food, and I'd like to read information such as ingredients, texture, cuisine (country-wise), preparation methods, etc. about approximately 500 different American foods. Let's say these are all available on Wikipedia too, and I have an Excel sheet with the names of these dishes and columns specifying their properties. But I don't want to manually look up these dishes/food items; can I automate this process? I am looking for some general guidance, some open-source links, some pseudo-code or an algorithmic approach to this problem. Any help is appreciated.
Thanks.
P.S.: It'd be great if the logic had some links to help in carrying it out using R, since the other aspects of my project have already been built in R. Also, I'd like to broaden my searches to include other major information-gathering sites/search engines.
You can do it relatively quickly using the WikipediR package:
require(WikipediR)

phrs <- c("car", "house")
pgs <- vector("list", length(phrs))   # pre-allocate the result list

for (j in seq_along(phrs)) {
  # page_content() returns a parsed list, so store it with [[ ]]
  pgs[[j]] <- page_content("en", "wikipedia",
                           page_name = phrs[j], as_wikitext = TRUE)
}
The solution rather optimistically assumes that your food names correspond to the page names on Wikipedia. Most probably this won't be the case for all the items. You may consider using pages_in_category in order to source more pages at once. I would first match my list against pages_in_category for a given category (foods) and, if the number of errors is insignificant, proceed to matching the data.
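A rough sketch of that matching step (the category name "American cuisine" and the title-extraction line are assumptions; inspect what pages_in_category actually returns and adjust):

require(WikipediR)

dishes <- c("Hamburger", "Clam chowder")          # names from your spreadsheet
cat_pages <- pages_in_category("en", "wikipedia",
                               categories = "American cuisine",
                               limit = 500, clean_response = TRUE)
titles <- vapply(cat_pages, function(p) p$title,  # assumed field name
                 character(1))
matched   <- intersect(dishes, titles)
unmatched <- setdiff(dishes, titles)              # look these up manually if few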
I have a species abundance dataset with quite a few zeros in it, and even when I set trymax = 1000 for metaMDS(), the program is unable to find a stable solution for the stress. I have already tried combining data (collapsing multiple years together to reduce the number of zeros) and I can't do any more. I was just wondering if anyone knows: is it scientifically valid to pick what R gives me at the end (the lowest of the 1000 solutions), or should I not be using NMDS because it cannot find a stable spot? There seems to be very little information about this on the internet.
One explanation for this is that you are trying to use too few dimensions for the mapping. I presume you are using the default k = 2? If so, try k = 3 and compare the stress of the best k = 3 solution with the best solution you got from the 1000 tries at k = 2.
I would be a little concerned to take one solution out of 1000 just because it had the best/lowest stress.
You could also try 1000 more random starts and see if the run converges given more tries. If you saved the output from metaMDS(), you can supply that object to another call to metaMDS() via the previous.best argument. It will then do trymax further random starts, but it compares any lower-stress solution with the previous best and converges if it finds one similar to it, rather than having to find two similar low-stress solutions within the 1000 starts. A minimal sketch follows.
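Here is that workflow in outline (dune is vegan's example community matrix; substitute your own abundance data):

library(vegan)
data(dune)   # stand-in for your species abundance matrix

sol2 <- metaMDS(dune, k = 2, trymax = 1000)   # first batch of random starts
sol2 <- metaMDS(dune, k = 2, trymax = 1000,   # continue from the previous best
                previous.best = sol2)

sol3 <- metaMDS(dune, k = 3, trymax = 1000)   # try one more dimension
sol2$stress                                   # compare stress across k
sol3$stress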