I'm learning about metastability and using a 2-FF synchronizer, and I'm trying to figure out what it means.
Almost every example I find is about a button click causing the problem, which the 2-FF synchronizer then solves.
Is there any other example Verilog code that I can test on my own computer?
I think it would be more understandable to compare the 1-FF and 2-FF results.
Thanks all.
Metastability is the result of analog behavior. You cannot simulate metastability problems with a digital event-driven simulator. You can only verify that the solution doesn't cause other functional issues. CDC/RDC tools can help statically (i.e., without dynamic simulation) catch design flaws where metastability might arise and make sure they have been properly corrected.
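Even though you can't observe metastability in an RTL simulation, you can get a feel for why the second flop helps from the standard synchronizer MTBF model, MTBF = e^(t_met/tau) / (T0 * f_clk * f_data): the second flop adds roughly one clock period of resolution time t_met, which multiplies the MTBF by an enormous factor. Here is a quick back-of-the-envelope sketch in Python; the flip-flop constants tau and T0 below are made-up illustrative numbers, not from any datasheet:

```python
import math

# Illustrative (made-up) flip-flop metastability parameters -- assumptions,
# not datasheet values.
tau = 50e-12      # resolution time constant (s)
t0 = 100e-12      # metastability window constant (s)
f_clk = 100e6     # destination clock frequency (Hz)
f_data = 10e6     # average toggle rate of the asynchronous input (Hz)
t_clk = 1.0 / f_clk

def mtbf(t_met):
    """MTBF = exp(t_met / tau) / (t0 * f_clk * f_data)."""
    return math.exp(t_met / tau) / (t0 * f_clk * f_data)

# 1-FF: downstream logic samples the output after roughly one clock period
# minus some setup/routing margin (assume 2 ns of margin here).
margin = 2e-9
print("1-FF MTBF: %.3e s" % mtbf(t_clk - margin))

# 2-FF: the second flop gives almost one extra clock period to resolve.
print("2-FF MTBF: %.3e s" % mtbf(2 * t_clk - margin))
```

The exact numbers are meaningless, but the ratio between the two results shows why the extra stage is worth one cycle of latency.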
I am creating this new topic because I am using the OpenMDAO platform, and more specifically its design-of-experiments (DOE) option. I would like to know if there is a proper way to interrupt and stop the computations if a condition is met in my program.
I have already used OpenMDAO optimizers to study and solve some problems, and to stop the computations I used to raise an Exception. This strategy seems to work for optimizers, but not so much when I am using the LatinHypercubeGenerator driver: it is as if the OpenMDAO program keeps trying to compute the points even if an Exception or RuntimeError is raised within the OpenMDAO explicit component's "compute" function.
In that respect I am wondering if there is a way to kill OpenMDAO during calculations. I tried to check if an OpenMDAO built-in attribute or method could do the job, but I have not found anything.
Does anyone know how to stop OpenMDAO DOE computations?
Many thanks in advance for any advice or help.
As of OpenMDAO V3.18, there is no way to add a stopping condition to the DOE driver. You mention raising an exception to achieve this with other optimizers. That won't work in general either, since some drivers will intentionally catch errors like AnalysisError, react, and attempt to keep running the optimization.
You can look at the run code of the driver, where a for loop iterates over the cases and some try/except blocks record the success or failure of each one.
My suggestion for achieving what you want would be to copy the driver code into your model directory and make your own custom driver. You can add whatever termination condition you like, either based on the results of a single case or on some statistical analysis of the cases run so far (a rough sketch of this follows below).
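For illustration only, here is a rough, untested sketch of that idea. The internal names used below (_run_case, _designvars, _problem(), options['generator']) are assumptions based on the DOE driver source you would be copying, and may differ between OpenMDAO versions; the stop_condition argument is something added here, not part of the library.

```python
import openmdao.api as om

class EarlyStopDOEDriver(om.DOEDriver):
    """DOEDriver variant that bails out once a user-supplied condition is met.

    The loop mirrors the for loop in the stock DOEDriver.run(); the internal
    names used here are assumptions and should be checked against the driver
    source for your OpenMDAO version.
    """

    def __init__(self, generator=None, stop_condition=None, **kwargs):
        super().__init__(generator=generator, **kwargs)
        # stop_condition: callable taking the Problem and returning True when
        # the DOE should terminate early (hypothetical argument).
        self._stop_condition = stop_condition

    def run(self):
        problem = self._problem()
        self.iter_count = 0

        # Iterate over the cases produced by the attached generator,
        # the same way the stock serial driver does.
        for case in self.options['generator'](self._designvars, problem.model):
            self._run_case(case)
            self.iter_count += 1

            # Extra termination check on top of the stock loop.
            if self._stop_condition is not None and self._stop_condition(problem):
                break

        return False
```

You would then attach it in place of the stock driver, for example prob.driver = EarlyStopDOEDriver(om.LatinHypercubeGenerator(samples=50), stop_condition=my_check), where my_check is your own function inspecting the problem after each case.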
If you come up with a clean way of doing it, you can always submit a POEM and/or a pull request to propose adding your new functionality to the mainline of OpenMDAO.
I am currently doing code testing in SDAccel 2018.2. When transferring data between the host and the device, I use a custom data structure. The current results show that the software emulation is completely correct, but with the same code in the hardware emulation environment the results are completely wrong, and in some cases the data on the device side is not read at all. (I think the host-side code can be ruled out, because there is no problem in software emulation.) Since the emulation doesn't give any hints about errors, I'm not sure where the problem is. I even tried just doing data in and data out, but still couldn't get the correct result. Can anyone give some advice?
I would like to implement in GameMaker exactly the same thing as in this article: http://www.redblobgames.com/articles/visibility/.
The code for it is available there in different languages, but I can't figure out how to effectively port it to GML. Every raycasting solution I tried leads to fps completely dying.
Could someone with more knowledge than me help?
I have encountered this problem too; it mainly stems from GameMaker's execution speed.
Check out the GM tech blog post on this here.
Also, this will probably work best as a shader, since shaders run faster than object step events.
Good luck!
I've got a bit of an issue. I'm writing and continually developing an ASP.NET MVC application. The problem is, every time I update one small part of our site (not even anything to do with our database), it seems to break about four other things in other parts of the site. I've been doing my best to anticipate this, but I know there are better ways to test, and I'm wondering what the general consensus is on best practices?
Thanks!
In short (a not very organized answer!):
Have a plan.
Know what the change is.
Document what you are going to test and why.
Maintain a regression test suite to execute after all major changes.
Keep improving your test cases and adding the ones you missed (there should be a test case associated with every bug, so the same bug doesn't get reintroduced by a regression; see the small sketch at the end of this answer).
And like you said, have a proper structure, ensure process implementation, have better code reviews (catch the bugs before they get submitted), and keep reading a lot about all this.
This answer is open for editing to make it better.
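To make the "a test case for every bug" point concrete, here is a minimal, hypothetical regression-test sketch. It's shown in Python with pytest purely for brevity; in an ASP.NET MVC project the same idea would live in an xUnit/NUnit/MSTest project, with one test named after each bug ticket. The function and bug numbers below are invented for illustration.

```python
# test_regressions.py -- hypothetical regression suite: one test per fixed bug.
# slugify() is a stand-in for whatever function the bug was filed against.

def slugify(title: str) -> str:
    """Turn a page title into a URL slug (toy implementation)."""
    return "-".join(title.lower().split())

def test_bug_1234_empty_title_gives_empty_slug():
    # Bug #1234 (hypothetical): slugify("") used to raise instead of returning "".
    assert slugify("") == ""

def test_bug_1301_multiple_spaces_collapse():
    # Bug #1301 (hypothetical): "Hello   World" produced "hello---world".
    assert slugify("Hello   World") == "hello-world"
```

Running the whole file after every change is what catches the "fix one thing, break four others" situations early.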
I have two questions:
Q1. Is there a more efficient way to handle the error situation in MPI, other than checkpoint/rollback? I see that if a node "dies", the program halts abruptly. Is there any way to go ahead with the execution after a node dies? (No issue if it comes at the cost of accuracy.)
Q2. I read in "http://stackoverflow.com/questions/144309/what-is-the-best-mpi-implementation" that OpenMPI has better fault tolerance and that MPICH2 has recently come up with similar features. Does anybody know what they are and how to use them? Is it a "mode"? Can they help in the situation stated in Q1?
Kindly reply. Thank you.
MPI implementations have had the ability to continue after an error for a while. The default is to die (that is, the default error handler is MPI_ERRORS_ARE_FATAL), but that can be changed (e.g., see the discussion here). The standard doesn't currently offer much beyond that, though; it's hard to recover and continue after such an error. If your program is sufficiently simple, some sort of master-worker setup, it may be possible to continue this way.
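As a small illustration of switching the error handler, here is a sketch using mpi4py (chosen just to keep the example short; in C the direct equivalents are MPI_Comm_set_errhandler and MPI_ERRORS_RETURN). How much you can actually do after catching the error, and whether a call involving a dead rank returns at all, is implementation-dependent:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Replace the MPI_ERRORS_ARE_FATAL behavior so that a failing MPI call
# raises an exception instead of aborting the whole job.
comm.Set_errhandler(MPI.ERRORS_RETURN)

try:
    # A communication that might fail if the peer rank has died.
    data = comm.recv(source=1, tag=0)
except MPI.Exception as err:
    # With ERRORS_RETURN we get a chance to react, e.g. drop the dead
    # worker from our bookkeeping, rather than being killed outright.
    print("rank %d: MPI error class %d, continuing without that worker"
          % (comm.Get_rank(), err.Get_error_class()))
    data = None
```

In a master-worker program, the master could use this pattern to strike a failed worker off its list and redistribute its work, at some cost in accuracy or completeness, which matches the caveat in Q1.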
The MPI Forum is currently working on what will become MPI-3, and error handling and fault tolerance will be an important component of the new standard (there's a working group dedicated to the topic). Until that work is complete, however, the only way to get stronger fault tolerance out of MPI is to use earlier, nonstandard extensions. FT-MPI was a project that developed a very robust MPI, but unfortunately it's based on MPI 1.2, a very early version of the standard. The claim here is that they're now working with OpenMPI, but I don't know what's become of that. There's MPICH-V, based on MPI-2, but that's more checkpoint/restart-based than what I think you're looking for.
Updated to add: The fault tolerance didn't make it into MPI-3, but the working group continues its work and the expectation is that something will result from that before too long.