Graphviz dot generation too slow at around 5000 nodes and 7000 edges

I'm converting a Graphviz dot file with 5000 nodes and 7000 edges to PDF.
Two hours have passed and it's not finished yet.
Is there a fast way to do it?

There is no single answer.
Add -v to the command line to see where dot is getting stuck (dot -v -Txxx ...).
Try one of the other layout engines (fdp, neato, circo, ...).
The Dot User Guide says: "For completeness, we note that dot also provides access to various parameters which play technical roles in the layout algorithms. These include mclimit, nslimit, nslimit1, remincross and searchsize."
A web search for "mclimit OR maxiter dot" leads to more help.
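For example, those tuning attributes can be set from the command line with -G, so you can experiment without editing the file. The values below are illustrative guesses that trade layout quality for speed, not tuned recommendations (and graph.dot is a stand-in for your file):

dot -v -Gmclimit=0.5 -Gnslimit=4 -Gnslimit1=4 -Tpdf graph.dot -o graph.pdf

Lowering mclimit cuts down the crossing-minimization passes, while nslimit and nslimit1 bound the network-simplex iterations dot uses for coordinate and rank assignment.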

Related

2D Raycasting/Checking if 2 line segments intersect

How would I make a 2D raycast? Also, how would I check if 2 line segments intersect (relatively the same thing in my eyes, though probably different)? I am not using Unity or anything; I am just using plain Python (I can translate from most languages to Python, so I don't really care what language you use) and don't want to use a library, so I can learn. But every article I look at has no actual explanation; it just shows code. I've looked at the GeeksforGeeks one, and that also really only shows code and does not explain what it does. So if someone could explain it, that would be helpful.
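For the segment-intersection half, here is a minimal plain-Python sketch of the standard cross-product (orientation) test - an illustration with made-up names, not code from the articles the asker mentions:

def cross(o, a, b):
    # z-component of (a - o) x (b - o): positive if o->a->b turns left,
    # negative if it turns right, zero if the three points are collinear
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, p3, p4):
    # The segments properly cross iff p1 and p2 lie on opposite sides of
    # the line through p3-p4, AND p3 and p4 lie on opposite sides of the
    # line through p1-p2.
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    if ((d1 > 0 and d2 < 0) or (d1 < 0 and d2 > 0)) and \
       ((d3 > 0 and d4 < 0) or (d3 < 0 and d4 > 0)):
        return True
    # Touching endpoints and collinear overlaps need extra on-segment
    # checks, omitted here for brevity.
    return False

print(segments_intersect((0, 0), (2, 2), (0, 2), (2, 0)))  # True: an X shape

A 2D raycast is then the same test in a loop: treat the ray as a very long segment, intersect it against every wall segment, and keep the nearest hit.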

When adding "--" to animate a slide in RMarkdown, bullet point shifts

I'm writing up a presentation in RMarkdown. When using -- between two lines (to animate the slide), I sometimes get an extra indentation between the bullet and the text.
See the source code below.
---
## Find patterns in random processes
- Simulations are useful to test the properties of randomly generated data
- Since we designed the simulation, we know parameters of the processes that underlie it.
 --
- It is then possible to test various methods to
1. see if they work and verify their assumptions,
2. do power analysis,
3. learn how data is generated
4. etc.
[Screenshots of the rendered slide, before and after the incremental pause, were attached here.]
Just found the answer here:
https://github.com/gnab/remark/wiki/Markdown#incremental-slides
Basically, you shall NOT add a space before the two dashes --.
So the code should be:
---
## Find patterns in random processes
- Simulations are useful to test the properties of randomly generated data
- Since we designed the simulation, we know parameters of the processes that underlie it.
--
- It is then possible to test various methods to
1. see if they work and verify their assumptions,
2. do power analysis,
3. learn how data is generated
4. etc.
I tested it and it now works.

How to find all possible paths that pass through each node once in an undirected graph

I have an adjacency matrix describing an undirected graph and I need to find all possible paths that pass through each and every node once.
My graph has 25 nodes, therefore any path with length < 25 should be discarded.
I'd prefer the input to be an adjacency matrix rather than typing out each connection (the matrix is quite big).
I tried looking into DFS algorithms, but they often require a start and end node, or will display all the paths inline without showing each individual path.
I need my algorithm to look through every possible path, starting from any node and ending wherever it can.
If you have any idea of how I can start this project it'd be greatly appreciated.
I'm cool with C++, Java, js, or Python.
Thank you!
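One way to start (a minimal backtracking sketch in Python, not a vetted answer from the thread): grow a path from every possible start node, only stepping to unvisited neighbors, and record the path whenever all nodes have been used. Note that in an undirected graph each path is found twice, once per direction, and the worst case is exponential, so 25 nodes is only practical if the graph is sparse.

def hamiltonian_paths(adj):
    # adj is an n x n matrix of 0/1; adj[i][j] == 1 means an edge i-j
    n = len(adj)
    paths = []

    def extend(path, visited):
        if len(path) == n:              # every node used exactly once
            paths.append(path[:])
            return
        for nxt in range(n):
            if adj[path[-1]][nxt] and nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                extend(path, visited)
                path.pop()              # backtrack and try the next neighbor
                visited.remove(nxt)

    for start in range(n):              # a path may start at any node
        extend([start], {start})
    return paths

# Tiny check: the path graph 0-1-2 yields [0, 1, 2] and its reverse.
print(hamiltonian_paths([[0, 1, 0],
                         [1, 0, 1],
                         [0, 1, 0]]))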

Can I implement a small subset of Curses in pure C++ (or any similar language) easily?

(I couldn't find anything related to this, as I don't know what keywords to search for.)
I want a simple function: one that prints 3 lines, then erases the 3 lines and replaces them with new ones. If it were a single line, I could just print \r or \b and overwrite it.
How can I do this without a Curses library? There must be some escape codes or something for this.
I found some escape codes to print colored text, so I'm guessing there is something similar for overwriting previous lines.
I want this to run on OS X and Ubuntu at least.
Edit: I found this - http://www.perlmonks.org/?displaytype=displaycode;node_id=575125
Is there a list of ALL such available commands?
(Short answer: yes. See "ANSI escape code" on Wikipedia for a complete list of ANSI sequences. Your terminal may or may not be ANSI, but ANSI sequence support is pretty common - a good starting point at least.)
The commands depend on the terminal you are using - or, these days, on the terminal emulator.
Back in the day there were physical boxes with names such as "VT-100" or "Ontel".
Each implemented whatever set of escape sequence commands it chose.
Lately, of course, we only use emulators: nearly every sort of command-line interface operates in a text window that emulates something or other.
Curses is a library that allows your average programmer to write code to manipulate the terminal without having to know how to code for each of the many different terminals out there. Kind of like printer drivers let you print without having to know the details of any particular printer.
First you need to find out what kind of terminal you are using.
Then you can look up the specific commands.
One possible answer is here.
"ANSI" is a common one, typical of MSDOS.
Or, use curses and be happy for it :-)
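To illustrate (a minimal sketch, assuming an ANSI-capable terminal, which both OS X's Terminal and typical Ubuntu terminals are), the three-line case from the question only needs two sequences: "cursor up" (ESC [ nA) and "erase line" (ESC [ 2K). In Python, since the asker allows any similar language:

import sys
import time

CSI = "\x1b["                  # ANSI Control Sequence Introducer: ESC [

def redraw(lines):
    sys.stdout.write(CSI + "%dA" % len(lines))    # move the cursor up N lines
    for line in lines:
        # erase the whole current line, write the new text, go down one line
        sys.stdout.write(CSI + "2K" + line + "\n")
    sys.stdout.flush()

print("one\ntwo\nthree")
time.sleep(1)
redraw(["ONE", "TWO", "THREE"])   # the three lines are replaced in place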

Function point to kloc ratio as a software metric... the "Name That Tune" metric?

What do you think of using the ratio of function points to lines of code as a metric?
It makes me think of the old game show "Name That Tune". "I can name that tune in three notes!" I can write that functionality in 0.1 klocs! Is this useful?
It would certainly seem to promote library usage, but is that what you want?
I think it's a terrible idea. Just as bad as paying programmers by lines of code that they write.
In general, I prefer concise code over verbose code, but only as long as it still expresses the programmers' intention clearly. Maximizing function points per kloc is going to encourage everyone to write their code as briefly as they possibly can, which goes beyond concise and into cryptic. It will also encourage people to join adjacent lines of code into one line, even if said joining would not otherwise be desirable, just to reduce the number of lines of code. The maximum allowed line length would also become an issue.
KLOC is tolerable if you strictly enforce code standards, kind of like using page requirements for a report: no putting five statements on a single line or removing most of the whitespace from your code.
I guess one way you could decide how effective it is for your environment is to look at several different applications and modules, get a rough estimate of the quality of the code, and compare that to the size of the code. If you can demonstrate that code quality is consistent within your organization, then KLOC isn't a bad metric.
In some ways, you'll face the same battle with any similar metric. If you count feature or function points, or simply features or modules, you'll still want to weight them in some fashion. Ultimately, you'll need some sort of subjective supplement to the objective data you'll collect.
"What do you think of using a metric of function point to lines of code as a metric?"
Don't get the question. The above ratio is -- for a given language and team -- a simple statistical fact. And it tends toward a mean value with a small standard deviation.
There are lots of degrees of freedom: how you count function points, what language you're using, how (collectively) clever the team is. If you don't change those things, the value stays steady.
After a few projects together, you have a solid expectation that 1200 function points will be 12,000 lines of code in your preferred language/framework/team organization.
KSloc / FP is a bare statistical observation. Clearly, there's something else about this that's bothering you. Could you be more specific in your question?
The metric of Function Points to Lines of Code is actually used to generate the language level charts (actually, it is Function Points to Statements) to give an approximate sense of how powerful a programming language is. Here is an example: http://web.cecs.pdx.edu/~timm/dm/functionpoints.html
I wouldn't recommend using that ratio for anything else, except high level approximations like the language level chart.
Promoting library usage is a good thing, but the other thing to keep in mind is that you will lose on the ratio while you are building the libraries, and will only pay it off with dividends of savings over time. Bean-counters won't understand that.
I personally would like to see a Function point to ABC metric ratio -- as I am curious about how the ABC metric (which indicates size and includes complexity as part of the info) would relate - perhaps linear, perhaps exponential, etc... www.softwarerenovation.com/ABCMetric.pdf
All metrics suck. My theory has always been that if you have to have them, then use the easiest thing you can to gather them, be done with it, and get on to important things.
That generally means something along the lines of:
grep -c ";" *.h *.cpp | awk -F: '/:/ {x += $2} END {print x}'
(grep -c prints a per-file count of lines containing a semicolon; the awk stage sums those counts into a crude statement count.)
If you are looking for a "metric" to track code efficiency, don't. If you insist, again try something stupid but easy, like source file size (see the grep command above, without the awk pipe) or McCabe (with a counter program).
