Bubblesort in LabVIEW formula node

I'm trying to create a histogram of an image. My idea was to first bubble-sort the array of pixels so every value is ordered from low to high.
Then it's easier to count how many times a specific pixel value appears, and later I can put the counts in a graph.
But it always gives an error that I don't understand.
I also want to do everything with the formula node instead of just blocks.
Visual:
http://i.stack.imgur.com/ZlmW2.png
Error:
http://i.stack.imgur.com/91TbS.png

In your code, numbers is a scalar, not an array.
Besides that, the formula node does not maintain state, so you'll need a feedback node to get history. Is there any reason why you want to use the formula node instead of native LabVIEW code?

You need to remove the two nested LabVIEW for loops: you are already iterating through your array inside the formula node, so you don't need to do it with the loops as well.
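For reference, the sorting logic itself would look like this in the formula node's C-like syntax. This is only a sketch: a and n are placeholder names of mine, and since the node's input terminals carry scalars, getting the pixel array in and out is exactly the part that is easier in native LabVIEW, as suggested above.

int32 i, j, temp;
for (i = 0; i < n - 1; i++) {
    for (j = 0; j < n - 1 - i; j++) {
        if (a[j] > a[j+1]) {
            temp = a[j];      /* swap out-of-order neighbours */
            a[j] = a[j+1];
            a[j+1] = temp;
        }
    }
}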

Related

Can a QXYSeries* object have its points replaced with QQueue<QPointF>?

I have been following the QML Oscilloscope example to create a dynamically updated graph of points. So far I have managed to update the points by passing a QVector<QPointF> points into the replace() function on my QXYSeries * xySeries object:
xySeries->replace(points);
However, I would like to use a QQueue<QPointF> if possible, so I can update the values at the beginning of the queue and remove old ones from the end of the queue faster than with the vector. I want to avoid using append() since it's less efficient.
Does anyone know if this is possible?
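For what it's worth, QQueue<T> publicly inherits QList<T>, and in Qt 6 QVector is an alias of QList, so a QQueue<QPointF> should be accepted by replace() directly; replace() swaps the whole point list in one call, avoiding the per-point cost of append(). A minimal sketch under those assumptions (the function name and maxPoints are mine):

#include <QtCharts/QXYSeries>
#include <QQueue>
#include <QPointF>

// Keep a rolling window of points in a queue and push it into the series.
void refreshSeries(QXYSeries *xySeries, QQueue<QPointF> &points,
                   const QPointF &newest, int maxPoints)
{
    points.prepend(newest);            // newest value at the front
    while (points.size() > maxPoints)
        points.removeLast();           // drop the oldest from the end
    xySeries->replace(points);         // QQueue is-a QList, so this compiles
}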

For memory, what should be done when you need to constantly grow a vector to an unknown upper limit?

Suppose that you are dealing with a potentially infinite amount of data. Suppose further that you do not have this data stored in memory, but can generate individual terms at will. Finally, suppose that you want to do some experiment on this data that will involve checking a large but unknown number of terms in a way that necessitates keeping a great many of them in memory. Toy problems with Recamán's sequence, like "find the minimum number of terms needed in that sequence for the first 25 even numbers to have appeared", are what I have in mind as typical examples.
The obvious solution to this sort of problem would be to write some code like:
list <- c(firstTerm)
while (!foundEnoughTerms(list)) {
  nextTerm <- nextCandidate()
  if (termWorked(nextTerm)) { list <- c(list, nextTerm) }
}
However, building a big vector like this by adding one new term at a time is your memory's worst nightmare. The alternative that I often see suggested is to pre-allocate a big vector in memory by making the first line of your code something like list<-numeric(10^6), but those solutions suppose that we have some rough idea of how many terms we need to check, which isn't always the case. So what can we do when we are dealing with an ever-growing list of unknown required length?
This is a very popular subject in R; check this answer: https://stackoverflow.com/a/45195098/5442527
Summing up:
Do not use c() to append; assigning by index with [ is much faster. It might seem surprising, but you can grow a pre-allocated vector: make an iter variable before the while loop and increment it inside the if statement, enlarging the vector whenever the index outgrows it (see the sketch below).
In Python, by comparison, you normally do not have to care about this when using append: even starting from an empty list is not a problem, because the reserved memory grows roughly exponentially (by factors like 2, 2, 1.5, 1.2, ...) once you pass certain threshold sizes (the list's over-allocation strategy).
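A minimal sketch of that pattern in R (the stopping rule, generator, and acceptance test below are stand-ins of mine):

capacity <- 1024                       # initial guess; grows as needed
list <- numeric(capacity)
iter <- 0
while (iter < 10^5) {                  # stand-in stopping rule
  nextTerm <- runif(1)                 # stand-in generator
  if (nextTerm > 0.5) {                # stand-in acceptance test
    iter <- iter + 1
    if (iter > capacity) {             # buffer full: double it in one shot
      list <- c(list, numeric(capacity))
      capacity <- 2 * capacity
    }
    list[iter] <- nextTerm             # assign by index, not c(list, ...)
  }
}
list <- list[seq_len(iter)]            # trim the unused tail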

Constraints on BsplinesComp

I am using BsplinesComp for a sample problem.
The objective is to maximize the area under the line.
My problem arises when I want to set a constraint on one of the values in the output array that the bspline gives, i.e. a value that the spline must pass through no matter what configuration it is in.
I tried this in two ways and have uploaded the code. Both versions are very badly coded, so I think there is a neater way to do it. Links to the code:
https://gist.github.com/stackoverflow38/5eae1e86c5802a4df91becdf580d28c5
1- Using an extra explicit component in which the middle array value is imposed to be a selected value
2- Using an ExecComp, but I get an error: "Target shapes do not match."
I vaguely remember reading such a question but could not find it.
Overall, I am trying to constrain either the first, middle, or last value of the bspline to some range that it should lie in.
Similar to the plots here
So, I think you want to know the best way to do this, and the best way is to not use any extra components at all. You can directly constrain a single point in the output of the BsplinesComp by using the "indices" argument in the add_constraint call. Here, I constrain the first point in the spline to lie on the interval [-1, 1].
model.add_constraint('interp.h', lower=-1, upper=1, indices=[0])
Running the model gives me a shape that looks more like one of the ones you included.
Just for reference, for the errors you got with 1 and 2:
Not sure what is wrong here, but maybe the version you uploaded isn't the latest. You never used the AeraComp in a constraint, so it didn't do anything.
The exception was due to a size mismatch in connecting the vector output of the Bsplines comp to a scalar expression. You can do this by specifying "src_indices", giving it a list of which indices in the array to connect to the target: model.connect('interp.h', 'execcomp.x', src_indices=[0])
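Putting both pieces together, here is a minimal sketch against the OpenMDAO 2.x-era API (the subsystem names interp and execcomp follow the question and its gist; the sizes and the BsplinesComp option names are assumptions of mine that may differ between versions):

import numpy as np
import openmdao.api as om

prob = om.Problem()
model = prob.model

# Five control points driving a twenty-point spline (sizes are illustrative).
model.add_subsystem('px', om.IndepVarComp('h_cp', val=np.zeros(5)))
model.add_subsystem('interp', om.BsplinesComp(num_control_points=5,
                                              num_points=20,
                                              in_name='h_cp', out_name='h'))
model.connect('px.h_cp', 'interp.h_cp')

model.add_design_var('px.h_cp', lower=-10.0, upper=10.0)
# Constrain only the first point of the spline to lie on [-1, 1]:
model.add_constraint('interp.h', lower=-1, upper=1, indices=[0])

# Feed a single spline point into a scalar ExecComp via src_indices:
model.add_subsystem('execcomp', om.ExecComp('y = 2.0*x'))
model.connect('interp.h', 'execcomp.x', src_indices=[0])

prob.setup()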

Seemingly arbitrary slowdown with lapply

I'm building a process that uses the decodeLine function here to translate Encoded Polylines to Lat/Lon pairs. decodeLine works flawlessly on individual records, and I can use lapply to get it to chunk through my list (~150) of unique Polylines:
links.decoded <- lapply(as.list(links.encoded$EncodedPolyLine), FUN=decodeLine)
Here's where things get interesting: the lapply call is hanging on specific Polyline records. An individual decodeLine(X) call is almost instantaneous; calling it on most subsets from links.encoded is likewise very fast, but only until the subset contains one of a handful of records. Focusing in on one problematic Polyline value, the issue doesn't look to be related to the actual content of the value: passing it alone as the argument for decodeLine is as fast as any other valid call. So, the issue seems to be related to my use of lapply in this context.
Can anyone share insight? I'm sure I can hack a way around the issue, but I'm curious as to what's going on. Thanks!
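One way to narrow this down (a sketch; links.encoded and decodeLine come from the question) is to time each decode on its own, so any genuinely slow records identify themselves:

# Time every polyline individually and flag the slow ones.
timings <- vapply(as.list(links.encoded$EncodedPolyLine), function(p) {
  system.time(decodeLine(p))[["elapsed"]]
}, numeric(1))
which(timings > 1)    # indices of polylines taking more than a second

If every call is fast here but the plain lapply still hangs, profiling the full call with Rprof() would be the next step.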

Fortran90 created allocatable arrays but elements incorrect

Trying to create an array from an xyz data file. The data file is arranged so that the x, y, z of each atom are on a new line, and I want the array to reflect this.
Then I want to use this array to find the distance from each atom in the list to all the others.
To do this the array has been copied, such that atom1 & atom2 should be identical to the input file.
length is simply the number of atoms in the list.
The write statement WRITE(20,'(3F12.9)') atom1 actually gives the matrix wanted, but when I try to find individual elements they're all wrong!
Any help would be really appreciated!
Thanks guys.
DOUBLE PRECISION, DIMENSION(:,:), ALLOCATABLE :: atom1,atom2
ALLOCATE(atom1(length,3),atom2(length,3))
READ(10,*) ((atom1(i,j), i=1,length), j=1,3)
atom2=atom1
distn=0
distc=0
DO n=1,length
x1=atom1(n,1)
y1=atom1(n,2) !1st atom
z1=atom1(n,3)
DO m=1,length
x2=atom2(m,1)
y2=atom2(m,2) !2nd atom
z2=atom2(m,3)
Your READ statement reads all the x coordinates for all atoms from however many records, then all the y coordinates, then all the z coordinates. That's inconsistent with your description of the input file: you have the nesting of the io-implied-dos in the READ statement the wrong way around. It should be ((atom1(i,j),j=1,3),i=1,length).
Similarly, as per the comment, your diagnostic write misled you: you were outputting all x coordinates, followed by all y coordinates, etc. Array element order of a whole array reference varies the first (leftmost) dimension fastest (colloquially known as column-major order).
(There are various pitfalls associated with list-directed formatting that mean I wouldn't recommend it for production code, except perhaps for input written specifically with knowledge of, and defence against, those pitfalls. One of those pitfalls is that a READ under list-directed formatting will pull in as many records as it requires to satisfy the input list. You might have detected the problem earlier if you had used an explicit format that nominated the number of fields per record.)
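For reference, a minimal corrected sketch (the file name and the hard-coded length are stand-ins of mine; the read nesting and the distance loop follow the question):

PROGRAM distances
  IMPLICIT NONE
  INTEGER :: i, j, n, m, length
  DOUBLE PRECISION, DIMENSION(:,:), ALLOCATABLE :: atom1
  DOUBLE PRECISION :: dist

  length = 3                         ! stand-in; obtain it from the file in practice
  ALLOCATE(atom1(length,3))

  OPEN(10, FILE='atoms.xyz', STATUS='OLD')
  READ(10,*) ((atom1(i,j), j=1,3), i=1,length)  ! inner loop over x,y,z per atom
  CLOSE(10)

  ! (The atom2 copy in the question is unnecessary; atom1 can be indexed twice.)
  DO n = 1, length
     DO m = n+1, length              ! each pair once
        dist = SQRT(SUM((atom1(n,:) - atom1(m,:))**2))
        WRITE(*,'(2I4,F12.6)') n, m, dist
     END DO
  END DO
END PROGRAM distances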
