What does this line do in JDK 1.8?
import java.util.Vector; // is this still used?
static Vector<Boolean> isprime = new Vector<>(1000001);
I declared it outside main, inside the class, and when I called its size() it returned 0. Shouldn't it be a vector of 1000001 elements, each initialised to true by default?
No.
First of all, Vector is a legacy class; you should normally use ArrayList instead.
Secondly, the argument you pass is not the initial size of the list but its capacity, which is the number of elements it can hold before it needs to resize its internal storage. The list itself starts out empty, which is why size() returns 0.
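For illustration, a minimal sketch of the difference between capacity and size, and one way to actually get 1000001 entries initialised to true (using Collections.nCopies; the class name here is made up):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CapacityVsSize {
    public static void main(String[] args) {
        // The constructor argument only pre-allocates internal storage; the list is still empty.
        List<Boolean> isprime = new ArrayList<>(1000001);
        System.out.println(isprime.size()); // prints 0

        // To actually get 1000001 elements, all set to true, fill the list explicitly.
        isprime.addAll(Collections.nCopies(1000001, Boolean.TRUE));
        System.out.println(isprime.size()); // prints 1000001
    }
}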
I'm currently using SLSQP, and defining my design variables like so:
p.model.add_design_var('indeps.upperWeights', lower=np.array([1E-3, 1E-3, 1E-3]))
p.model.add_design_var('indeps.lowerWeights', upper=np.array([-1E-3, -1E-3, -1E-3]))
p.model.add_constraint('cl', equals=1)
p.model.add_objective('cd')
p.driver = om.ScipyOptimizeDriver()
However, it insists on trying [1, 1, 1] for both variables. I can't override this with val=[...] in the component because of how the program is structured.
Is it possible to give the optimiser initial values, rather than having everything without a default value set to 1?
By default, OpenMDAO initializes variables to a value of 1.0 (this tends to avoid unintentional divide-by-zero errors that would occur if things were initialized to zero).
Specifying shape=... on an input or output results in the variable values being populated with 1.0.
Specifying val=... uses the given value as the default value.
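For example, a minimal sketch of the two options on an IndepVarComp (using the output names from the question):

import numpy as np
import openmdao.api as om

indeps = om.IndepVarComp()
indeps.add_output('upperWeights', shape=3)                              # default value becomes [1., 1., 1.]
indeps.add_output('lowerWeights', val=np.array([-1E-3, -1E-3, -1E-3]))  # default value taken from val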
But those are only the default values. Typically, when you run an optimization, you need to specify initial values of the variables for the problem at hand. This is done after setup, through the problem object.
The set_val and get_val methods on the problem object also let the user convert units (using Newtons here as an example):
p.set_val('indeps.upperWeights', np.array([1E-3, 1E-3, 1E-3]), units='N')
p.set_val('indeps.lowerWeights', np.array([-1E-3, -1E-3, -1E-3]), units='N')
There's a corresponding get_val method to retrieve values in the desired units after optimization.
You can also access the problem object as though it were a dictionary, although doing so removes the ability to specify units (you get the variable values in their native units).
p['indeps.upperWeights'] = np.array([1E-3, 1E-3, 1E-3])
p['indeps.lowerWeights'] = np.array([-1E-3, -1E-3, -1E-3])
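Putting it together, a minimal sketch of where the initial values fit relative to setup() and run_driver(), assuming the model and driver from the question:

import numpy as np
import openmdao.api as om

# ... build the model, design variables, constraint, objective and driver as in the question ...

p.setup()

# Initial values are set after setup, before running the driver.
p.set_val('indeps.upperWeights', np.array([1E-3, 1E-3, 1E-3]))
p.set_val('indeps.lowerWeights', np.array([-1E-3, -1E-3, -1E-3]))

p.run_driver()

# Retrieve results afterwards, optionally in specific units.
print(p.get_val('indeps.upperWeights'))
print(p.get_val('cd'))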
I'm working on a practice program for doing belief propagation stereo vision. The relevant aspect here is that I have a fairly long array representing every pixel in an image, and I want to carry out an operation on every second entry in the array at each iteration of a for loop: first one half of the entries, and then at the next iteration the other half (this comes from an optimisation described by Felzenszwalb & Huttenlocher in their 2006 paper 'Efficient belief propagation for early vision'). So you could see it as having an outer for loop which runs a number of times, and for each iteration of that loop I iterate over half of the entries in the array.
I would like to parallelise this iteration over the array, since I believe it would be thread-safe to do so, and of course potentially faster. The operation involved updates values inside the data structures representing the neighbouring pixels, which are not themselves used in a given iteration of the outer loop. Originally I just iterated over the entire array in one go, which meant that it was fairly trivial to parallelise: all I needed to do was put .Parallel between Array and .iteri. Changing to operating on every second array entry is trickier, however.
To make the change from simply iterating over every entry, I went from Array.iteri (fun i p -> ... to using for i in startIndex..2..(ArrayLength - 1) do, where startIndex is either 1 or 0 depending on which one I used last (controlled by toggling a boolean). This means, though, that I can't simply use the really nice .Parallel to make things run in parallel.
I haven't been able to find anything specific about how to implement a parallel for loop in .NET which has a step size greater than 1. The best I could find was a paragraph in an old MSDN document on parallel programming in .NET, but that paragraph only makes a vague statement about transforming an index inside a loop body. I do not understand what is meant there.
I looked at Parallel.For and Parallel.ForEach, as well as creating a custom partitioner, but none of those seemed to include options for changing the step size.
The other option that occurred to me was to use a sequence expression such as
let getOddOrEvenArrayEntries myarray oddOrEven =
    seq {
        let startingIndex = if oddOrEven then 1 else 0
        for i in startingIndex .. 2 .. (Array.length myarray - 1) do
            yield (i, myarray.[i])
    }
and then using PSeq.iteri from ParallelSeq, but I'm not sure whether it will work correctly with .NET Core 2.2. (Note that, currently at least, I need to know the index of the given element in the array, as it is used as the index into another array during the processing).
How can I go about iterating over every second element of an array in parallel? I.e. iterating over an array using a step size greater than 1?
You could try PSeq.mapi, which provides not only the sequence item as a parameter but also its index.
Here's a small example:
let res =
    nums
    |> PSeq.mapi (fun index item -> if index % 2 = 0 then item else item + 1)
You can also have a look at this sampling snippet. Just be sure to substitute Seq with PSeq.
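For the original side-effecting use case, here is a minimal sketch of one way to iterate over every second element in parallel while keeping each index, using PSeq.iter from FSharp.Collections.ParallelSeq (the array type and the printfn body are placeholders for the real per-pixel work):

open FSharp.Collections.ParallelSeq

// Process every second element of 'data' in parallel, starting at index 0 or 1
// depending on 'oddOrEven', and keeping each element's index.
let processEverySecond (data : float[]) (oddOrEven : bool) =
    let startingIndex = if oddOrEven then 1 else 0
    seq { for i in startingIndex .. 2 .. (data.Length - 1) -> (i, data.[i]) }
    |> PSeq.iter (fun (i, value) ->
        // stand-in for the real update of the neighbouring pixels' data structures
        printfn "index %d has value %f" i value)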
What is the maximum number of elements that can be stored in a map in Go? If I need to access data from the map frequently, is it a good idea to keep adding items to the map and retrieving them from it in a long-running program?
There is no theoretical limit to the number of elements in a map except the maximum value of the map-length type, which is int. The max value of int depends on the target architecture you compile for: it is 1 << 31 - 1 = 2147483647 on 32-bit, and 1 << 63 - 1 = 9223372036854775807 on 64-bit.
Note that as an implementation restriction you may not be able to add exactly max-int elements, but the order of magnitude will be the same.
Since the builtin map type uses a hashmap implementation, access time complexity is usually O(1), so it is perfectly fine to add many elements to a map; you can still access elements very fast. Note, however, that adding many elements will cause rehashing and rebuilding of the internals, which requires some additional work; this may happen occasionally as new keys are added to the map.
If you can "guess" or estimate the size of your map, you can create your map with a big capacity to avoid rehashing. E.g. you can create a map with space for a million elements like this:
m := make(map[string]int, 1e6)
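For illustration, a minimal runnable sketch of preallocating and filling such a map (the key format and sizes are just examples):

package main

import (
    "fmt"
    "strconv"
)

func main() {
    // Preallocate space for roughly a million entries so the map does not
    // have to grow and rehash repeatedly while it is being filled.
    m := make(map[string]int, 1e6)

    for i := 0; i < 1000000; i++ {
        m["key"+strconv.Itoa(i)] = i
    }

    // Lookups stay O(1) on average no matter how many elements were added.
    fmt.Println(len(m), m["key123456"])
}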
"A maximum number"? Practically no.
"A good idea"? Measure, there cannot be a general answer.
I am developing software that maps information in 3D space. I use a container to hold this information. The container I use is
QList< QList< QMap<float, QList<quint16> > > > lidardata;
which is basically a 2D grid representing a rectangular area where each cell is 1 meter x 1 meter, and in each cell the QMap contains a key representing height and a list of four related values at that height. This way I can store five values (height + the other values). I insert values in a loop like this (rown and coln are the row and column indexes respectively):
QList<quint16> NEWLIST;
NEWLIST << (width*100.0) << (1000.0+sens*100.0) << (quint16)(intensity*1000.0);
lidardata[ rown ][ coln ].insert( heightValue, NEWLIST);
Before this approach, instead of using QMap<float, QList<quint16> > I used a plain QList<quint16> and just appended the 5 values.
Now the problem: my program runs out of memory quite fast. It took about 800 MB of memory to complete with the first solution (QList instead of QMap); now it runs out (at about 1.4 GB) at 75% of the total data-storing process.
Can someone confirm that storing the same amount of information using QMap<float, QList<quint16> > instead of QList<quint16> does require a lot more memory?
Does anyone have any hints for limiting memory use? I will go back to the old solution if nothing comes up.
As mentioned in a comment, your code may suffer from Primitive Obsession.
Try to solve your problem using the ValueObject fix described in this tutorial: create a class with all the needed attributes, and work on instances of this class instead of maintaining nested QLists and QMaps.
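For illustration, a minimal sketch of that idea (all type and member names here are made up, and the flat row-by-row layout is just one possibility):

#include <QVector>

// One value object per measurement instead of nested QLists/QMaps.
struct LidarSample {
    float   height;
    quint16 width;
    quint16 sensitivity;
    quint16 intensity;
};

// One flat vector of samples per grid cell, cells stored row by row.
class LidarGrid {
public:
    LidarGrid(int rows, int columns)
        : m_columns(columns), m_cells(rows * columns) {}

    void addSample(int row, int column, const LidarSample &sample) {
        m_cells[row * m_columns + column].append(sample);
    }

    const QVector<LidarSample> &cell(int row, int column) const {
        return m_cells[row * m_columns + column];
    }

private:
    int m_columns;
    QVector<QVector<LidarSample>> m_cells;
};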
I was assigned to a project written by someone else. The parameters are passed by value (i.e. the structures are copied onto the stack when a method is called) and I would like to convert them to pointers. That runs significantly faster because only 32-bit or 64-bit pointers are passed to the subroutines. I have almost 600 methods to convert.
An example method is defined as:
bool insideWindow(tsPoint Point, tsWindow Window)
When I change the type tsWindow into psWindow (defined as *tsWindow), I need to change all dots (.) to arrows (->) to make the member accesses pointer operations.
Is there any easy way to do this in QtCreator? To put it another way, I want to change the type to a pointer type and have QtCreator change the dots into -> automatically.
Well, it is easily solved by passing the variables as references. All I need to do is modify the function prototype (both in the .h and the .cpp files).
bool insideWindow(tsPoint &Point, tsWindow &Window)
This way it still uses a dot (meaning I don't have to change the code by replacing dots with -> operators), and the arguments are effectively passed as pointers.
http://www.cprogramming.com/tutorial/references.html
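For illustration, a minimal sketch of the change (the struct members and the function body are made up; const is added because the function only reads its arguments):

// Hypothetical types, for illustration only.
struct tsPoint  { double x; double y; };
struct tsWindow { double left; double top; double right; double bottom; };

// Before: both structures are copied onto the stack on every call.
bool insideWindowByValue(tsPoint Point, tsWindow Window)
{
    return Point.x >= Window.left && Point.x <= Window.right &&
           Point.y >= Window.top  && Point.y <= Window.bottom;
}

// After: only references are passed; the body and every call site keep using
// the dot operator, so no '.' to '->' rewriting is needed.
bool insideWindow(const tsPoint &Point, const tsWindow &Window)
{
    return Point.x >= Window.left && Point.x <= Window.right &&
           Point.y >= Window.top  && Point.y <= Window.bottom;
}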