tbb - how to get the address of a concurrent_vector's memory?

I'm trying to get a reference to the memory of a concurrent_vector in TBB
(Threaded Building Blocks) in a way similar to std::vector.
So an std::vector would be accessed like: &stdVector[0].
But the equivalent for a concurrent_vector doesn't work: &tbbVector[0].
I guess this may have something to do with how the memory is stored internally
in order to be concurrent, but is there a way to do this?

Unlike std::vector, concurrent_vector does not guarantee contiguous storage; it keeps elements in a series of segments so that it can grow without relocating existing elements under concurrent access. So taking the address of the first element and doing anything other than accessing the first element is not a good idea.
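Since the storage is segmented rather than contiguous, the usual workaround when you need a single flat buffer is to copy the elements into a std::vector first. A minimal sketch (the consume function is a made-up stand-in for any API that expects contiguous memory):

#include <cstddef>
#include <numeric>
#include <vector>
#include <tbb/concurrent_vector.h>

// Hypothetical legacy API that expects one contiguous buffer.
long consume(const int* data, std::size_t n) {
    return std::accumulate(data, data + n, 0L);
}

int main() {
    tbb::concurrent_vector<int> cv;
    for (int i = 0; i < 100; ++i) cv.push_back(i);

    // concurrent_vector keeps elements in segments, so &cv[0] does not
    // give you a pointer through which all elements are reachable.
    // Copy into a std::vector when a contiguous buffer is required.
    std::vector<int> flat(cv.begin(), cv.end());
    return consume(flat.data(), flat.size()) == 4950 ? 0 : 1;
}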

Related

Is Ada.Containers.Functional_Maps usable in Ada2012?

The information about Ada.Containers.Functional_Maps in the GNAT documentation is quite—let's say—abstruse.
First, it says this:
…these containers can still be used safely.
In the second paragraph, it seems to me that you cannot free the memory allocated for those objects once the program exits the context where they are created. My understanding is that you could run into a memory leak. Am I right?
They are also memory consuming, as the allocated memory is not reclaimed when the container is no longer referenced.
Read the next two sentences in the doc:
Thus, they should in general be used in ghost code and annotations, so that they can be removed from the final executable. The specification of this unit is compatible with SPARK 2014.
Because the specification of Ada.Containers.Functional_Maps is compatible with SPARK, it may help to examine it in the context of related SPARK Libraries with regard to proof, testing and annotation. In particular,
The functional maps, sets and vectors are unbounded collections of indefinite elements that are neither controlled nor limited. While they are inefficient with regard to memory, they are simple, immutable and useful "to model user defined data structures."
The functional containers can be used in Ghost Code, "parts of the code that are only meant for specification and verification", as suggested here. This related example illustrates a ghost function.
it seems to me that you cannot free the memory allocated for those objects once the program exits the context where they are created. My understanding is that you could run into a memory leak. Am I right?
There are some things you can do in Ada to manage memory; I would be surprised if (for example) the usage of an instance inside a declare-block were not cleaned up on the block's exit. This is, in fact, how some surprisingly robust applications can get away without "dynamically-allocated" memory/values (it's actually heap-allocated, but that's pedantic).
This sort of granular control is really nice, as you can constrain things/usages to specific points. Combined with Ada's good facilities for presenting interfaces, this means that changing one structure to another can be less painful than it otherwise might be.
As an example of the above, I had a nested key-value map (a JSON object) that was being used to pass parameters around; the method for doing this changed, and so I had a string of values (with common-rooted keys) coming in and a procedure that took JSON as input. Obviously what was needed was a "keys-and-values-to-JSON" function. Inside the function I used the multiway-tree container, where the leaves represented values and the internal nodes the keys; the second step was to traverse the tree and create the JSON object as needed. Simple recursion and data-structure selection addressed the problem of adapting the textual key-value pairs of these nested parameters to JSON. And because the usage of multiway trees was exclusive to this function, I can be confident that the memory used by the intermediate tree object is released on the function's exit.

Why should I use a pointer? (performance)

I'm wondering if there is any performance benchmark comparing raw objects with pointers to objects.
I'm aware that it doesn't make sense to use pointers on reference types (e.g. maps) so please don't mention it.
I'm aware that you "must" use pointers if the data needs to be updated so please don't mention it.
Most of the answers/docs that I've found basically rephrase the guidelines from the official documentation:
... If the receiver is large, a big struct for instance, it will be much cheaper to use a pointer receiver.
My question is simply: what do "large" and "big" mean? Is a pointer to a string overkill? What about a struct with two strings, or a struct with three string fields?
I think we deal with this use case quite often, so it's a fair question to ask. Some advise not to worry about the performance issue, but maybe some people want to use the right notation whenever they have the chance, even if the performance gain is not significant. After all, a pointer is not that expensive (i.e. one additional keystroke).
An example where it doesn't make sense to use a pointer is for reference types (slices, maps, and channels).
As mentioned in this thread:
The concept of a reference just means something that serves the purpose of referring you to something. It's not magical.
A pointer is a simple reference that tells you where to look.
A slice tells you where to start looking and how far.
Maps and channels also just tell you where to look, but the data they reference and the operations they support on it are more complex.
The point is that all the actual data is stored indirectly and all you're holding is information on how to access it.
As a result, in many cases you don't need to add another layer of indirection, unless you want a double indirection for some reason.
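If it helps to see "where to start looking and how far" concretely, C++20's std::span has essentially the same shape as a slice header: a pointer plus a length (using std::span here is my analogy, not something from the thread):

#include <iostream>
#include <span>
#include <vector>

int main() {
    std::vector<int> data{1, 2, 3, 4, 5};

    // A span is pure indirection: a pointer plus a length.
    // Copying the span copies two words, never the elements.
    std::span<int> view(data.data() + 1, 3); // where to start, how far
    for (int x : view) std::cout << x << ' '; // prints: 2 3 4
}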
As twotwotwo details in "Pointers vs. values in parameters and return values", strings, interface values, and function values are also implemented with pointers.
As a consequence, you would rarely need to use a pointer to those objects.
To quote the official Go documentation:
...the consideration of efficiency. If the receiver is large, a big struct for instance, it will be much cheaper to use a pointer receiver.
It's very hard to give you exact conditions, since there can be different performance goals. As a rule of thumb, by default, all objects larger than 128 bits should be passed by pointer. Possible exceptions to the rule:
you are writing a latency-sensitive server, so you want to minimise garbage-collection pressure. To achieve that, your Request struct has an [8]byte field instead of a pointer to a Data struct which holds the [8]byte: one allocation instead of two (see the sketch after this list)
the algorithm you are writing is more readable when you pass the struct and make a copy
etc.
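The one-allocation-instead-of-two point is not specific to Go; here is a minimal C++ sketch of the same layout trade-off (the struct names are invented for illustration):

#include <array>
#include <cstdint>
#include <memory>

struct DataBlock {
    std::array<std::uint8_t, 8> bytes;
};

// Payload embedded in the struct: the whole object is one allocation.
struct RequestEmbedded {
    std::array<std::uint8_t, 8> data;
};

// Payload behind a pointer: the object needs a second allocation.
struct RequestIndirect {
    std::unique_ptr<DataBlock> data;
};

int main() {
    auto a = std::make_unique<RequestEmbedded>(); // one allocation
    auto b = std::make_unique<RequestIndirect>(); // allocation #1
    b->data = std::make_unique<DataBlock>();      // allocation #2
    (void)a;
}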

What can pointers do that would otherwise be impossible to implement?

So I'm having a hard time grasping the idea behind pointers and all that memory allocation.
I'm thinking, with computers as powerful as they are nowadays, why do we have to use pointers at all?
Isn't there always a workaround to do things without the help of pointers?
Pointers are an indirection: instead of working with the data itself, you are working with something that points to the data. Depending on the semantics of the language, this allows many things: cheaply switching to another instance of data (by setting the pointer to point to another instance), passing pointers to allow access to the original data without having to make a possibly expensive copy, and so on.
Memory allocation is related to pointers, but separate: you can have pointers without allocating memory. The reason you need pointers for memory allocation is that the actual address at which the allocated block of memory resides is not known at compile time, so you can only access it via a level of indirection (i.e. pointers); the compiler statically allocates space for the pointer that will point to the dynamically allocated memory.
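A minimal C++ illustration of both points, retargeting a pointer and reaching heap memory whose address only exists at run time:

#include <iostream>

int main() {
    int a = 1, b = 2;

    int* p = &a; // p refers to a without copying it
    *p = 10;     // modifies a through the pointer
    p = &b;      // cheaply switch to another instance

    // The address of heap memory is only known at run time; the
    // statically allocated pointer is the handle that reaches it.
    int* heap = new int(42);
    std::cout << a << ' ' << *p << ' ' << *heap << '\n'; // prints: 10 2 42
    delete heap;
}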
Pointers are incredibly powerful. Just because computers are faster nowadays doesn't mean there's any reason to abandon something as essential as pointers. Passing around giant chunks of memory on the stack is inefficient at best, catastrophic at worst. With pointers, you only need to maintain a reference to where the data resides, rather than duplicating huge chunks of memory each time you call a function.
Also, if you're copying all the data every time, how do you modify the original data? Aside from returning the copy of the structure from every call that touches it.
I remember reading somewhere that Dijkstra was assessing a student for a programming course; this student was quite intelligent but s/he wasn't able to solve the problem because there was sort of a mental block.
All the code was sort of ok, but what was needed was simply to use the expression
a[a[i+1]] = j;
and even while being so close to the solution, the goal still seemed to be miles away.
Languages "without pointers" already exist, e.g. BASIC (without explicit pointers, that is). But the idea of indirection, the idea that a piece of data can mean just where to find other data, is central to programming.
The very idea of an array is about being able to use computed values to find other values.
Trying to hide this idea is a horrible plan. According to Dijkstra, anyone who has been exposed to the BASIC language has already received such mental mutilation that it is impossible for them to recover as a good programmer (and probably the absence of explicit indirection was one of the problems).
I think he was exaggerating.
Just a bit.

Index, pointer, or iterator: which is better when it comes to vectors?

Hi, I'm writing a program that needs high-performance handling of vector elements:
vector<Class_A> object;
1. Which is fastest for accessing the elements?
2. Which makes for simpler, less complex code?
An index? An iterator? A pointer?
An iterator or pointer will have the same performance on most implementations -- usually a vector iterator is a pointer. An index needs to calculate a pointer each time, but the optimizer can sometimes take care of that. Generally though, as another commenter said, there's no sense in optimizing this for performance.
All of that said, I would probably go with an iterator since it's easier to change to another type of container if need be.
Assuming you have inlining enabled and aren't doing index range checking, they will probably all be about the same. Besides, micro-optimizing this probably isn't going to gain you anything; you need to optimize globally. Profile your process and target the slowest parts first.
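For concreteness, here is a small sketch of the three access styles side by side (Class_A's field is made up); with optimization enabled, mainstream compilers typically emit near-identical code for all three:

#include <cstddef>
#include <vector>

struct Class_A { int value; };

int sum_by_index(const std::vector<Class_A>& v) {
    int s = 0;
    for (std::size_t i = 0; i < v.size(); ++i) s += v[i].value;
    return s;
}

int sum_by_iterator(const std::vector<Class_A>& v) {
    int s = 0;
    for (auto it = v.begin(); it != v.end(); ++it) s += it->value;
    return s;
}

int sum_by_pointer(const std::vector<Class_A>& v) {
    int s = 0;
    const Class_A* p = v.data();
    for (const Class_A* end = p + v.size(); p != end; ++p) s += p->value;
    return s;
}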

ASP.NET: How expensive is it to read an Application variable many times?

The short of it is: is it costly to check an Application variable such as Application("WebAppName") 10-20 times or more each time a page loads?
Background: (feel free to critique)
Some includes in my site contain many links and images which cannot use relative URLs, because the includes are used from different paths.
Hence these includes contain frequent instances of
<img src="<%=Application("Webroot")%>images\image.gif">
Is it expensive to keep calling an Application variable like this?
Should I just put the Application value in some local variable to use where needed?
IMPORTANT NOTE:
I need my webapp to run fine on a server whether it be in the root web ("/") or in a virtual subweb ("/app").
Thanks in advance for any wisdom shared.
It's cheap - very, very cheap - just a dictionary lookup. Compared with almost anything else you'll do in the app (loading something from disk or the network) this will be statistical noise.
In general though, the best thing to do if you're worried about things like this is to measure it. Arbitrarily put 10,000 calls into a page, and see how that affects performance. See how it affects concurrency as well - can you still get the throughput you need when processing multiple concurrent requests?
Just for info, another option is:
<img src="<%=VirtualPathUtility.ToAbsolute("~/images/image.gif")%>"
This works well especially in MVC, where you might write an extension method to do the job, i.e.
<%=Html.Image("~/images/image.gif")%>
The Application object is a synchronized collection which uses ReadWriteObjectLock (an internal class that just uses the lock keyword). If you are only reading from the collection, it will be as fast as a hash-table lookup, as Jon mentioned; but if someone is writing to the collection at the same time, readers will block until the write completes. If you are that worried about performance, call the indexer once, store the result in a local variable, and use that variable in your views.
Use Request.ApplicationPath instead (only works if your app is set as a virtual directory in IIS)
Short answer: measure it in your own environment and decide. I would say it does not matter.
Longer answer: you should have the call wrapped in something anyway, like WebConfiguration.Root.
That will give you the option to apply whatever optimization to it at any time in the future.
