C++ prefix and postfix expression

#include <iostream>
using namespace std;

class Mybase
{
public:
    Mybase(int value) : my_base(value) { }
    Mybase& operator++()
    {
        Mybase tmp(*this);
        ++my_base;
        return tmp;
    }
    Mybase operator++(int)
    {
        ++my_base;
        return *this;
    }
private:
    int my_base;
};
Which implementation is correct: the prefix operator or the postfix one?

Either prefix or postfix can usually be used. Sometimes it makes a difference, though. Standard style is to prefer the prefix form, as ++i, when it does not make a difference. (This is hard to believe, since there exists a famous language called C++ but none called ++C, but there it is.)
The reason to prefer the prefix form has to do with stack mechanics. Consider a statement like
int n = 2 * (++i - 5);
Here, the variable i is incremented and then used as it stands, its old value immediately forgotten. Consider by contrast
int n = 2 * ((i++) - 5);
Here, the value of the variable i must be copied as a temporary to the stack, after which i is separately incremented. The processor touches i more in this case.
Admittedly, an optimizing compiler might be able to generate equally efficient code in both cases, but the ++i is inherently cleaner, because the old value of i becomes instantly irrelevant, as soon as i is updated, after which there is nothing to optimize. Since there is often no other cause to choose between the two forms, the ++i is usually preferred.

You got it the other way around.
The first operator definition has the signature of the prefix operator but implements the postfix behavior, and the second has the signature of the postfix operator but the body of the prefix one.
Just swap the bodies of the two methods and they should be fine.

Related

QThreadStorage vs C++11 thread_local

As the title suggests: what are the differences? I know that QThreadStorage existed long before the thread_local keyword, which is probably why it exists in the first place, but the question remains: is there any significant difference in what they do (apart from the extended API of QThreadStorage)?
I am assuming the normal use case of a static variable, as using QThreadStorage as non-static is possible but not recommended.
static thread_local int a;
// vs
static QThreadStorage<int> b;
Well, since Qt is open-source you can basically figure out the answers from the Qt sources. I am currently looking at 5.9, and here are some things that I'd classify as significant:
1) Looking at qthreadstorage.cpp, the get/set methods both have this block in the beginning:
QThreadData *data = QThreadData::current();
if (!data) {
    qWarning("QThreadStorage::set: QThreadStorage can only be used with threads started with QThread");
    return 0;
}
So (and maybe this has changed or will change) you can't use QThreadStorage with anything other than QThread, while the thread_local keyword has no similar restriction.
2) Quoting cppreference on thread_local:
thread storage duration. The object is allocated when the thread begins and deallocated when the thread ends.
Looking at qthreadstorage.cpp, the QThreadStorage class actually does not contain any storage for T, the actual storage is allocated upon the first get call:
QVector<void *> &tls = data->tls;
if (tls.size() <= id)
    tls.resize(id + 1);
void **v = &tls[id];
So the lifetime is quite different between the two. With QThreadStorage you actually have more control, since you create the object and set it with the setter, or default-initialize it with the getter. For example, if the constructor of the type could throw, with QThreadStorage you could catch that; with thread_local I am not even sure you can.
I assume these two are significant enough not to go any deeper.

Do you make safe and unsafe version of your functions or just stick to the safe version? (Embedded System)

Let's say you have a function that sets an index and then updates a few variables based on the value stored in the array element the index points to. Do you check the index to make sure it is in range? (In an embedded-system environment, to be specific, Arduino.)
So far I have made a safe and an unsafe version of every function; is that a good idea? In some of my other code I noticed that having only safe functions results in checking the same conditions multiple times as the libraries get larger, so I started to develop both. The safe function checks the condition and calls the unsafe function, as shown in the example below for the case explained above.
Safe version:
bool RcChannelModule::setFactorIndexAndUpdateBoundaries(factorIndex_T factorIndex)
{
    if(factorIndex < N_FACTORS)
    {
        setFactorIndexAndUpdateBoundariesUnsafe(factorIndex);
        return true;
    }
    return false;
}
Unsafe version:
void RcChannelModule::setFactorIndexAndUpdateBoundariesUnsafe(factorIndex_T factorIndex)
{
    setCuurentFactorIndexUnsafe(factorIndex);
    updateOutputBoundaries();
}
If I am doing this fundamentally wrong, please let me know why and how I could avoid it. I would also like to know: generally, when you program, do you consider the future user a fool, or do you expect them to follow the minimal documentation provided? (I say minimal because I do not have the time to write proper documentation.)
void RcChannelModule::setCuurentFactorIndexUnsafe(const factorIndex_T factorIndex)
{
    currentFactorIndex_ = factorIndex;
}
Safety checks, such as array index range checks, null checks, and so on, are intended to catch programming errors. When these checks fail, there is no graceful recovery: the best the program can do is to log what happened, and restart.
Therefore, the only time these checks are useful is during debugging and testing of your code. C++ provides built-in functionality for this through asserts, which are kept in debug builds but compiled out of release builds (when NDEBUG is defined):
#include <cassert>

void RcChannelModule::setFactorIndexAndUpdateBoundariesUnsafe(factorIndex_T factorIndex) {
    assert(factorIndex < N_FACTORS);
    setCuurentFactorIndexUnsafe(factorIndex);
    updateOutputBoundaries();
}
Note: [When you make a library for external use] an argument-checking version of each external function perhaps makes sense, with non-argument-checking implementations of those and all internal-only functions. If you perform argument checking then do it (only) at the boundary between your library and the client code. But it's pointless to offer a choice to your users, for if you want to protect them from usage errors then you cannot rely on them to choose the "safe" versions of your functions. (John Bollinger)
Do you make safe and unsafe version of your functions or just stick to the safe version?
For higher level code, I recommend one version, a safe one.
In high-level code, with a large set of related functions and data, the combinations of interactions between data and code cannot be fully checked at development time. When an error is detected, the data should be set to indicate an error state, and subsequent use of the data within these functions would be aware of that error state.
For the lowest-level, time-critical routines, I'd go with @dasblinkenlight's answer: create one source file that compiles two ways, per the debug and release builds.
Yet keep in mind @Pete Becker's point: is a check here really likely to be a performance bottleneck?
With floating-point routines, use NaN to help keep track of an unrecoverable error.
Lastly, where possible, create functions that cannot fail, avoiding the issue entirely. With many (though not all) functions this requires only small code additions, and it often adds a constant-time performance penalty rather than an O(n) one.
Example: Consider a function to lop off the first character of a string - in place.
// This works fine as long as s[0] != 0
char *slop_1(char *s) {
    size_t len = strlen(s);        // most work is here
    return memmove(s, s + 1, len); // and here
}
Instead, define the function, and code it, to do nothing when s[0] == 0:
char *slop_2(char *s) {
    size_t len = strlen(s);
    if (len > 0) { // negligible additional work
        memmove(s, s + 1, len);
    }
    return s;
}
Similar code can be applied to OP's example. Note that it is "safe", at least within the function. The assert() scheme can still be used to discover development issues, yet the released code, without the assert(), still checks the range.
void RcChannelModule::setFactorIndexAndUpdateBoundaries(factorIndex_T factorIndex)
{
    if(factorIndex < N_FACTORS) {
        setFactorIndexAndUpdateBoundariesUnsafe(factorIndex);
    } else {
        assert(0); // flag the out-of-range index in debug builds
    }
}
Since you tagged this Arduino and embedded, you have a very resource-constrained system, one of the crappiest processors still manufactured.
On such a system you cannot afford extra error handling. It is better to properly document what values the parameters passed to the function must have, then leave the checking of this to the caller.
The caller can then either check this in run-time, if needed, or otherwise in compile-time with a static assert. Your function would however not be able to implement it as a static assert, as it can't know if factorIndex is a run-time variable or a compile-time constant.
As for "I have no time to write proper documentation", that's nonsense. It takes far less time to document this function than to post this SO question. You don't necessarily have to write an essay in some Word file. You don't necessarily have to use Doxygen or similar.
But you do need to write the bare minimum of documentation: In the header file, document the purpose and expected values of all function parameters in the form of comments. Preferably you should have a coding standard for how to document such functions. A minimal documentation of public API functions in the form of comments is part of your job as programmer. The code is not complete until this is written.

What's the point of unique_ptr?

Isn't a unique_ptr essentially the same as a direct instance of the object? I mean, there are a few differences with dynamic inheritance, and performance, but is that all unique_ptr does?
Consider this code to see what I mean. Isn't this:
#include <iostream>
#include <memory>
using namespace std;

void print(int a) {
    cout << a << "\n";
}

int main()
{
    unique_ptr<int> a(new int(0)); // value-initialized, to avoid reading an indeterminate int
    print(*a);
    return 0;
}
Almost exactly the same as this:
#include <iostream>
#include <memory>
using namespace std;

void print(int a) {
    cout << a << "\n";
}

int main()
{
    int a = 0; // initialized, as above
    print(a);
    return 0;
}
Or am I misunderstanding what unique_ptr should be used for?
In addition to the cases mentioned by Chris Pitman, one more case where you will want std::unique_ptr is when you instantiate sufficiently large objects: it then makes sense to allocate them on the heap rather than on the stack. The stack size is not unlimited, and sooner or later you might run into a stack overflow. That is where std::unique_ptr is useful.
The purpose of std::unique_ptr is to provide automatic and exception-safe deallocation of dynamically allocated memory (unlike a raw pointer that must be explicitly deleted in order to be freed and that is easy to inadvertently not get freed in the case of interleaved exceptions).
Your question, though, is more about the value of pointers in general than about std::unique_ptr specifically. For simple builtin types like int, there generally is very little reason to use a pointer rather than simply passing or storing the object by value. However, there are three cases where pointers are necessary or useful:
Representing a separate "not set" or "invalid" value.
Allowing modification.
Allowing for different polymorphic runtime types.
Invalid or not set
A pointer supports an additional nullptr value indicating that the pointer has not been set. For example, suppose you want to support every value of a given type (say, the entire range of integers) but also represent the notion that the user never input a value at all. That is a case for std::unique_ptr<int>: a null pointer means "not set", without having to throw away a valid integer value just to use it as a sentinel.
Allowing modification
This can also be accomplished with references rather than pointers, but pointers are one way of doing it. If you use a regular value, you are dealing with a copy of the original, and any modifications affect only that copy. If you use a pointer or a reference, you can make your modifications visible to the owner of the original instance. With a unique pointer, you can additionally be assured that no one else has a copy, so it is safe to modify without locking.
Polymorphic types
This can likewise be done with references rather than pointers, but there are cases where, due to the semantics of ownership or allocation, you would want a pointer. With user-defined types you can create a hierarchical inheritance relationship, and if you want your code to operate on all variations of a given type, you need a pointer or reference to the base type. A common reason to use std::unique_ptr<> here is when the object is constructed through a factory and the class you are defining keeps ownership of the constructed object. For example:
class Airline {
public:
    Airline(const AirplaneFactory& factory);
    // ...
private:
    // ...
    void AddAirplaneToInventory();
    // Can create many different types of airplanes, such as
    // a Boeing747 or an Airbus320
    const AirplaneFactory& airplane_factory_;
    std::vector<std::unique_ptr<Airplane>> airplanes_;
};

// ...
void Airline::AddAirplaneToInventory() {
    airplanes_.push_back(airplane_factory_.Create());
}
As you mentioned, virtual classes are one use case. Beyond that, here are two others:
Optional instances of objects. My class may delay instantiating an instance of the object. To do so, I need to use memory allocation but still want the benefits of RAII.
Integrating with C libraries or other libraries that love returning naked pointers. For example, OpenSSL returns pointers from many (poorly documented) methods, some of which you need to cleanup. Having a non-copyable pointer container is perfect for this case, since I can protect it as soon as it is returned.
A unique_ptr functions the same as a normal pointer except that you do not have to remember to free it (in fact, it is simply a wrapper around a raw pointer). After you allocate the memory, you do not have to call delete afterwards, since the destructor of unique_ptr takes care of that for you.
Two things come to my mind:
You can use it as a generic exception-safe RAII wrapper. Any resource that has a "close" function can be wrapped with unique_ptr easily by using a custom deleter.
There are also times you might have to move a pointer around without knowing its lifetime explicitly. If the only constraint you know is uniqueness, then unique_ptr is an easy solution. You could almost always do manual memory management also in that case, but it is not automatically exception safe and you could forget to delete. Or the position you have to delete in your code could change. The unique_ptr solution could easily be more maintainable.

Assigning block pointers: differences between Objective-C vs C++ classes

I’ve found that assigning blocks behaves differently with respect to Objective-C class parameters and C++ classes parameters.
Imagine I have this simple Objective-C class hierarchy:
@interface Fruit : NSObject
@end

@interface Apple : Fruit
@end
Then I can write stuff like this:
Fruit *(^getFruit)();
Apple *(^getApple)();
getFruit = getApple;
This means that, with respect to Objective-C classes, blocks are covariant in their return type: a block which returns something more specific can be seen as a “subclass” of a block returning something more general. Here, the getApple block, which delivers an apple, can be safely assigned to the getFruit block. Indeed, if used later, it's always safe to receive an Apple * when you're expecting a Fruit *. And, logically, the converse does not work: getApple = getFruit; doesn't compile, because when we really want an apple, we're not happy getting just a fruit.
Similarly, I can write this:
void (^eatFruit)(Fruit *);
void (^eatApple)(Apple *);
eatApple = eatFruit;
This shows that blocks are contravariant in their argument types: a block that can process a more general argument can be used where a block that processes a more specific argument is needed. If a block knows how to eat a fruit, it knows how to eat an apple as well. Again, the converse is not true, and this will not compile: eatFruit = eatApple;.
This is all good and well — in Objective-C. Now let's try that in C++ or Objective-C++, supposing we have these similar C++ classes:
class FruitCpp {};
class AppleCpp : public FruitCpp {};
class OrangeCpp : public FruitCpp {};
Sadly, these block assignments don't compile any more:
FruitCpp *(^getFruitCpp)();
AppleCpp *(^getAppleCpp)();
getFruitCpp = getAppleCpp; // error!
void (^eatFruitCpp)(FruitCpp *);
void (^eatAppleCpp)(AppleCpp *);
eatAppleCpp = eatFruitCpp; // error!
Clang complains with an “assigning from incompatible type” error. So, with respect to C++ classes, blocks appear to be invariant in the return type and parameter types.
Why is that? Doesn't the same argument I made with Objective-C classes also hold for C++ classes? What am I missing?
This distinction is intentional, due to the differences between the Objective-C and C++ object models. In particular, given a pointer to an Objective-C object, one can convert/cast that pointer to point at a base class or a derived class without actually changing the value of the pointer: the address of the object is the same regardless.
Because C++ allows multiple and virtual inheritance, this is not the case for C++ objects: if I have a pointer to a C++ class and I cast/convert that pointer to point at a base class or a derived class, I may have to adjust the value of the pointer. For example, consider:
class A { int x; };
class B { int y; };
class C : public A, public B { };

B *getC() {
    C *c = new C;
    return c;
}
Let's say that the new C object in getC() gets allocated at address 0x10, so the value of the pointer c is 0x10. In the return statement, that pointer to C needs to be adjusted to point at the B subobject within C. Because B comes after A in C's inheritance list, it will (generally) be laid out in memory after A, so this means adding an offset of 4 bytes (== sizeof(A)) to the pointer: the returned pointer will be 0x14. Similarly, casting a B* to a C* would subtract 4 bytes from the pointer, to account for B's offset within C. When dealing with virtual base classes the idea is the same, but the offsets are no longer compile-time constants: they're accessed through the vtable during execution.
Now, consider the effect this has on an assignment like:
C *(^getC)();
B *(^getB)();
getB = getC;
The getC block returns a pointer to a C. To turn it into a block that returns a pointer to a B, we would need to adjust the pointer returned from each invocation of the block by adding 4 bytes. This isn't an adjustment to the block; it's an adjustment to the pointer value returned by the block. One could implement this by synthesizing a new block that wraps the previous block and performs the adjustment, e.g.,
getB = ^B *() { return getC(); };
This is implementable in the compiler, which already introduces similar "thunks" when overriding a virtual function with one that has a covariant return type needing adjustment. However, with blocks it causes an additional problem: blocks allow equality comparison with ==, so to evaluate whether "getB == getC", we would have to be able to look through the thunk that would be generated by the assignment "getB = getC" to compare the underlying block pointers. Again, this is implementable, but would require a much more heavyweight blocks runtime that is able to create (uniqued) thunks able to perform these adjustments to the return value (as well as for any contravariant parameters). While all of this is technically possible, the cost (in runtime size, complexity, and execution time) outweighs the benefits.
Getting back to Objective-C, the single-inheritance object model never needs any adjustments to the object pointer: there's only a single address to point at a given Objective-C object, regardless of the static type of the pointer, so covariance/contravariance never requires any thunks, and the block assignment is a simple pointer assignment (+ _Block_copy/_Block_release under ARC).
The feature was probably overlooked. There are commits showing that Clang developers cared about making covariance and contravariance work in Objective-C++ for Objective-C types, but I couldn't find anything for C++ itself. The language specification for blocks doesn't mention covariance or contravariance for either C++ or Objective-C.

Passing auto typed vars to function in D?

This doesn't work in D:
void doSomething(auto a, auto b){
// ...
}
I'm just curious, will this ever work? Or is this just technically impossible? (Or just plain stupid?)
Anyway, can this be accomplished in some other way? I suppose I could use ... and loop through the argument list, but I'm kind of making a library for lazy newbie people and want them to be able to create functions easily without really caring about data types. I'm playing with the idea of creating a struct called var, like:
struct var{
    byte type;
    void* data;
    // ...
}
// and overload like all operators so that a lazy devver can do
var doSomething(var a, var b){
    if(a == "hello")
        b = 8;
    var c = "No.:" ~ b ~ " says:" ~ a;
    return c;
}
But my head is already starting to hurt right there. And, I'm kinda feeling I'm missing something. I'm also painfully aware that this is probably what templates are for... Are they? From the little I know, a template would look like this (?)
void doSomething(T, U)( T a, U b){
// ...
}
But now it doesn't look so clean anymore. Maybe I'm getting all this backwards. Maybe my confusion stems from my belief that auto is a dynamic type, comparable to var in JavaScript, when in reality it's something else?
And if it isn't a dynamic type, and this is probably a whole other topic, is it possible to create one? Or is there maybe even an open source lib available? A liblazy maybe?
(PS. Yeah, maybe the lazy devver is me : )
If doSomething is generic over any type of a and b, then the template version is the right thing to do:
void doSomething(T, U)(T a, U b) {
// etc.
}
The compiler will detect the types represented by T and U at compile time and generate the proper code.
I'll just add that something close to your idea of runtime type polymorphism (the var structure) is already implemented in the std.variant module (D2 only).
Also, technically auto is a keyword that does next to nothing: it's used where the syntax requires a storage class but you don't want to name a type. If you don't specify a type, D will infer it for you from the initialization expression. For example, all of these work:
auto i = 5;
const j = 5;
static k = 5;
Stop, don't do that! I fear that if you ignore types early on it will be harder to grasp them, especially since D is not dynamically typed. I believe types are important even in dynamic languages.
You can ignore the type in D2 by using std.variant and it can make the code look like the language is dynamically typed.
You are correct in your template usage, but not about auto. D lets you use this word in many places, such as the return type, since D has a very good system for inferring types. Parameters, however, don't come with the duplicate information needed for the type to be inferred.
Additional info on std.variant:
Looking into this more, it is still more complicated than what you desire. For example, assigning the value to a typed variable requires a method call, and you can't access class methods through the reference:
int b = a.get!(int);
a.goFish(); // Error: Variant type doesn't have a goFish method
Maybe my confusion stems from my belief that auto is a dynamic type, comparable to var i javascript, but when in reality, it's something else?
auto is a storage class. In fact, it's the default storage class; it doesn't mean anything beyond 'this has a storage class'.
Now if the compiler can infer the type unambiguously, then everything is fine. Otherwise, it's an error:
auto i = 42; // An integer literal is inferred to be of type 'int'.
auto j; // Error!
So auto isn't a type at all. Remember that this all has to happen at compile time, not run time. So let's look at a function:
auto add(auto a, auto b)
{
    return a + b;
}
Can the compiler automatically infer the types of a, b, or the return value? Nope. In addition to the primitives that work with the add operator (int, byte, double, and so on), user defined types could overload it, and so on. Because the compiler cannot unambiguously infer any of these types, it is an error. Types must be specified in parameters because in general, they cannot be inferred (unlike declarations, for example).
