NSString *string = @"hello";
1) I keep reading that constant NSString does not get released, but this Apple page mentions:
the compiler makes such object constants unique on a per-module basis, and they're never deallocated, though you can retain and release them as you do any other object.
http://developer.apple.com/mac/library/documentation/cocoa/conceptual/strings/Articles/CreatingStrings.html
2) If constant NSString does not get released, would it cause memory problems if used extensively? For example, is this a problem if repeated thousands of times:
NSString *string = @"One";
...
string = @"two";
...
string = @"three";
...
What's a good alternative?
Constant strings are part of your app's binary.
So you do not need to worry about memory management: they exist for the whole execution of the program and cannot be released.
So, I have been doing the Elm track on Exercism.org and I just finished the exercise about the Maybe concept, but one thing is not clear to me yet. What is the purpose of the Just in the definition of Maybe?
type Maybe a = Nothing | Just a
For example, what's the difference between Int and Just Int, and why is an integer not considered a Just Int if I don't put Just in front of it?
More concretely, when I was trying to solve the RPG problem, my first attempt resulted in something like this:
type alias Player =
    { name : Maybe String
    , level : Int
    , health : Int
    , mana : Maybe Int
    }

revive : Player -> Maybe Player
revive player =
    case player.health of
        0 ->
            if player.level >= 10 then
                Player player.name player.level 100 100
            else
                Player player.name player.level 100 Nothing

        _ ->
            Nothing
Just to find out that my mistake was in the if statement, which should return a Just Player, i.e.:
if player.level >= 10 then
    Just (Player player.name player.level 100 (Just 100))
else
    Just (Player player.name player.level 100 Nothing)
If you're coming from a background of dynamic typing like Python then it's easy to see it as pointless. In Python, if you have an argument and you want it to be either an integer or empty, you pass either an integer or None. And everyone just understands that None is the absence of an integer.
Even if you're coming from a poorly-done statically typed language, you may still see it as odd. In Java, every reference datatype is nullable, so String is really "eh, there may or may not be a String here" and MyCustomClass is really "eh, there may or may not really be an instance here". Everything can be null, which results in everyone constantly checking whether things are null at every turn.
There are, broadly speaking, two solutions to this problem: nullable types and optional types. In a language like Kotlin with nullable types, Int is the type of integers. Int can only contain integers. Not null, not a string, not anything else. However, if you want to allow null, you use the type Int?. The type Int? is either an integer or a null value, and you can't do anything integer-like with it (such as add it to another integer) unless you check for null first. This is the most straightforward solution to the null problem, for people coming from a language like Java. In that analogy, Int really is a subtype of Int?, so every integer is an instance of Int?. 3 is an instance of both Int and Int?, and it means both "this is an integer" and also "this is an integer which is optional but exists".
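For instance, here is a rough Kotlin sketch (my own illustration, not part of the original answer) of how Int? behaves:

fun addOne(x: Int): Int = x + 1

fun main() {
    val present: Int? = 3          // an Int? that happens to hold a value
    val absent: Int? = null        // an Int? holding null

    // addOne(present)             // would not compile: an Int? is not accepted where an Int is required
    if (present != null) {
        println(addOne(present))   // smart cast to Int after the null check, prints 4
    }
    println(absent ?: "nothing")   // the Elvis operator supplies a fallback, prints "nothing"
}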
That approach works fine in languages with subtyping. If your language is built up from a typical OOP hierarchy, it's easy to say "well, T is clearly a subtype of T?" and move on. But Elm isn't built that way. There are no subtyping relationships in Elm (there's unification, which is a different thing). Elm is based on Haskell, which is built on the Hindley-Milner model. In this model, every value has a unique type.
In Kotlin, 3 is an instance of Int, and also Int?, and also Number, and also Number?, and so on all the way up to Any? (the top type in Kotlin). There is no equivalent in Elm. There is no "top type" that everything inherits from, and there is no subtyping. So it's not meaningful to say that 3 is an instance of multiple types. In Elm, 3 is an instance of Int. End of story. That's it. If a function takes an argument of type Int, it must be an integer. And since 3 can't be an instance of some other type, we need another way to represent "an integer that may or may not be there".
type Maybe a = Nothing | Just a
Enter optional typing. 3 can't be an optional integer, since it's an Int and nothing else. But Just 3, on the other hand... Just 3 is an entirely different value and its type is Maybe Int. A Just 3 is only valid in situations where an optional integer is expected, since it's not an Int. Maybe a is what's called an optional type; it's a completely separate type which represents the type a, but optional. It serves the same purpose as T? in a language like Kotlin, but it's built up from different foundations.
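As a rough sketch (the function and names are mine, not from the original answer), a Maybe Int has to be unwrapped before you can use it as an Int:

addBonus : Maybe Int -> Int -> Int
addBonus maybeBonus score =
    case maybeBonus of
        Nothing ->
            score

        Just bonus ->
            score + bonus

-- addBonus (Just 3) 10 == 13
-- addBonus Nothing 10 == 10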
Getting into which one is better would derail this post, and I don't think that's important here. I have my opinions, but others have theirs as well. Optional typing and nullable typing are two different approaches to dealing with values that may or may not exist. Elm (and Haskell-like languages) use one, and other languages might use the other. A well-rounded programmer should become comfortable with both.
Why is an integer not considered a Just Int if I don't add the Just word before?
Simply because without the constructor (Just), it's only an integer and not something else. There's no automatic type conversion; you have to be explicit about what you want. Would you also allow writing 100 when you meant the single-element list [100]? Soon you would have no idea what it meant when someone wrote 100.
This is not specific to Maybe and its Just variant; this is the rule for all data types. There is no exception for Maybe, even if the wording is confusing: an Int is just an Int, but not a Just Int.
Just in Elm is a tag, but in this context you can think of it as a function that takes a value of type Int and returns something of type Maybe Int.
type Maybe a = Nothing | Just a
---
Just 123 -- is a `Maybe Int`
This means Maybe is a type with an associated generic type a, similar to C++'s T:
template <class T>
class Maybe { /* ... */ };

using MaybeInt = Maybe<int>;
Nothing and Just a are both just functions (aka constructors) that make a Maybe. In Python it might look like:
def Nothing() -> Maybe:
    return Maybe()  # except in Elm, it knows the returned Maybe came from a
                    # Nothing, so there's some machinery missing here

def Just(some_val) -> Maybe:
    return Maybe(some_val)
So if a function returns a Maybe, the returned value has to be built with one of the two constructors, Nothing or Just.
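For example (an assumed illustration, not from the answer above), a function whose return type is Maybe Int must wrap every successful result in Just and signal the missing case with Nothing:

safeDivide : Int -> Int -> Maybe Int
safeDivide x y =
    if y == 0 then
        Nothing
    else
        Just (x // y)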
As the title suggests: what are the differences? I know that QThreadStorage existed long before the thread_local keyword, which is probably why it exists in the first place - but the question remains: is there any significant difference in what the two do (apart from the extended API of QThreadStorage)?
I am assuming the normal use case of a static variable - using QThreadStorage as a non-static variable is possible, but not recommended.
static thread_local int a;
// vs
static QThreadStorage<int> b;
Well, since Qt is open source you can basically figure out the answers from the Qt sources. I am currently looking at 5.9, and here are some things that I'd classify as significant:
1) Looking at qthreadstorage.cpp, the get/set methods both have this block at the beginning:
QThreadData *data = QThreadData::current();
if (!data) {
    qWarning("QThreadStorage::set: QThreadStorage can only be used with threads started with QThread");
    return 0;
}
So (and maybe this has changed or will change) you can't mix QThreadStorage with anything other than QThread, while the thread_local keyword has no such restriction.
2) Quoting cppreference on thread_local:
thread storage duration. The object is allocated when the thread begins and deallocated when the thread ends.
Looking at qthreadstorage.cpp, the QThreadStorage class actually does not contain any storage for T, the actual storage is allocated upon the first get call:
QVector<void *> &tls = data->tls;
if (tls.size() <= id)
    tls.resize(id + 1);
void **v = &tls[id];
So this means that the lifetime is quite different between the two - with QThreadStorage you actually have more control, since you create the object and set it with the setter, or default-initialize it with the getter. E.g. if the constructor for the type could throw, with QThreadStorage you could catch that; with thread_local I am not even sure.
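As a rough sketch of that difference (assumed usage, not taken from the Qt sources or the question), the QThreadStorage slot is only filled when you call the setter or getter, so a throwing constructor can be handled at that point:

#include <QThreadStorage>
#include <QDebug>

struct Session {
    Session() { /* might throw in a real application */ }
    int value = 0;
};

static thread_local Session tlsSession;    // initialized by the runtime for each thread
static QThreadStorage<Session> qtSession;  // per-thread slot, filled on first use

void useSession()
{
    try {
        if (!qtSession.hasLocalData())
            qtSession.setLocalData(Session()); // construction happens here and can be caught
        qtSession.localData().value++;
    } catch (...) {
        qWarning("could not create the per-thread session");
    }
}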
I assume these two are significant enough not to go any deeper.
NOTE: I am not asking about the difference between pointers and references; for this question it is completely irrelevant.
One thing I couldn't find explicitly stated -- what model does Nim use?
Like C++, where you have values, and with new you create pointers to data (in which case the variable could hold a pointer to a pointer to a pointer to... the data)?
Or like C#, where you have POD types as values, but user-defined objects are referenced (implicitly)?
I only spotted that dereferencing is automatic, like in Go.
To rephrase: you define your new type, let's say Student (with name, university, address). You write:
var student ...?
to make student hold actual data (of Student type/class)
to make student hold a pointer to the data
to make student hold a pointer to a pointer to the data
Or are some of those impossible?
By default the model is passing data by value. When you create a var of a specific type, the compiler will allocate on the stack the required space for the variable. This is expected, as Nim compiles to C, and complex types are just structures. But like in C or C++, you can have pointers too. There is the ptr keyword to get an unsafe pointer, mostly for interfacing with C code, and there is ref to get a garbage-collected safe reference (both documented in the References and pointer types section of the Nim manual).
However, note that even when you specify a proc to pass a variable by value, the compiler is free to decide to pass it internally by reference if it considers it can speed execution and is safe at the same time. In practice the only time I've used references is when I was exporting Nim types to C and had to make sure both C and Nim pointed to the same memory. Remember that you can always check the generated C code in the nimcache directory. You will see then that a var parameter in a proc is just a pointer to its C structure.
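As a small illustrative sketch (assumed example, not part of the answer), a var parameter lets the proc modify the caller's value, and in the generated C it shows up as a pointer:

type
  Counter = object
    value: int

proc bump(counter: var Counter) =
  inc counter.value

when isMainModule:
  var c = Counter(value: 0)
  bump(c)
  echo c.value  # prints 1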
Here is an example of a type with constructors to be created on the stack and passed in by value, and the corresponding pointer like version:
type
  Person = object
    age: int
    name: string

proc initPerson(age: int, name: string): Person =
  result.age = age
  result.name = name

proc newPerson(age: int, name: string): ref Person =
  new(result)
  result.age = age
  result.name = name

when isMainModule:
  var
    a = initPerson(3, "foo")
    b = newPerson(4, "bar")
  echo a.name & " " & $a.age
  echo b.name & " " & $b.age
As you can see the code is essentially the same, but there are some differences:
The typical way to differentiate initialisation is to use init for value types and new for reference types. Also, note that Nim's own standard library doesn't always follow this convention, since some of the code predates it (e.g. newStringOfCap does not return a reference to a string type).
Depending on what your constructors actually do, the ref version allows you to return a nil value, which you can treat as an error, while the value constructor forces you to raise an exception or change the constructor to use the var form mentioned below so you can return a bool indicating success. Failure tends to be treated in different ways.
In C-like languages there is an explicit syntax to access either the memory value of a pointer or the memory value pointed to by it (dereferencing). In Nim there is as well, and it is the empty subscript notation ([]). However, the compiler will attempt to insert those automatically to avoid cluttering the code. Hence, the example doesn't use them. To prove this you can change the code to read:
echo b[].name & " " & $b[].age
Which will work and compile as expected. But the following change will yield a compiler error because you can't dereference a non-reference type:
echo a[].name & " " & $a[].age
The current trend in the Nim community is to get rid of single letter prefixes to differentiate value vs reference types. In the old convention you would have a TPerson and an alias for the reference value as PPerson = ref TPerson. You can find a lot of code still using this convention.
Depending on what exactly your object and constructor need to do, instead of having an initPerson returning the value you could also have an init(x: var Person, ...). But the use of the implicit result variable allows the compiler to optimise this, so it is much more a matter of taste or of the requirement to pass a bool back to the caller.
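A rough sketch of that var form (assumed, not from the answer above), reporting success through the return value:

type
  Person = object   # same Person type as above, repeated so the sketch compiles on its own
    age: int
    name: string

proc init(x: var Person, age: int, name: string): bool =
  if age < 0:
    return false
  x.age = age
  x.name = name
  true

when isMainModule:
  var p: Person
  if init(p, 5, "baz"):
    echo p.name & " " & $p.age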
It can be either.
type Student = object ...
is roughly equivalent to
typedef struct { ... } Student;
in C, while
type Student = ref object ...
or
type Student = ptr object ...
is roughly equivalent to
typedef struct { ... } *Student;
in C (with ref denoting a reference that is traced by the garbage collector, while ptr is not traced).
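A small sketch (illustrative names, not from the answer) showing how the two behave on assignment:

type
  StudentVal = object
    name: string
  StudentRef = ref object
    name: string

var a = StudentVal(name: "Ada")   # a value: assignment copies the data
var b = a
b.name = "Bea"
echo a.name, " ", b.name          # Ada Bea

var c = StudentRef(name: "Cid")   # a traced reference: assignment copies the pointer
var d = c
d.name = "Dee"
echo c.name, " ", d.name          # Dee Dee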
#include <stdio.h>

int main()
{
    const int sum = 100;
    int *p = (int *)&sum;
    *p = 101;
    printf("%d, %d", *p, sum);
    return 0;
}
/*
output
101, 101
*/
p points to a constant integer variable, so why/how does *p manage to change the value of sum?
It's undefined behavior - it's a bug in the code. The fact that the code 'appears to work' is meaningless. The compiler is allowed to make it so your program crashes, or it's allowed to let the program do something nonsensical (such as change the value of something that's supposed to be const). Or do something else altogether. It's meaningless to 'reason' about the behavior, since there is no requirement on the behavior.
Note that if the code is compiled as C++ you'll get an error since C++ won't implicitly cast away const. Hopefully, even when compiled as C you'll get a warning.
p contains the memory address of the variable sum. The syntax *p means the actual value of sum.
When you say
*p=101
you're saying: go to the address p (which is the address where the variable sum is stored) and change the value there. So you're actually changing sum.
You can see const as a compile-time flag that tells the compiler "I shouldn't modify this variable, tell me if I do." It does not enforce anything on whether you can actually modify the variable or not.
And since you are modifying that variable through a non-const pointer, the compiler is indeed going to tell you:
main.c: In function 'main':
main.c:6:16: warning: initialization discards qualifiers from pointer target type
You broke your own promise, the compiler warns you but will let you proceed happily.
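A minimal sketch of that promise-breaking (my own example, assuming GCC-style diagnostics):

#include <stdio.h>

int main(void)
{
    const int sum = 100;

    /* sum = 101; */   /* error: assignment of read-only variable 'sum' */

    int *p = &sum;     /* warning only: initialization discards the 'const' qualifier */
    *p = 101;          /* undefined behavior at run time */

    printf("%d\n", sum);
    return 0;
}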
The behavior is undefined, which means that it may produce different outcomes on different compiler implementations, architecture, compiler/optimizer/linker options.
For the sake of analysis, here it is:
(Disclaimer: I don't know compilers. This is just a logical guess at how the compiler may choose to handle this situation, from a naive assembly-language debugger perspective.)
When a constant integer is declared, the compiler has the choice of making it addressable or non-addressable.
Addressable means that the integer value will actually occupy a memory location, such that:
The lifetime will be static.
The value might be hard-coded into the binary, or initialized during program startup.
It can be accessed with a pointer.
It can be accessed from any binary code that knows of its address.
It can be placed in either read-only or writable memory section.
For everyday CPUs, non-writeability is enforced by the memory management unit (MMU). Messing with the MMU is messy, if not impossible, from user-space, and it is not worth it for a mere const integer value.
Therefore, it will be placed into writable memory section, for simplicity's sake.
If the compiler chooses to place it in non-writable memory, your program will crash (access violation) when it tries to write to the non-writable memory.
Setting aside microcontrollers - you would not have asked this question if you were working on microcontrollers.
Non-addressable means that it does not occupy a memory address. Instead, every piece of code that references the variable (i.e. uses the value of that integer) will receive an r-value, as if you did a find-and-replace to change every instance of sum into the literal 100.
In some cases, the compiler cannot make the integer non-addressable: if the compiler knows that you're taking the address of it, then surely the compiler knows that it has to put that value in memory. Your code belongs to this case.
Yet, with an aggressively optimizing compiler, it is entirely possible to make it non-addressable: the variable could have been eliminated and the printf turned into int main() { printf("%s, %s", (b1? "100" : "101"), (b2? "100" : "101")); return 0; } where b1 and b2 will depend on the mood of the compiler.
The compiler will sometimes take a split decision - it might do one of those, or even something entirely different:
Allocate a memory location, but replace every reference with a constant literal. When this happens, a debugger will tell you the value is zero but any code that uses that location will appear to contain a hard-coded value.
Some compilers may be able to detect that the cast causes undefined behavior and refuse to compile.
I thought these two methods were (memory-allocation-wise) equivalent; however, I was seeing "out of scope" and "NSCFString" in the debugger if I used what I thought was the convenient method (commented out below), and when I switched to the more explicit method my code stopped crashing! Notice that I am getting the string that is being stored in my container from a sqlite3 query.
p = (char*) sqlite3_column_text (queryStmt, 1);
// GUID = (NSString*) [NSString stringWithUTF8String: (p!=NULL) ? p : ""];
GUID = [[NSString alloc] initWithCString:(p!=NULL) ? p : "" encoding:NSUTF8StringEncoding];
Also note that if I looked at the values in the debugger and printed them with NSLog they looked correct; however, I don't think new memory was allocated and the value copied. Instead the memory pointer was stored - went out of scope - was referenced later - crash!
If you need to keep a reference to an object around after a method returns, then you need to take ownership of the object. So, if your variable GUID is an instance variable or some kind of global, you will need to take ownership of the object. If you use the alloc/init method, you have ownership of the object returned since you used alloc. You could just as easily use the stringWithUTF8String: method, but you will need to take ownership explicitly by sending a retain message. So, assuming GUID is some kind of non-method-scoped variable:
GUID = [[NSString stringWithUTF8String:"Some UTF-8 string"] copy];
(either copy or retain can be used here to take ownership, but copy is more common when dealing with strings).
Also, your code may be a little easier to read if you did something like:
GUID = p ? [[NSString stringWithUTF8String:p] copy] : @"";