I want to draw a sequence diagram for the following example.
I know that I can use a message line when data is exchanged through a function-call interaction.
But in this case no read-function interface is defined, because the target variable is declared as a global so it can be shared with any other component that wants to read it. I think all data flow between components has to be depicted during design, regardless of whether it goes through a function interface or not, and I believe that would give other low-level component designers clear information about the shared variable.
Is there any way to show a shared variable directly in a sequence diagram?
Below is my example in code; what I want to depict is variable_a, which is shared between A and B.
A.h
extern unsigned char variable_a; // shared: other components may read it
A.c
#include "A.h"

unsigned char variable_a;

void func_A(void)
{
    variable_a = input(); // input() is defined elsewhere
}
B.c
#include "A.h"

void func_B(void)
{
    if (variable_a >= 100)
    {
        // do something
    }
    else
    {
        // do something
    }
}
The global variable is an object and can be shown as a separate lifeline. Access to the object can then be shown, for example, with get and set messages.
Remark: This technique can be seen as tedious or overkill, but it has the advantage of being accurate and of visualising coupling that would otherwise remain hidden. Btw, it also encourages good practice: the fewer global variables involved, the fewer additional lifelines ;-)
Additional hint: You may also be interested in this other question about how objects involved in an interaction are known.
Your code could be translated to this diagram:
The global variable_a is the assignment target of the reply message and the variable is also referenced in a guard of an alt-fragment. I think this covers most needs.
It is possible to model a lifeline for the string (or unsigned char). However, in my world a string doesn't have getters or setters. Maybe it could have an asReal():Real or asInteger():Integer operation. I doubt that it would be helpful to model that.
Let's say you have a function that sets an index and then updates a few variables based on the value stored in the array element the index points to. Do you check the index to make sure it is in range? (In an embedded-system environment, Arduino to be specific.)
So far I have made a safe and an unsafe version of every function; is that a good idea? In some of my other code I noticed that having only safe functions results in checking the same conditions multiple times as the libraries get larger, so I started to develop both. The safe function checks the condition and calls the unsafe function, as shown in the example below for the case explained above.
Safe version:
bool RcChannelModule::setFactorIndexAndUpdateBoundaries(factorIndex_T factorIndex)
{
    if (factorIndex < N_FACTORS)
    {
        setFactorIndexAndUpdateBoundariesUnsafe(factorIndex);
        return true;
    }
    return false;
}
Unsafe version:
void RcChannelModule::setFactorIndexAndUpdateBoundariesUnsafe(factorIndex_T factorIndex)
{
    setCurrentFactorIndexUnsafe(factorIndex);
    updateOutputBoundaries();
}
If I am doing this fundamentally wrong, please let me know why and how I could avoid it. Also I would like to know: generally, when you program, do you consider the future user to be a fool, or do you expect them to follow the minimal documentation provided? (The reason I say minimal is that I do not have the time to write proper documentation.)
For reference, the setter that the unsafe version calls:
void RcChannelModule::setCurrentFactorIndexUnsafe(const factorIndex_T factorIndex)
{
    currentFactorIndex_ = factorIndex;
}
Safety checks, such as array index range checks, null checks, and so on, are intended to catch programming errors. When these checks fail, there is no graceful recovery: the best the program can do is to log what happened, and restart.
Therefore, the only time when these checks become useful is during debugging and testing of your code. C++ provides built-in functionality for dealing with this through asserts, which are kept in the debug versions of the code, but compiled out from the release version:
#include <cassert>

void RcChannelModule::setFactorIndexAndUpdateBoundariesUnsafe(factorIndex_T factorIndex) {
    assert(factorIndex < N_FACTORS);
    setCurrentFactorIndexUnsafe(factorIndex);
    updateOutputBoundaries();
}
Note: [When you make a library for external use] an argument-checking version of each external function perhaps makes sense, with non-argument-checking implementations of those and all internal-only functions. If you perform argument checking then do it (only) at the boundary between your library and the client code. But it's pointless to offer a choice to your users, for if you want to protect them from usage errors then you cannot rely on them to choose the "safe" versions of your functions. (John Bollinger)
Do you make safe and unsafe versions of your functions, or just stick to the safe version?
For higher-level code, I recommend one version: a safe one.
In high-level code, with a large set of related functions and data, the combinations of interactions between data and code cannot be fully checked at development time. When an error is detected, the data should be set to indicate an error state, and subsequent use of the data within those functions should be aware of that error state.
For the lowest-level, time-critical routines, I'd go with @dasblinkenlight's answer: create one source file that compiles two ways, for debug and release builds.
Yet keep in mind @Pete Becker's point: is a check really likely to be a performance bottleneck here?
With floating-point routines, use NaN to help keep track of an unrecoverable error.
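For example, a rough sketch of the idea (the averaging function and its name are made up for illustration; only the NaN handling matters):

#include <cmath>
#include <cstddef>

// Return NaN instead of inventing an error code; the error then propagates
// through any later floating-point math on its own.
double average_volts(const double *samples, std::size_t count) {
    if (count == 0) {
        return NAN;              // no data: mark the result as invalid
    }
    double sum = 0.0;
    for (std::size_t i = 0; i < count; i++) {
        sum += samples[i];       // a NaN sample also poisons the sum
    }
    return sum / count;
}

// The caller tests once, at the point where the value is finally used:
//   double v = average_volts(buf, n);
//   if (std::isnan(v)) { /* report the fault, restart, ... */ }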
Lastly, where possible, create functions that do not fail and so avoid the issue. With many functions, though not all, this requires only small code additions, and it often adds only a constant-time performance penalty rather than an O(n) one.
Example: Consider a function to lop off the first character of a string - in place.
// This works fine as long as s[0] != 0
char *slop_1(char *s) {
    size_t len = strlen(s);        // most work is here
    return memmove(s, s + 1, len); // and here
}
Instead, define the function, and code it, to do nothing when s[0] == 0:
char *slop_2(char *s) {
    size_t len = strlen(s);
    if (len > 0) { // negligible additional work
        memmove(s, s + 1, len);
    }
    return s;
}
Similar code can be applied to OP's example. Note that it is "safe", at least within the function. The assert() scheme can still be used to discover development issues, yet the released code, without the assert(), still checks the range.
void RcChannelModule::setFactorIndexAndUpdateBoundaries(factorIndex_T factorIndex)
{
    if (factorIndex < N_FACTORS) {
        setFactorIndexAndUpdateBoundariesUnsafe(factorIndex);
    } else {
        assert(0); // out-of-range index: flags the bug in debug builds
    }
}
Since you tagged this Arduino and embedded, you have a very resource-constrained system, one of the crappiest processors still manufactured.
On such a system you cannot afford extra error handling. It is better to properly document what values the parameters passed to the function must have, then leave the checking of this to the caller.
The caller can then either check this at run time, if needed, or otherwise at compile time with a static assert. Your function would however not be able to implement it as a static assert, since it can't know whether factorIndex is a run-time variable or a compile-time constant.
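For example, a sketch of the caller-side checks (the typedef, the value of N_FACTORS and the constant name are invented for illustration):

#include <cstdint>

using factorIndex_T = std::uint8_t;          // assumed to match the question's typedef
constexpr factorIndex_T N_FACTORS = 4;       // illustrative value

// Documented precondition: factorIndex < N_FACTORS (not checked here).
void setFactorIndexAndUpdateBoundariesUnsafe(factorIndex_T) { /* ... */ }

void caller(factorIndex_T runtimeIndex)
{
    // Compile-time check: the index is a constant, so the check costs nothing at run time.
    constexpr factorIndex_T kThrottleIndex = 2;
    static_assert(kThrottleIndex < N_FACTORS, "factor index out of range");
    setFactorIndexAndUpdateBoundariesUnsafe(kThrottleIndex);

    // Run-time check, only where the index really is a run-time value.
    if (runtimeIndex < N_FACTORS) {
        setFactorIndexAndUpdateBoundariesUnsafe(runtimeIndex);
    }
}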
As for "I have no time to write proper documentation", that's nonsense. It takes far less time to document this function than to post this SO question. You don't necessarily have to write an essay in some Word file. You don't necessarily have to use Doxygen or similar.
But you do need to write the bare minimum of documentation: In the header file, document the purpose and expected values of all function parameters in the form of comments. Preferably you should have a coding standard for how to document such functions. A minimal documentation of public API functions in the form of comments is part of your job as programmer. The code is not complete until this is written.
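Something as small as this in the header would already do (the wording is only an illustration):

// Sets the current factor index and recalculates the output boundaries.
// factorIndex: zero-based index into the factor table; must be < N_FACTORS.
//              The value is not checked; passing a larger index is undefined.
void setFactorIndexAndUpdateBoundariesUnsafe(factorIndex_T factorIndex);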
I watched tons of videos and spent a lot of time reading papers about models and how to work with them, and the general idea is quite clear. However, I still don't get several things that really slow me down.
I realize that the model works only as an interface between the view and the data. However, when I look at sample code, most of the time some sort of data structure is passed to the model, and all the functions in the model use that internal data structure to do the required things: evaluate headers, row count, etc. Here is an example of such a constructor (in this case the model's internal QList is addressBook):
AddressbookModel::AddressbookModel(const QString& addresses, QObject *parent)
    : QAbstractTableModel(parent)
{
    QStringList records = addresses.split('\n');
    foreach (const QString &record, records)
        addressBook.append(splitCSVLine(record));
}
And that looks OK, but it gets very confusing when I try to think about modifying the underlying data somewhere else in the program, while some sort of model is "attached" to that data structure.
For example, let's have a look at this sample code from the learning material:
// addressbook/main.cpp
#include <QtGui>
#include "addressbookmodel.h"

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);

    QFile file("addressbook.csv");
    if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
        return 1;
    QString addresses = QString::fromUtf8(file.readAll());

    AddressbookModel model(addresses);
    QTableView tableView;
    tableView.setModel(&model);
    tableView.show();

    return app.exec();
}
Here, there is a local variable addresses which is then passed to the model. Now the user would be able to see and modify that data. But what if I want to work with that data somewhere else in the program? What if I insert new entries into addresses? I realize that the model will not see those changes, and in this example (and in many others) the underlying data structure is not even passed as a pointer.
So my question is: how do I manage data properly when new data will come from "behind the scenes", not only from the model? Should I do all data management within the model class (implement the required functions, etc.)? Should I somehow pass only pointers to the data to the model? Everything gets even trickier when I think of using proxy models for filtering, because they also work with and somewhat "treat" the data in their own way. Maybe I missed something important about this architecture, but it really stops me here.
Working with Qt's data models can be quite confusing. You will need to take care of most of the "updates" on your own. For example, if you change the model's data in your override of QAbstractItemModel::setData, you will have to emit QAbstractItemModel::dataChanged yourself. The same goes for inserting, removing or moving entries. If you have the time, you should read the link posted by SaZ, but for some quick information about what to emit in which override, you can check the QAbstractItemModel documentation.
Regarding modifying data "behind the scenes":
Best practice is to change the data through your model, i.e. call QAbstractItemModel::setData to change some data. But since this function is designed to receive data in a "displayable format", you are better off creating your own functions. Inside these functions you will need to notify the model of your changes; this way all views will update properly.
For example, if your "AddressRecord" has a name property:
void AddressbookModel::changeName(QModelIndex addressIndex, QString name) {
    // For this example I assume you are using a simple list model with only one column.
    // The addressIndex's column is always 0 in this case, and the parent is invalid.
    addressBook[addressIndex.row()].setName(name);
    emit dataChanged(addressIndex, addressIndex);
}
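Inserting or removing entries needs the same kind of notification, but with the begin/end pairs instead of dataChanged. A rough sketch for appending an entry, again assuming a one-column list model backed by the addressBook list (addRecord and AddressRecord are illustrative names):

void AddressbookModel::addRecord(const AddressRecord &record) {
    // Tell attached views that one new row is about to appear at the end.
    const int row = addressBook.size();
    beginInsertRows(QModelIndex(), row, row);   // invalid parent = top-level rows
    addressBook.append(record);
    endInsertRows();                            // views update themselves now
}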
As you can see, you will have to work with the QModelIndex class one way or another; it represents the position of an entry inside your model.
I hope I could help at least a little bit, but Qt's Model-View framework can be very tricky, especially if you have to add, remove, move or sort your data. To get a deeper understanding of it, I'm afraid you will just have to try it out!
Isn't a unique_ptr essentially the same as a direct instance of the object? I mean, there are a few differences with dynamic inheritance, and performance, but is that all unique_ptr does?
Consider this code to see what I mean. Isn't this:
#include <iostream>
#include <memory>
using namespace std;

void print(int a) {
    cout << a << "\n";
}

int main()
{
    unique_ptr<int> a(new int);
    print(*a);
    return 0;
}
Almost exactly the same as this:
#include <iostream>
#include <memory>
using namespace std;

void print(int a) {
    cout << a << "\n";
}

int main()
{
    int a;
    print(a);
    return 0;
}
Or am I misunderstanding what unique_ptr should be used for?
In addition to the cases mentioned by Chris Pitman, one more case where you will want to use std::unique_ptr is when you instantiate sufficiently large objects: then it makes sense to allocate them on the heap rather than on the stack. The stack size is not unlimited, and sooner or later you might run into a stack overflow. That is where std::unique_ptr is useful.
The purpose of std::unique_ptr is to provide automatic and exception-safe deallocation of dynamically allocated memory (unlike a raw pointer, which must be explicitly deleted in order to be freed, and which is easy to inadvertently leave unfreed when an exception interrupts the code path).
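A minimal sketch of that difference (may_throw just stands in for any call that can fail):

#include <memory>
#include <stdexcept>

void may_throw() { throw std::runtime_error("boom"); }   // any operation that might throw

void leaky() {
    int* p = new int(42);
    may_throw();            // throws: the delete below never runs, the int leaks
    delete p;
}

void safe() {
    auto p = std::make_unique<int>(42);
    may_throw();            // throws: ~unique_ptr still frees the int
}                           // also freed here on a normal return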
Your question, though, is more about the value of pointers in general than about std::unique_ptr specifically. For simple builtin types like int, there generally is very little reason to use a pointer rather than simply passing or storing the object by value. However, there are three cases where pointers are necessary or useful:
Representing a separate "not set" or "invalid" value.
Allowing modification.
Allowing for different polymorphic runtime types.
Invalid or not set
A pointer supports an additional nullptr value indicating that the pointer has not been set. For example, if you want to support all values of a given type (e.g. the entire range of integers) but also represent the notion that the user never input a value in the interface, that would be a case for using a std::unique_ptr<int>, because you could get whether the pointer is null or not as a way of indicating whether it was set (without having to throw away a valid value of integer just to use that specific value as an invalid, "sentinel" value denoting that it wasn't set).
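For instance (the names are made up; the point is only that nullptr stands for "never entered"):

#include <iostream>
#include <memory>

struct Form {
    std::unique_ptr<int> age;   // empty until the user actually types a value

    void setAge(int value) { age = std::make_unique<int>(value); }

    void print() const {
        if (age) {
            std::cout << "age: " << *age << "\n";
        } else {
            std::cout << "age not provided\n";
        }
    }
};

int main() {
    Form f;
    f.print();      // "age not provided"
    f.setAge(0);    // 0 is a legitimate value, not a sentinel
    f.print();      // "age: 0"
}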
Allowing modification
This can also be accomplished with references rather than pointers, but pointers are one way of doing this. If you use a regular value, then you are dealing with a copy of the original, and any modifications only affect that copy. If you use a pointer or a reference, you can make your modifications seen to the owner of the original instance. With a unique pointer, you can additionally be assured that no one else has a copy, so it is safe to modify without locking.
Polymorphic types
This can likewise be done with references, not just with pointers, but there are cases where due to semantics of ownership or allocation, you would want to use a pointer to do this... When it comes to user-defined types, it is possible to create a hierarchical "inheritance" relationship. If you want your code to operate on all variations of a given type, then you would need to use a pointer or reference to the base type. A common reason to use std::unique_ptr<> for something like this would be if the object is constructed through a factory where the class you are defining maintains ownership of the constructed object. For example:
class Airline {
public:
    Airline(const AirplaneFactory& factory);
    // ...

private:
    // ...
    void AddAirplaneToInventory();

    // Can create many different types of airplanes, such as
    // a Boeing747 or an Airbus320.
    const AirplaneFactory& airplane_factory_;
    std::vector<std::unique_ptr<Airplane>> airplanes_;
};

// ...
void Airline::AddAirplaneToInventory() {
    airplanes_.push_back(airplane_factory_.Create());
}
As you mentioned, virtual classes are one use case. Beyond that, here are two others:
Optional instances of objects. My class may delay instantiating an instance of the object. To do so, I need to use memory allocation but still want the benefits of RAII.
Integrating with C libraries or other libraries that love returning naked pointers. For example, OpenSSL returns pointers from many (poorly documented) methods, some of which you need to cleanup. Having a non-copyable pointer container is perfect for this case, since I can protect it as soon as it is returned.
A unique_ptr functions the same as a normal pointer, except that you do not have to remember to free it (in fact it is simply a wrapper around a pointer). After you allocate the memory, you do not have to call delete on the pointer afterwards, since the destructor of unique_ptr takes care of that for you.
Two things come to my mind:
You can use it as a generic exception-safe RAII wrapper. Any resource that has a "close" function can be wrapped with unique_ptr easily by using a custom deleter (see the sketch after these two points).
There are also times you might have to move a pointer around without knowing its lifetime explicitly. If the only constraint you know is uniqueness, then unique_ptr is an easy solution. You could almost always do manual memory management also in that case, but it is not automatically exception safe and you could forget to delete. Or the position you have to delete in your code could change. The unique_ptr solution could easily be more maintainable.
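For the first point, a minimal sketch of the custom-deleter idea using C's FILE API (FilePtr and openFile are just illustrative names):

#include <cstdio>
#include <memory>

// fclose() is called automatically when the FilePtr goes out of scope,
// even if an exception is thrown in between.
using FilePtr = std::unique_ptr<std::FILE, decltype(&std::fclose)>;

FilePtr openFile(const char* path, const char* mode) {
    return FilePtr(std::fopen(path, mode), &std::fclose);
}

int main() {
    FilePtr f = openFile("data.txt", "r");
    if (!f) {
        return 1;               // fopen failed, nothing to clean up
    }
    // ... read from f.get() ...
    return 0;                   // fclose runs here
}

The same shape works for any handle that pairs an "open" call with a "close" call.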
Let's say I have the following hierarchy for my project:
fragment/fragment.go
main.go
And in the fragment.go I have the following code, with one getter and no setter:
package fragment

type Fragment struct {
    number int64 // private variable - lower case
}

func (f *Fragment) GetNumber() *int64 {
    return &f.number
}
And in the main.go I create a Fragment and try to change Fragment.number without a setter:
package main

import (
    "fmt"
    "myproject/fragment"
)

func main() {
    f := new(fragment.Fragment)
    fmt.Println(*f.GetNumber()) // prints 0
    //f.number = 8 // error - number is private
    p := f.GetNumber()
    *p = 4 // works. Now f.number is 4
    fmt.Println(*f.GetNumber()) // prints 4
}
So by using the pointer, I changed the private variable outside of the fragment package. I understand that in, for example, C, pointers help avoid copying large structs/arrays, and they are supposed to enable you to change whatever they're pointing to. But I don't quite understand how they are supposed to work with private variables.
So my questions are:
Shouldn't the private variables stay private, no matter how they are accessed?
How is this compared to other languages such as C++/Java? Is it the case there too, that private variables can be changed using pointers outside of the class?
My Background: I know a bit C/C++, rather fluent in Python and new to Go. I learn programming as a hobby so don't know much about technical things happening behind the scenes.
You're not bypassing any access privileges. If you acquire a *T from an imported package, then you can always mutate *T, i.e. the pointee as a whole, as in an assignment. The designer of the imported package controls what you can get from the package, so the access control is not yours.
The refinement to the above is for structured types (structs): what was said still holds, but finer-grained access control to a particular field is governed by the case of the field's name, even when the field is reached through a pointer to the whole structure. The field name must start with an uppercase letter to be visible outside its package.
Wrt C++: you can achieve the same there with a plain pointer (or a reference): if a public member function hands out a pointer or reference to a private member, any caller can modify that member through it.
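A sketch of the same situation in C++ (the class mirrors the Go Fragment; long long stands in for int64):

#include <iostream>

class Fragment {
public:
    long long* GetNumber() { return &number; }
private:
    long long number = 0;   // "private" field
};

int main() {
    Fragment f;
    *f.GetNumber() = 4;                   // modifies the private field from outside the class
    std::cout << *f.GetNumber() << "\n";  // prints 4
}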
Wrt Java: No, Java has no pointers. Not really comparable to pointers in Go (C, C++, ...).
I'm learning Go, and I'm a little confused about when to use pointers. Specifically, when returning a struct from a function, when is it appropriate to return the struct instance itself, and when is it appropriate to return a pointer to the struct?
Example code:
type Car struct {
    make  string
    model string
}

func Whatever() Car {
    car := Car{"honda", "civic"}
    // ...
    return car
}
What are the situations where I would want to return a pointer, and where I would not want to? Is there a good rule of thumb?
There are two things you want to keep in mind: performance and API.
How is a Car used? Is it an object which has state? Is it a large struct? Unfortunately, it is impossible to answer when I have no idea what a Car is. Truthfully, the best way is to see what others do and copy them. Eventually, you get a feeling for this sort of thing. I will now describe three examples from the standard library and explain why I think they used what they did.
hash/crc32: The crc32.NewIEEE() function returns a pointer type (actually, an interface, but the underlying type is a pointer). An instance of a hash function has state. As you write information to a hash, it sums up the data so when you call the Sum() method, it will give you the state of that one instance.
time: The time.Date function returns a Time struct. Why? A time is a time. It has no state. It is like an integer: you can compare times, perform maths on them, etc. The API designer decided that a modification to a time would not change the current one but make a new one. As a user of the library, if I want the time one month from now, I would want a new time object, not to change the current one I have. A time is also only 3 words in length. In other words, it is small and there would be no performance gain in using a pointer.
math/big: big.NewInt() is an interesting one. We can pretty much agree that when you modify a big.Int, you will often want a new one. A big.Int has no internal state, so why is it a pointer? The answer is simply performance. The programmers realized that big ints are … big. Constantly allocating each time you do a mathematical operation may not be practical. So, they decided to use pointers and allow the programmer to decide when to allocate new space.
Have I answered your question? Probably not. It is a design decision and you need to figure it out on a case by case basis. I use the standard library as a guide when I am designing my own libraries. It really all comes down to judgement and how you expect client code to use your types.
Very loosely, exceptions are likely to show up in specific circumstances:
Return a value when it is really small (no more than a few words).
Return a pointer when the copying overhead would substantially hurt performance (the size is many words).
Often, when you want to mimic an object-oriented style, where you have an "object" that stores state and "methods" that can alter the object, then you would have a "constructor" function that returns a pointer to a struct (think of it as the "object reference" as in other OO languages). Mutator methods would have to be methods of the pointer-to-the-struct type instead of the struct type itself, in order to change the fields of the "object", so it's convenient to have a pointer to the struct instead of a struct value itself, so that all "methods" will be in its method set.
For example, to mimic something like this in Java:
class Car {
    String make;
    String model;

    public Car(String myMake) { make = myMake; }
    public void setMake(String newMake) { make = newMake; }
}
You would often see something like this in Go:
type Car struct {
    make  string
    model string
}

func NewCar(myMake string) *Car {
    return &Car{myMake, ""}
}

func (self *Car) setMake(newMake string) {
    self.make = newMake
}