(CAPL) How do I assign an array length using a parameter? - Vector

void func(int a){
    byte arr[a];
}
This code is not working. How do I assign the array length using a parameter?

In CAPL you have several options, but first you should step back and ask yourself whether you really need a variable array size at runtime. Measurement performance is the main concern here; declaring a suitable array size at design time may be the safer approach.
A global array with a parameterized size could look like this:
variables
{
    int arraySize = 256;
    byte arr[arraySize];
}
From the docs,
Declaration of arrays (arrays, vectors, matrices) is permitted in CAPL. They are used and initialized in a manner analogous to C language.
In C, array size is constant:
Array is a type consisting of a contiguously allocated nonempty sequence of objects with a particular element type. The number of those objects (the array size) never changes during the array lifetime. [source]
This is why your code is not working: you cannot create an array of runtime-based size. Similarly, from the same source
Variable-length arrays
If expression is not an integer constant expression, the declarator is for an array of variable size. Each time the flow of control passes over the declaration, expression is evaluated (and it must always evaluate to a value greater than zero), and the array is allocated (correspondingly, lifetime of a VLA ends when the declaration goes out of scope). The size of each VLA instance does not change during its lifetime, but on another pass over the same code, it may be allocated with a different size.
This is why you should be able to define a parameterized array as shown above. Even if arraySize changes somewhere in the code, arr will stay at 256 elements for the whole execution of your CAPL script.
void func(int a){
    byte arr[a];
}
This will throw an error, because int a is not a compile-time constant, violating the requirements above. What you can do is memcpy parts of a larger array to a location of your choice, for example a smaller array, or employ a number of "buffer" arrays, as you often see in CAPL scripts.
The takeaway, as I understood it: use a larger array, and be precise about where you put your information inside it. You must be precise, because every element in the array contains some kind of data; at initialization most of it is nonsense, and there is no safeguard against this noise.

Related

Why use pointers and references in CODESYS V3?

My question is: what are the benefits of using pointers and references?
I am new to CODESYS. In my previous job I programmed in TIA Portal (Siemens) and Sysmac Studio (Omron) and never came across pointers or anything similar. I think I understand how they work, but I am not sure when I should use them myself.
For example, I just received a function block from a supplier:
Why don't they just have an array for input and output?
First of all, if you have ever used the VAR_IN_OUT declaration, then you have already used references, since that is equivalent to a VAR with REFERENCE TO.
As for the uses, there are mainly 4 that I can think of right now:
Type Punning, which you can also achieve using a UNION, but you may not want to have to create a union for every single reinterpretation cast in your code.
TL;DR: To save memory and copy time. Whenever you pass some data to a function/function block, it gets copied. This is not a big problem if your PLC has enough CPU power and memory; however, if you are dealing with especially large data on a low-end PLC, you may either exceed real-time execution constraints or run out of memory. When you pass a pointer/reference, however, no matter how big the data is, only the pointer/reference gets copied and passed, which is 4 bytes on a 32-bit system and 8 bytes on a 64-bit one.
In C-style languages you'd use pointers/references when you want a function to return multiple values without the hassle of creating a custom structure every time. You can do the same here too; however, in CODESYS a function can have multiple outputs, for example:
VAR_OUTPUT
    out1 : INT; (* 1st output variable *)
    out2 : INT; (* 2nd output variable *)
    //...
END_VAR
And finally, as I mentioned at the very beginning, when you want to pass some data that needs to be modified in the function itself (in other words, wherever you can use VAR_IN_OUT), you can also use pointers/references. One special case where you will have to use a pointer is when you have a function block that receives some data in the FB_Init (initialization/construction) method and stores it locally. In such a case you would have a pointer as a local variable in the function block and take the address of the variable in the FB_Init method. The same applies if you have a structure that needs to reference another structure or some data.
PS: There are probably some other uses I missed. One of the main uses in other languages is dynamic memory allocation, but in CODESYS this is disabled by default, not all PLCs support it, and hardly anyone (that I know of) uses it.
EDIT: Though this has been accepted, I want to give a real-life example of us using pointers:
Suppose we want to have a Function Block that calculates the moving average on a given number series. A simple approach would be something like this:
FUNCTION_BLOCK MyMovingAvg
VAR_INPUT
    nextNum: INT;
END_VAR
VAR_OUTPUT
    avg: REAL;
END_VAR
VAR
    window: ARRAY [0..50] OF INT;
    currentIndex: UINT;
END_VAR
However, this has the problem that the moving window size is static and predefined. If we wanted to have averages for different window sizes we would either have to create several function blocks for different window sizes, or do something like this:
FUNCTION_BLOCK MyMovingAvg
VAR CONSTANT
    maxWindowSize: UINT := 100;
END_VAR
VAR_INPUT
    nextNum: INT;
    windowSize: UINT (0..maxWindowSize);
END_VAR
VAR_OUTPUT
    avg: REAL;
END_VAR
VAR
    window: ARRAY [0..maxWindowSize] OF INT;
    currentIndex: UINT;
END_VAR
where we would only use the elements of the array from 0 to windowSize and ignore the rest. However, this also has problems: we can't use window sizes larger than maxWindowSize, and there is potentially a lot of wasted memory if maxWindowSize is set high.
There are 2 ways to get a truly general solution:
Use dynamic allocations. However, as I mentioned previously, this isn't supported by all PLCs, is disabled by default, has drawbacks (you'll have to split your memory into two chunks), is hardly used, and is not very CODESYS-like.
Let the user define the array of whatever size they want and pass the array to our function block:
FUNCTION_BLOCK MyMovingAvg
VAR_INPUT
    nextNum: INT;
END_VAR
VAR_OUTPUT
    avg: REAL;
END_VAR
VAR
    windowPtr: POINTER TO INT;
    windowSize: DINT;
    currentIndex: UINT;
END_VAR
METHOD FB_Init: BOOL
VAR_INPUT
    bInitRetains: BOOL;
    bInCopyCode: BOOL;
END_VAR
VAR_IN_OUT // basically REFERENCE TO
    window_buffer: ARRAY [*] OF INT; // array of any size
END_VAR

THIS^.windowPtr := ADR(window_buffer);
THIS^.windowSize := UPPER_BOUND(window_buffer, 1) - LOWER_BOUND(window_buffer, 1) + 1;
// usage:
PROGRAM Main
VAR
    avgWindow: ARRAY [0..123] OF INT; // whatever size you want!
    movAvg: MyMovingAvg(window_buffer := avgWindow);
END_VAR

movAvg(nextNum := 5);
movAvg.avg;
The same principle can be applied to any function block that operates on arrays (for example, we also use it for sorting). Moreover, you may similarly want a function that works on any integer/floating-point number. For that you can use one of the ANY types, which is basically a structure that holds a pointer to the first byte of the data, the size of the data (in bytes), and a type enum.

Pointer vs value receiver in Go | heap.Interface vs sort.Interface

I came across the PriorityQueue example for heap.Interface in the container/heap package.
Link: https://golang.org/pkg/container/heap/#Interface
For the Push() and Pop() methods required by heap.Interface, the implementation uses a pointer receiver. But for the Swap() method required by sort.Interface, the implementation uses a value receiver.
Why the difference?
As per my understanding, Push() and Pop() are implemented on the pointer type because they need to change the underlying data. But by that logic, Swap() should also be implemented on the pointer type.
How and why does the Swap() implementation work with a value receiver, while Push() and Pop() do not?
A pointer receiver is needed when the value passed needs to be modified. In the case of Swap, the value itself (which is a slice) doesn't get modified, although the array backing the slice does get modified.
In the case of Push and Pop, the slice does get modified since in both cases the length changes (and in the case of Push the underlying array may get replaced by a new one if it has reached its capacity).
Take a look at the Push implementation:
func (pq *PriorityQueue) Push(x interface{}) {
    n := len(*pq)
    item := x.(*Item)
    item.index = n
    *pq = append(*pq, item) // Here, the slice is assigned a new value
}
Push (and Pop) modify the underlying slice as well as the slice elements for the priority queue, whereas Swap will only swap two elements in the slice, and will not change the slice itself. Thus, Swap can work with a value receiver.
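For comparison, the Swap implementation from the same PriorityQueue example looks roughly like this; it can use a value receiver because it only writes elements through the shared backing array and never changes the slice header itself:
func (pq PriorityQueue) Swap(i, j int) {
    pq[i], pq[j] = pq[j], pq[i]
    pq[i].index = i
    pq[j].index = j
}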
Internally, a slice variable holds a length, a capacity, and a pointer to the data. Swapping items changes the data, but doesn't change any of the fields in the slice header. Russ Cox explained this in a blog post.
Adding items to the slice, like to push something onto a heap, may require the array to be re-allocated, which will change the capacity and the location that needs to be pointed to.
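To see the difference in isolation, here is a minimal, self-contained sketch (the IntSlice type and the method names are made up for illustration): element writes through a value receiver are visible to the caller, an append inside a value-receiver method is lost, and a pointer receiver is needed to update the caller's slice header.

package main

import "fmt"

// IntSlice is a hypothetical named slice type used only for this demo.
type IntSlice []int

// Value receiver: writes go through the copied slice header to the same
// backing array, so the caller sees them.
func (s IntSlice) SwapFirstTwo() {
    s[0], s[1] = s[1], s[0]
}

// Value receiver: append only updates the method's copy of the slice
// header, so the caller never observes the new length.
func (s IntSlice) PushBroken(x int) {
    s = append(s, x)
}

// Pointer receiver: the caller's slice header itself is updated.
func (s *IntSlice) Push(x int) {
    *s = append(*s, x)
}

func main() {
    s := IntSlice{1, 2, 3}

    s.SwapFirstTwo()
    fmt.Println(s) // [2 1 3] -- the element swap is visible

    s.PushBroken(4)
    fmt.Println(s) // [2 1 3] -- the appended element is lost

    s.Push(4)
    fmt.Println(s) // [2 1 3 4] -- the pointer receiver grows the slice
}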
You may find this answer on pointers vs. values generally to be useful. There are other types, like channels and maps, that contain references such that you don't need a pointer to mess with the data underneath.

Does a function parameter that accepts a string reference point directly to the string variable or the data on the heap in Rust

I've taken this picture and code from The Rust Book.
Why does s point to s1 rather than just the data on the heap itself?
If so, is this how it works? How does s point to s1? Is it allocated memory with a ptr field that contains the memory address of s1? Does s1 then, in turn, point to the data?
In s1, I appear to be looking at a variable with a pointer, length, and capacity. Is only the ptr field the actual pointer here?
This is my first systems level language, so I don't think comparisons to C/C++ will help me grok this. I think part of the problem is that I don't quite understand what exactly pointers are and how the OS allocates/deallocates memory.
fn main() {
    let s1 = String::from("hello");
    let len = calculate_length(&s1);
    println!("The length of '{}' is {}.", s1, len);
}

fn calculate_length(s: &String) -> usize {
    s.len()
}
Memory is just a huge array, which can be indexed by any offset (e.g. a u64). This offset is called an address, and a variable that stores an address is called a pointer. However, usually only some small part of memory is allocated, so not every address is meaningful (or valid). Allocation is a request to make a (sequential) range of addresses meaningful to the program (so it can access/modify them). Every object (and by object I mean a value of any type) is located in allocated memory (because non-allocated memory is meaningless to the program).
A reference is actually a pointer that is guaranteed (by the compiler) to be valid (i.e. derived from the address of some object known to the compiler). Take a look at the std docs as well.
Here is an example of these concepts (playground):
// This is, in a real program, defined implicitly,
// but for the sake of the example made explicit.
// If you want to play around with the example,
// don't forget to replace `usize::max_value()`
// with a smaller value.
let mut memory = [uninitialized::<u8>(); usize::max_value()];
// Every value of the `usize` type is a valid address.
const SOME_ADDR: usize = 1234usize;
// Any address can safely be bound to a raw pointer,
// which *may* point to both valid and invalid memory.
let ptr: *const u8 = unsafe { transmute(SOME_ADDR) };
// You find an offset into our memory knowing an address.
let other_ptr: *mut u8 = unsafe { memory.as_mut_ptr().add(SOME_ADDR) };
// Oversimplified allocation; in real life the OS gives out a block of memory.
unsafe { *other_ptr = 15; }
// Now it is *meaningful* (i.e. there is no undefined behavior) to make a reference.
let refr: &u8 = unsafe { &*other_ptr };
I hope that clears most things up, but let's cover the questions explicitly anyway.
Why does s point to s1 rather than just the data on the heap itself?
s is a reference (i.e. a valid pointer), so it stores the address of s1. It might (and probably will) be optimized by the compiler to occupy the same piece of memory as s1, but logically it remains a separate object that points to s1.
How does s point to s1? Is it allocated memory with a ptr field that contains the memory address of s1?
The chain of "pointing" persists: calling s.len() is internally converted to s.deref().len(), and accessing some byte of the string data is converted to s.deref().ptr.add(index).deref().
There are 3 blocks of memory displayed in the picture: &s, &s1, and s1.ptr are different (unless optimized) memory addresses, and all of them live in allocated memory. The first two are actually stored in memory that is pre-allocated (i.e. before the main function is called), known as the stack, and by convention it is usually not called allocated memory (a convention I ignored in this answer). The s1.ptr pointer, in contrast, points to memory that was allocated explicitly by the user program (i.e. after entering main).
In s1, I appear to be looking at a variable with a pointer, length, and capacity. Is only the ptr field the actual pointer here?
Yes, exactly. Length and capacity are just common unsigned integers.

Memory leak in golang slice

I just started learning Go. While going through SliceTricks, a couple of points were very confusing. Can anyone help me clarify them?
To cut elements from a slice, the following is given:
Approach 1:
a = append(a[:i], a[j:]...)
but there is a note saying that it may cause memory leaks if pointers are used, and the recommended way is:
Approach 2:
copy(a[i:], a[j:])
for k, n := len(a)-j+i, len(a); k < n; k++ {
    a[k] = nil // or the zero value of T
}
a = a[:len(a)-j+i]
Can anyone help me understand how the memory leak happens?
I understand that a subslice is backed by the same underlying array. My thought is that, pointers or not, we always have to follow approach 2.
Update after @icza's and @Volker's answers:
Let's say you have a struct:
type Books struct {
    title  string
    author string
}
var Book1 Books
var Book2 Books
/* book 1 specification */
Book1.title = "Go Programming"
Book1.author = "Mahesh Kumar"
Book2.title = "Go Programming"
Book2.author = "Mahesh Kumar"
var bkSlice = []Books{Book1, Book2}
var bkprtSlice = []*Books{&Book1, &Book2}
Now doing
bkSlice = bkSlice[:1]
bkSlice still holds Book2 in its backing array, which stays in memory even though it is no longer needed.
So do we need to do
bkSlice[1] = Books{}
so that it can be GCed? I understand that pointers have to be nil-ed, as the slice would otherwise hold unnecessary references to objects outside the backing array.
The simplest case can be demonstrated with a slice expression.
Let's start with a slice of *int pointers:
s := []*int{new(int), new(int)}
This slice has a backing array with a length of 2, and it contains 2 non-nil pointers, pointing to allocated integers (outside of the backing array).
Now if we reslice this slice:
s = s[:1]
Length will become 1. The backing array (holding 2 pointers) is not touched; it still holds 2 valid pointers. Even though we don't use the 2nd pointer now, since it is in memory (in the backing array), the pointed object (which is memory space for storing an int value) cannot be freed by the garbage collector.
The same thing happens if you "cut" multiple elements from the middle. If the original slice (and its backing array) was filled with non-nil pointers, and if you don't zero them (with nil), they will be kept in memory.
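A minimal sketch of the remedy described above (zero the element you are about to drop so the backing array no longer references the pointed object):

package main

import "fmt"

func main() {
    // Two pointers to separately allocated ints.
    s := []*int{new(int), new(int)}

    // Drop the reference held by the backing array first...
    s[1] = nil
    // ...then reslice. The second int is now unreachable and can be
    // garbage collected, even though the backing array still has room for it.
    s = s[:1]

    fmt.Println(len(s), cap(s)) // 1 2
}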
Why isn't this an issue with non-pointers?
Actually, this is an issue with all pointer and "header" types (like slices and strings), not just pointers.
If you had a slice of type []int instead of []*int, then slicing it would just "hide" elements of type int, which must stay in memory as part of the backing array regardless of whether any slice still contains them. The elements are not references to objects stored outside the array, whereas pointers refer to objects outside the array.
If the slice contains pointers and you nil them before the slicing operation, then, provided there are no other references to the pointed objects (i.e. the array was the only one holding those pointers), they can be freed; they will not be kept alive merely because a slice (and thus its backing array) still exists.
Update:
When you have a slice of structs:
var bkSlice = []Books{Book1, Book2}
If you slice it like:
bkSlice = bkSlice[:1]
Book2 will become unreachable via bkSlice, but it will still be in memory (as part of the backing array).
You can't nil it, because nil is not a valid value for structs. You can, however, assign its zero value to it like this:
bkSlice[1] = Books{}
bkSlice = bkSlice[:1]
Note that a Books struct value will still be in memory, being the second element of the backing array, but that struct will be a zero value and thus will not hold string references, so the original book author and title strings can be garbage collected (if nothing else references them; more precisely, the byte arrays referred to by the string headers).
The general rule is "recursive": you only need to zero elements that refer to memory located outside of the backing array. So if you have a slice of structs that only have, e.g., int fields, you do not need to zero them; in fact, it's just unnecessary extra work. If the struct has fields that are pointers, or slices, or other struct types that contain pointers or slices, etc., then you should zero them in order to remove the references to memory outside of the backing array.
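To tie the pieces together, here is a small, self-contained sketch (the cut helper and the sample data are hypothetical, following approach 2 from the question) that removes a pointer element and zeroes the tail so the dropped *Books value can be collected:

package main

import "fmt"

type Books struct {
    title  string
    author string
}

// cut removes a[i:j] following the SliceTricks recipe quoted in the question:
// copy the tail down, nil out the now-unused elements so the backing array
// drops its references, then shrink the slice.
func cut(a []*Books, i, j int) []*Books {
    copy(a[i:], a[j:])
    for k, n := len(a)-j+i, len(a); k < n; k++ {
        a[k] = nil // let the dropped *Books be garbage collected
    }
    return a[:len(a)-j+i]
}

func main() {
    shelf := []*Books{
        {title: "Go Programming", author: "Mahesh Kumar"},
        {title: "Another Title", author: "Another Author"},
        {title: "Third Title", author: "Third Author"},
    }

    shelf = cut(shelf, 1, 2) // remove the middle book
    fmt.Println(len(shelf), cap(shelf)) // 2 3
}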

Why are the keys and values of a borrowed HashMap accessed by reference, not value?

I have a function that takes a borrowed HashMap and I need to access values by keys. Why are the keys and values taken by reference, and not by value?
My simplified code:
fn print_found_so(ids: &Vec<i32>, file_ids: &HashMap<u16, String>) {
    for pos in ids {
        let whatever: u16 = *pos as u16;
        let last_string: &String = file_ids.get(&whatever).unwrap();
        println!("found: {:?}", last_string);
    }
}
Why do I have to specify the key as a reference, i.e., file_ids.get(&whatever).unwrap() instead of file_ids.get(whatever).unwrap()?
As I understand it, the last_string has to be of type &String, meaning a borrowed string, because the owning collection is borrowed. Is that right?
Similar to the above point, am I correct in assuming pos is of type &u16 because it takes borrowed values from ids?
Think about the semantics of passing parameters as references or as values:
As reference: no ownership transfer. The called function merely borrows the parameter.
As value: the called function takes ownership of the parameter, and the caller may not use it anymore.
Since the function HashMap::get does not need ownership of the key to find an element, the less restrictive passing method was chosen: by reference.
Also, it does not return the value of the element, only a reference. If it returned the value, the value inside the HashMap would no longer be owned by the HashMap and thus be inaccessible in the future.
TL;DR: Rust is not Java.
Rust may have high-level constructs and data structures, but it is at heart a low-level language, as illustrated by one of its guiding principles: you don't pay for what you don't use.
As a result, the language and its libraries will as much as possible attempt to eliminate any cost that is superfluous, such as allocating memory needlessly.
Case 1: Taking the key by value.
If the key is a String, this means allocating (and deallocating) memory for each and every look-up, when you could use a local buffer that is only allocated once and for all.
Case 2: Returning by value.
Returning by value means that either:
you remove the entry from the container to give it to the user
you copy the entry in the container to give it to the user
The latter is obviously inefficient (a copy means an allocation); the former means that if the user wants the value back in the container, another insertion has to take place, which means another look-up, etc., and is also inefficient.
In short, returning by value is inefficient in this case.
Rust, therefore, makes the most logical choice as far as efficiency is concerned and passes and returns by reference whenever practical.
While it seems unhelpful when the key is a u16, think about how it would work with a more complex key such as a String.
In that case taking the key by value would often mean having to allocate and initialise a new String for each lookup, which would be expensive.
