In C#, more specifically in Unity, there is a method called Mathf.PingPong. What is the equivalent for Arduino?
I haven't used Unity, but if I understand the definition of that function correctly, you can build it from the modulo operator plus a reflection:
int pingpong(int t, int length)
{
    // wrap t into [0, 2*length), then reflect the second half back down
    // (assumes t >= 0 and length > 0, since C's % can go negative for negative t)
    int r = t % (2 * length);
    return length - abs(r - length);
}
You can probably use fmod() and fabs() if you need floating-point numbers.
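For instance, a floating-point version along the same lines might look like this (just a sketch, again assuming t >= 0; the name pingpongf is mine):
float pingpongf(float t, float length)
{
    float r = fmod(t, 2 * length);    // wrap into [0, 2*length)
    return length - fabs(r - length); // reflect the second half back down
}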
Edit: I assume you mean C when you are talking about Arduino.
I'm writing a memory-heavy CUDA computation program. I need to use mathematical functions, like the ones in math.h, within my kernel. So I did some research and stumbled upon "cuda_fp16.h", which is supposed to add a lot of mathematical functions to use on the device. However, if I want to use one of those math functions (e.g. cos(i), which is part of this library), upon compilation it tells me that I cannot run a __host__ function on the device. It's clear to me that this is impossible, but the cuda_fp16.h library is supposed to add exactly such __device__ functions for math. Within "cuda_fp16.h" itself, there are errors saying that the type __half is not defined.
I have looked at the definition of the cos() that I was using, and it leads me to something within math.h. So my guess is that it just takes the function from there instead of from cuda_fp16.h.
#include "cuda.h"
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include "cuda_fp16.h"
__global__ void computation(double x, double y) // kernel that should compute the cosine of y when called
{
x = cos(y);
}
This is a very simple example of what I am trying to do; just to get the kernel to compute some kind of mathematical function of a value.
I expect the whole thing to be able to compile, since I included the library that would allow such a function to be computed by a __device__ function. However, it does not compile, and tells me that I cannot call the __host__ function cos on the device.
I have found the problem. In the code itself, I had an int instead of a double as an argument for the function. If the argument for cos() is an int, then it uses the <math.h> version of the function instead of the CUDA one. The CUDA one gets called with a float or double. So the code I posted as an example is how it actually should work; I just hadn't realised that I had given an integer as an argument instead of the intended double.
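To make the overload issue concrete, here is a minimal sketch (I changed the signature to double *out so the result is observable; the failing line is left in as a comment):
__global__ void computation(double *out, int i)
{
    // *out = cos(i);        // error: cos(int) resolves to the __host__ <math.h> overload
    *out = cos((double)i);   // OK: the double overload has a __device__ version in CUDA
}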
In Pascal, I understand that one could create a function returning a pointer which can be dereferenced and then assign a value to that, such as in the following (obnoxiously useless) example:
type ptr = ^integer;
var d: integer;
function f(x: integer): ptr;
begin
f := @x;
end;
begin
f(d)^ := 4;
end.
And now d is 4.
(The actual usage is to access part of a quite complicated array of records data structure. I know that a class would be better than an array of nested records, but it isn't my code (it's TeX: The Program) and was written before Pascal implementations supported object-orientation. The code was written using essentially a language built on top of Pascal that added macros which expand before the compiler sees them. Thus you could define some macro m that takes an argument x and expands into thearray[x + 1].f1.f2 instead of writing that every time; the usage would be m(x) := somevalue. I want to replicate this functionality with a function instead of a macro.)
However, is it possible to achieve this functionality without the ^ operator? Can a function f be written such that f(x) := y (no caret) assigns the value y to x? I know that this is stupid and the answer is probably no, but I just (a) don't really like the look of it and (b) am trying to mimic exactly the form of the macro I mentioned above.
References are not first class objects in Pascal, unlike languages such as C++ or D. So the simple answer is that you cannot directly achieve what you want.
Using a pointer as you illustrated is one way to achieve the same effect although in real code you'd need to return the address of an object whose lifetime extends beyond that of the function. In your code that is not the case because the argument x is only valid until the function returns.
You could use an enhanced record with operator overloading to encapsulate the pointer, and so encapsulate the pointer dereferencing code. That may be a good option, but it very much depends on your overall problem, of which we do not have sight.
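For comparison, here is a minimal sketch in C++ (mentioned above as a language where references are first class) of exactly the f(x) := y form you are after, written f(x) = 4; the names data and f are illustrative:
#include <vector>

std::vector<int> data(10);

int& f(int x)
{
    return data[x + 1]; // plays the role of the macro expanding to thearray[x + 1].f1.f2
}

int main()
{
    f(3) = 4; // the call itself is assignable; data[4] is now 4
}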
I am trying to understand how pointers work in Go. On a general level, I have little experience with pointers as I mostly use JavaScript.
I wrote this dummy program:
package main

import "fmt"

func swap(a, b *int) {
fmt.Println("3", &a, &b)
*a, *b = *b, *a
fmt.Println("4", a, b)
}
func main() {
x := 15
y := 2
fmt.Println("1", x, y)
fmt.Println("2", &x, &y)
swap(&x, &y)
fmt.Println("5", x, y)
}
Which prints the following results:
$ go run test.go
1 15 2
2 0x208178170 0x208178178
3 0x2081ac020 0x2081ac028
4 0x208178178 0x208178170
5 2 15
I have several questions:
From what I understand &x gives the address at which x is stored. To get the actual value of x, I need to use *x. Then, I don't understand why &x is of type *int. As *x and &x are both of type *int, their differences are not clear to me at all.
In my swap function, what is the difference between using *a, *b = *b, *a and using a, b = b, a? Both work but I can't explain why...
Why are the addresses different when printed between step 2 and 3?
Why can't I just modify the address directly assigning &b to &a for example?
Many thanks for your help
1) It's a confusion about the language that leads you to think *x and &x are the same type. As Not_a_Golfer pointed out, the uses of * in expressions and type names are different. *x in your main() is invalid syntax, because * in an expression tries to get the value pointed to by the pointer that follows, but x is not a pointer (it's an int).
I think you were thinking of the fact that when you take a pointer to x using &x, the character you add to the type name is * to form *int. I can see how it's confusing that &var gets you *typ rather than &typ. On the other hand, if they'd put & on the type name instead, that would be confusing in other situations. Some trickiness is inevitable, and like human languages, it may be easier to learn by using than discussion alone.
2) Again, turns out the assumption is inaccurate: a, b = b, a swaps the pointers that the swap function looks at, but doesn't swap the values from main's perspective and the last line of output changes to 5 15 2: http://play.golang.org/p/rCXDgkZ9kG
3) swap is printing the address of the pointer variables, not of the underlying integers. You'd print a and b to see the addresses of the integers.
4) I'm going to assume, maybe wrongly, that you were hoping you could swap the locations that arbitrary variables point to with & syntax, as in &x, &y = &y, &x, without ever declaring pointer variables. There's some ambiguity, and if that's not what you were going for, I'm not sure if this part of the answer will help you.
As with many "why can't I..." questions the easy out is "because they defined the language that way". But going to why it's that way a little bit, I think you're required to declare a variable as a pointer at some point (or another type implemented using pointers, like maps or slices) because pointers come with booby traps: you can do something over in one piece of code that changes other code's local variables in unexpected ways, for example. So wherever you see *int appear, it's telling you you might have to worry about (or, that you're able to use) things like nil pointers, concurrent access from multiple pieces of code, etc.
Go is a bit more conservative about making pointer-hood explicit than other languages are: C++, for example, has the concept of "reference parameters" (int& i) where you could do your swap(x,y) with no & or pointers appearing in main. In other words, in languages with reference parameters, you might have to look at the declaration of a function to know whether it will change its arguments. That sort of behavior was a little too surprising/implicit/tricky for the Go folks to adopt.
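For concreteness, that C++ reference-parameter style looks roughly like this (a sketch; the function is named swap_ints here to avoid confusion with std::swap):
void swap_ints(int& a, int& b) // int& makes a and b aliases of the caller's variables
{
    int tmp = a;
    a = b;
    b = tmp;
}

int main()
{
    int x = 15, y = 2;
    swap_ints(x, y); // no & at the call site, yet x and y really are swapped
}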
No getting around that all the referencing and dereferencing takes some thinking, and you might just have to work with it a while to get it; hope all this helps, though.
From what I understand &x gives the address at which x is stored. To get the actual value of x, I need to use *x. Then, I don't understand why &x is of type *int. As *x and &x are both of type *int, their differences are not clear to me at all.
When *int appears in a function declaration, it means "pointer to int". But in a statement, *x is not the type *int; it means dereferencing the pointer. So:
*a, *b = *b, *a
This means swapping the values through the pointers.
Function arguments are passed by copy. So if you want changes to be visible to the caller, you need to pass a pointer to the variable as the argument.
In my swap function, what is the difference between using *a, *b = *b, *a and using a, b = b, a? Both work but I can't explain why...
As I said, a, b = b, a means swapping the pointers, not the values.
Why are the addresses different when printed between step 2 and 3?
This is not a defined result. Variable addresses are assigned by Go's runtime.
Why can't I just modify the address directly assigning &b to &a for example?
It's not impossible. For example:
package main
import (
"unsafe"
)
func main() {
arr := []int{2, 3}
pi := &arr[0] // take address of first element of arr
println(*pi) // print 2
// advance the pointer by sizeof(int) bytes
pi = (*int)(unsafe.Pointer(uintptr(unsafe.Pointer(pi)) + unsafe.Sizeof(int(0))))
println(*pi) // print 3
}
Using the unsafe package, you can manipulate addresses directly. But it's not recommended.
I am wondering how I can find the range between two elements in a QVector using C++.
When using C#, it's easier and looks like the following:
QVector aaa;
aaa.getRange(item1, item2);
Your question is not very clear. By googling what .NET's getRange actually does, it seems to return a given count of elements from a given starting position. QVector<T> QVector::mid(int pos, int length = -1); does the same for QVector.
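For example, a minimal usage sketch (assuming Qt 5, where QVector supports initializer lists):
#include <QVector>

int main()
{
    QVector<int> aaa = {1, 2, 3, 4, 5};
    QVector<int> range = aaa.mid(1, 3); // copies 3 elements starting at index 1: {2, 3, 4}
    return 0;
}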
Hey there,
I have a multidimensional mathematical function, meaning there's an index that I pass to the C++ function to select which single component function I want to evaluate. E.g. let's say I have a mathematical function like this:
f = Vector(x^2*y^2 / y^2 / x^2*z^2)
I would implement it like this:
double myFunc(int function_index)
{
    switch(function_index)
    {
    case 1:
        return PNT[0]*PNT[0]*PNT[1]*PNT[1];
    case 2:
        return PNT[1]*PNT[1];
    case 3:
        return PNT[2]*PNT[2]*PNT[1]*PNT[1];
    default:
        return 0.0; // fallback so that control never falls off the end
    }
}
where PNT is defined globally as double PNT[ NUM_COORDINATES ]. Now I want to implement the derivative of each function with respect to each coordinate, thus generating the derivative matrix (columns = coordinates; rows = single functions). I have already written my kernel, which works so far and which calls myFunc().
The problem is: for calculating the derivative of the mathematical sub-function i with respect to coordinate j, I would use the following code in sequential mode (on a CPU, for example); this is simplified, because usually you would decrease h until you reach a certain precision of the derivative:
f0 = myFunc(i);
PNT[ j ] += h;
derivative = (myFunc(i)-f0)/h;
PNT[ j ] -= h;
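In other words, each entry of the derivative matrix is approximated with the forward difference

$$J_{ij} = \frac{\partial f_i}{\partial x_j} \approx \frac{f_i(\mathbf{x} + h\,\mathbf{e}_j) - f_i(\mathbf{x})}{h},$$

where $\mathbf{e}_j$ is the unit vector along coordinate $j$.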
Now, as I want to do this on the GPU in parallel, the problem comes up: what to do with PNT? As I have to increase a certain coordinate by h, calculate the value, and then decrease it again, how do I do this without 'disturbing' the other threads? I can't modify PNT, because other threads need the 'original' point to modify their own coordinate.
The second idea I had was to save one modified copy of the point for each thread, but I discarded this idea quickly: with some thousands of threads in parallel, this would be slow and probably not realizable at all because of memory limits.
'FINAL' SOLUTION
So what I currently do is the following: the value 'add' is added at runtime (without being stored anywhere), via a preprocessor macro, to the coordinate identified by coordinate_index.
#define X(n) ((coordinate_index == n) ? (PNT[n]+add) : PNT[n])
__device__ double myFunc(int function_index, int coordinate_index, double add)
{
    // Example: f[i] = x[i]^3
    return X(function_index)*X(function_index)*X(function_index);
}
That works nicely and fast: for a derivative matrix with 10000 functions and 10000 coordinates, it takes only about 0.5 seconds. PNT is defined either globally or in constant memory as __constant__ double PNT[ NUM_COORDINATES ];, depending on the preprocessor variable USE_CONST.
The line return X(function_index)*X(function_index)*X(function_index); is just an example where every sub-function follows the same scheme, mathematically speaking:
f = Vector(x0^3 / x1^3 / ... / xN^3)
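To make the calling side concrete, here is a rough sketch of the kind of kernel that drives this (derivKernel, deriv and NUM_FUNCTIONS are placeholder names, not from my actual code):
__global__ void derivKernel(double *deriv, double h)
{
    int i = blockIdx.y * blockDim.y + threadIdx.y; // row: function index
    int j = blockIdx.x * blockDim.x + threadIdx.x; // column: coordinate index
    if (i < NUM_FUNCTIONS && j < NUM_COORDINATES)
    {
        double f0 = myFunc(i, j, 0.0); // unperturbed value
        double f1 = myFunc(i, j, h);   // coordinate j shifted by h; PNT is never written
        deriv[i * NUM_COORDINATES + j] = (f1 - f0) / h;
    }
}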
NOW THE BIG PROBLEM ARISES:
myFunc is a mathematical function which the user should be able to implement however he likes. E.g. he could also implement the following mathematical function:
f = Vector(x0^2*x1^2*...*xN^2 / x0^2*x1^2*...*xN^2 / ... / x0^2*x1^2*...*xN^2)
so that every component function looks the same. As the programmer, you should only have to write the surrounding code once, independent of the implemented mathematical function. Implemented in C++, the above function looks like the following:
__device__ double myFunc(int function_index, int coordinate_index, double add)
{
double ret = 1.0;
for(int i = 0; i < NUM_COORDINATES; i++)
ret *= X(i)*X(i);
return ret;
}
And now the memory access pattern is very bad for performance, because each thread needs to access every element of PNT twice. Surely, in such a case where each function looks the same, I could rewrite the complete algorithm that surrounds the calls to myFunc, but as I stated already: I don't want to write code that depends on the user-implemented function myFunc...
Could anybody come up with an idea of how to solve this problem?
Thanks!
Rewinding back to the beginning and starting with a clean sheet, it seems you want to be able to do two things:
1. compute an arbitrary scalar valued function over an input array
2. approximate the partial derivative of an arbitrary scalar valued function over the input array using first order accurate finite differencing
While the function is scalar valued and arbitrary, it seems that there are, in fact, two clear forms which this function can take:
1. A scalar valued function with scalar arguments
2. A scalar valued function with vector arguments
You appear to have started with the first type of function and have put together code to deal with computing both the function and the approximate derivative, and are now wrestling with the problem of how to deal with the second case using the same code.
If this is a reasonable summary of the problem, then please indicate so in a comment and I will continue to expand it with some code samples and concepts. If it isn't, I will delete it in a few days.
In comments, I have been trying to suggest that conflating the first type of function with the second is not a good approach. The requirements for correctness in parallel execution, and the best way of extracting parallelism and performance on the GPU are very different. You would be better served by treating both types of functions separately in two different code frameworks with different usage models. When a given mathematical expression needs to be implemented, the "user" should make a basic classification as to whether that expression is like the model of the first type of function, or the second. The act of classification is what drives algorithmic selection in your code. This type of "classification by algorithm" is almost universal in well designed libraries - you can find it in C++ template libraries like Boost and the STL, and you can find it in legacy Fortran codes like the BLAS.