(M68k) Why is my value not getting passed into D0? - recursion

I am writing a program with a subroutine; this is basically the pseudocode.
int findmin(int* vals, int count) {
    if (count == 1) {
        return vals[0];
    } else {
        int minrest = findmin(vals + 1, count - 1);
        if (minrest < vals[0]) {
            return minrest;
        } else {
            return vals[0];
        }
    }
}
I have to translate this into M68k assembly. The pictures show what I have so far. I think my logic is correct, but all this does is print out my header and nothing else. I suspect I am not correctly storing my result in D0 where it should be. Is there a step I am missing, or something I am just completely off on? My prog4.s is my main, which calls the subroutine.

The main code calls the subroutine like this:
pea vals
move.w D1, -(SP)
jsr findmin
adda.l #6, SP
The findmin subroutine makes the next recursive call:
move.l A0, -(SP)
move.w D1, -(SP)
jsr findmin
adda.l #6, SP
The findmin subroutine retrieves its parameters like:
findmin:
link A6, #0
...
move.w 8(A6), D1
move.l 12(A6), A0 <<< Here is the error!
But wait, because of how you placed things on the stack, this is how the stack is laid out:
       A6    RTS   D1  A0
lower  ====  ====  ==  ====  higher
       ^
       SP == A6
The above shows that the word D1 is at 8(A6) and that the longword A0 is at 10(A6). That's 10 instead of 12.


Lex && Yacc compiler homework

Hello (my English isn't very good, I hope you will understand). I have an assignment to make a compiler. I have already built the language in lex and yacc, but I'm pretty stuck: our teacher asked us to build an AST from the language and print it in pre-order. He gave us an example for this input:
function void foo(int x, y, z; real f) {
    if (x > y) {
        x = x + f;
    }
    else {
        y = x + y + z;
        x = f * 2;
        z = f;
    }
}
The pre-order output of the AST should be:
(CODE
(FUNCTION
foo
(ARGS
(INT x y z)
(REAL f)
)
(TYPE VOID)
(BODY
(IF-ELSE
(> x y)
(BLOCK
(= x
(+ x f)
)
)
(BLOCK
(= y
(+
(+ x y)
z
)
)
(
(= x
(* f 2)
)
(= z f)
)
)
)
)
My question is: how should I build the tree? I mean, which token goes to the left and which goes to the right, so that I get the same output? Something like:
makeTree($1, $2, $3);
node, left, right
Help please :)
Stephen Johnson wrote a technical paper to accompany yacc about 42 years ago. Have you read it, and do you understand it?
If yes, a syntax rule like:
expr : expr '+' expr { $$ = node( '+', $1, $3 ); }
Here, node() is effectively creating an abstract syntax tree node, and each reduction performed by yacc is an opportunity to build this tree from the bottom up. That is the most important thing to know about yacc: it builds from the bottom up, and you need to construct your data structures likewise.
When the parse is complete ( for whatever version of complete your grammar yields ), the resultant value ($$) is the root of your syntax tree.
Follow-up:
You might want to devise a node data structure something like this:
typedef struct Node Node;
typedef struct NodeList NodeList;

struct NodeList {
    int Num;
    Node *List;
};

struct Node {
    int Type;
    union {
        unsigned u; unsigned long ul; char *s, ... ;
        Variable *var;
        Node *Expression;
        NodeList *List;
    } Operands[3];
};
With this you could devise a node of type '+', which defines two Operands, corresponding to the left and right sides of the '+' operator.
You could also have a node of type IF, which had three operands:
a conditional Expression
a List of Nodes to perform if the conditional was true
a List of Nodes to perform if the conditional was false.
You could have a node of type Func, which had three operands:
A Type of return value.
A List of arguments.
A List of Nodes comprising the body of the function
I would give more examples but formatting lists with this UI is as much fun as kicking a whale down a beach.

What pointer type to return to NASM from a ctypes callback?

I am calling a NASM 64 dll from ctypes; it includes a callback to the SciPy function integrate.dblquad. The callback receives a pointer to an array of two doubles and returns a pointer to an array of two doubles. The callback function executes correctly and shows the correct results in Python, but I am having trouble returning the pointer back to NASM.
I posted this question on May 12 as a ctypes question at What is the correct pointer type to return from a ctypes callback?. I received an answer based on C language with a solution I can't use in NASM. I hope someone can help with how this could work in NASM rather than C.
Here is the part of the ctypes code that calls the dll and executes the callback:
CA_data1 = (ctypes.c_double * len(data1))(*data1)
CA_data2 = (ctypes.c_double * len(data2))(*data2)
hDLL = ctypes.WinDLL("C:/NASM_Test_Projects/SciPy_Test/SciPy_Test.dll")
CallName = hDLL.Main_Entry_fn
CallName.argtypes = [ctypes.POINTER(ctypes.c_double),ctypes.POINTER(ctypes.c_double),ctypes.POINTER(ctypes.c_double),ctypes.POINTER(ctypes.c_longlong)]
CallName.restype = ctypes.c_double
ret_ptr = CallName(CA_data1,CA_data2,length_array_out,lib_call)
Here is the callback code:
from scipy.integrate import dblquad
import ctypes
def LibraryCall(ptr):
    n = ctypes.cast(ptr, ctypes.POINTER(ctypes.c_double))
    x = n[0]
    y = n[1]
    area = dblquad(lambda x, y: x*y, 0, 0.5, lambda x: 0, lambda x: 1-2*x)
    return_val = area[0], area[1]
    r_val = (ctypes.c_double * len(return_val))(*return_val)
    rv = ctypes.cast(r_val, ctypes.POINTER(ctypes.c_double))
    # All three of these return the same data:
    return (r_val)
    #return (rv)
    #return (return_val)
lib_call = LibraryCB(LibraryCall)
lib_call = ctypes.cast(lib_call,ctypes.POINTER(ctypes.c_longlong))
I have also tried using these declarations, but there is no difference:
LibraryCB = ctypes.WINFUNCTYPE(None, ctypes.POINTER(ctypes.c_double))
LibraryCB = ctypes.PYFUNCTYPE(None,ctypes.POINTER(ctypes.c_double))
Here is the part of the NASM code that calls the callback and receives the pointer back from the callback in the variable [dblquad_Pointer]:
pop rbp
pop rdi
sub rsp,32
call [scipy.integrate_dblquad_Pointer]
add rsp,32
push rdi
push rbp
mov [dblquad_Pointer],rax
; check the values returned
lea rdi,[rel dblquad_Pointer]
mov rbp,qword [rdi] ; Return Pointer
movsd xmm0,qword[rbp]
ret
I have tried three separate calls to return the pointer to the dll:
return (r_val)
return (rv)
return (return_val)
All three of them return the same incorrect result from the dll back to ctypes.
The proposed solution when I posted this last was to change the DLL code to use an input/output parameter, with an example in C, but I know of no equivalent in NASM.
So to sum it up, my question is, if I have a callback from a NASM dll sending and receiving pointers, how do I handle the pointer returned back to the dll?
Mark Tolonen's answer works if I do it like I show here.
In ctypes:
def LibraryCall(ptr):
    x = ptr[0]
    y = ptr[1]
    area = dblquad(lambda x, y: x*y, 0, 0.5, lambda x: 0, lambda x: 1-2*x)
    ptr[0] = area[0]  # new
    ptr[1] = area[1]  # new
    return (ptr)
The most important part is that on return to the dll I had done this to check the return values:
mov [dblquad_Pointer],rax
lea rdi,[rel dblquad_Pointer]
mov rbp,qword [rdi] ; Return Pointer
movsd xmm0,qword[rbp]
That's wrong. On entry to the callback, I had put the values into a local array callback_data_vals and passed that to the callback:
mov rdi,callback_data_vals
movsd [rdi],xmm0
movsd [rdi+8],xmm1
mov rcx,callback_data_vals
[make the call to the callback]
On return, the returned values are in the same place (callback_data_vals). Access them like this:
mov rdi,callback_data_vals
movsd xmm0,[rdi]
movsd xmm1,[rdi+8]
For a much longer array this won't be practical, because the new values would have to be re-assigned in a loop in Python, which is slow. Here, with only two values, it is trivial to assign them through the return pointer.

Increment (++) a dereferenced pointer in a macro --> result is +2 instead of +1

I have the following code:
#include <stdio.h>
#define MIN(x, y) ((x) <= (y) ? (x) : (y))
int main()
{
    int x = 5, y = 0, least;
    int *p;
    p = &y;
    least = MIN((*p)++, x);
    printf("y=%d", y);
    printf("\nleast=%d", least);
    return 0;
}
I would expect the following result:
y=1
least=1
but instead y=2.
Can somebody explain why y is now 2 and not 1? I suppose it is because of some double incrementation, but I do not understand the mechanism behind it.
Thanks.
Preprocessor macros work by text substitution. So your line:
least = MIN((*p)++, x);
gets expanded to
least = (((*p)++) <= (x) ? ((*p)++) : (x));
The double-increment is clear.
It's because you are using a macro. Since you pass (*p)++ as an argument, the macro substitutes that expression into every place its parameter appears in the macro body. The parameter appears twice in MIN, so the increment happens twice.
If you do the increment before you call the macro y should only be 1.

pass by reference in assembly

I am trying to write a program to calculate the power of a number using ARM-C interworking. I am using an LPC1769 (Cortex-M3) for debugging. The following is the code:
/* here is the main.c file */
#include <stdio.h>
#include <stdlib.h>

extern int Start(void);
extern int Exponentiatecore(int *m, int *n);
void print(int i);
int Exponentiate(int *m, int *n);

int main()
{
    Start();
    return 0;
}

int Exponentiate(int *m, int *n)
{
    if (*n == 0)
        return 1;
    else {
        int result;
        result = Exponentiatecore(m, n);
        return (result);
    }
}

void print(int i)
{
    printf("value=%d\n", i);
}
This is the assembly code that complements the above C code:
.syntax unified
.cpu cortex-m3
.thumb
.align
.global Start
.global Exponentiatecore
.thumb
.thumb_func
Start:
    mov r10,lr
    ldr r0,=label1
    ldr r1,=label2
    bl Exponentiate
    bl print
    mov lr,r10
    mov pc,lr
Exponentiatecore: // r0-&m, r1-&n
    mov r9,lr
    ldr r4,[r0]
    ldr r2,[r1]
loop:
    mul r4,r4
    sub r2,#1
    bne loop
    mov r0,r4
    mov lr,r9
    mov pc,lr
label1:
    .word 0x02
label2:
    .word 0x03
However, during the debug session I encounter a HardFault error on the execution of Exponentiatecore(m,n), as seen in the debug window.
Name : HardFault_Handler
Details:{void (void)} 0x21c <HardFault_Handler>
Default:{void (void)} 0x21c <HardFault_Handler>
Decimal:<error reading variable>
Hex:<error reading variable>
Binary:<error reading variable>
Octal:<error reading variable>
Am I causing some stack corruption through misalignment, or is there a mistake in my interpretation?
Please kindly help.
Thank you in advance.
There are several problems with your code. The first is that you have an infinite loop because your SUB instruction is not setting the flags. Change it to SUBS. The next problem is that you're manipulating the LR register unnecessarily. You don't call other functions from Exponentiatecore, so don't touch LR. The last instruction of the function should be "BX LR" to return to the caller. Problem #3 is that your multiply instruction is wrong. Besides taking 3 parameters, if you multiplied the number by itself, it would grow too quickly. For example:
ExponentiateCore(10, 4);
Values through each loop:
R4 = 10, n = 4
R4 = 100, n = 3
R4 = 10000, n = 2
R4 = 100,000,000 n = 1
Problem #4 is that you're changing a non-volatile register (R4). Unless you save/restore them, you're only allowed to trash R0-R3. Try this instead:
Start:
    stmfd sp!,{lr}
    ldr r0,=label1
    ldr r1,=label2
    bl Exponentiatecore // no need to call C again
    bl print
    ldmfd sp!,{pc}
Exponentiatecore: // r0-&m, r1-&n
    ldr r2,[r0]      // base value
    ldr r1,[r1]      // exponent
    movs r0,#1       // result starts at 1, which also covers an exponent of 0
    cmp r1,#0
    beq done
loop:
    mul r0,r0,r2     // multiply the result by the base n times
    subs r1,r1,#1
    bne loop
done:
    bx lr
I just added:
Start:
    push {r4-r11,lr}
    ...
    pop {r4-r11,pc}
Exponentiatecore: # r0-&m, r1-&n
    push {r4-r11,lr}
    ...
    pop {r4-r11,pc}
and cleaned up the bl print in Start, and everything works fine.

How to fix clSetKernelArg error CL_INVALID_MEM_OBJECT in Haskell OpenCLRaw?

I am trying to use OpenCL from Haskell. I wrote a simple program by converting a working one from C. It appears to work well, but when I assign a memory object to the kernel parameters, it fails with a CL_INVALID_MEM_OBJECT error. I don't know how to fix it, because I use the same calls as in the C program, where it works.
programSource is the OpenCL kernel code:
programSource :: String
programSource = "__kernel void duparray(__global float *in, __global float *out ){ int id = get_global_id(0); out[id] = 2*in[id]; }"
The initialization works until the call to clSetKernelArg, which fails with Just (ErrorCode (-38)):
-- test openCL
input <- newArray ([0,1,2,3,4] :: [CFloat])
Right mem_in <- clCreateBuffer myContext (memFlagsJoin [clMemReadOnly,clMemCopyHostPtr]) (4*5) (castPtr input)
Right mem_out <- clCreateBuffer myContext clMemWriteOnly (4*5) nullPtr
print (mem_in,mem_out)
Right program <- clCreateProgramWithSource myContext programSource
print program
err <- clBuildProgram program [myDeviceId] "" buildProgramCallback nullPtr
print err
Right kernel <- clCreateKernel program "duparray"
print kernel
kaErr0 <- clSetKernelArg kernel 0 (fromIntegral.sizeOf $ mem_in) (castPtr mem_in)
kaErr1 <- clSetKernelArg kernel 1 (fromIntegral.sizeOf $ mem_out) (castPtr mem_out)
print (kaErr0,kaErr1)
I'm using OpenCLRaw, with several modifications that I put at https://github.com/zhensydow/OpenCLRaw
I found that I need to pass the address of the mem buffer handle, not the handle itself. This is the right way to call clSetKernelArg:
dir_mem_in <- (malloc :: IO (Ptr Mem))
poke dir_mem_in mem_in
kaErr0 <- clSetKernelArg kernel 0 (fromIntegral.sizeOf $ mem_in) (castPtr dir_mem_in)
dir_mem_out <- (malloc :: IO (Ptr Mem))
poke dir_mem_out mem_out
kaErr1 <- clSetKernelArg kernel 1 (fromIntegral.sizeOf $ mem_out) (castPtr dir_mem_out)
print (kaErr0, kaErr1)
