How to interface my own LLVM backend with LLVM IR - llvm-ir

I have my own linker and machine code converter. I am using my own assembly instructions for my machine. This machine is a software processor which executes machine code generated by an asm-to-hex converter. Instead of assembly, I want to use the C language now. My question is: how do I use LLVM for this purpose?
One approach could be that:
Create a parser which reads the .s file (a sort of asm file) generated from LLVM IR and maps those instructions to my processor-specific asm instructions (a rough sketch of this mapping idea is shown below).
I do not want to create the linker and asm-to-machine-code converter again.
Is my approach OK, or is there a better way to do this?
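For illustration, here is a rough sketch of the kind of mnemonic-mapping pass described above. All target mnemonics and the translation table are hypothetical, and a real pass would also have to translate operands, directives, and addressing modes:

```cpp
// Hypothetical sketch: map mnemonics in an LLVM-generated .s file to a
// custom ISA, one line at a time. Real .s files also contain directives,
// labels, and operands that need proper handling; this only shows the idea.
#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main(int argc, char **argv) {
    if (argc < 2) { std::cerr << "usage: mapasm file.s\n"; return 1; }

    // Hypothetical mnemonic translation table (target names are made up).
    const std::map<std::string, std::string> mnemonicMap = {
        {"movl", "MYMOV"}, {"addl", "MYADD"}, {"ret", "MYRET"},
    };

    std::ifstream in(argv[1]);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream iss(line);
        std::string mnemonic;
        iss >> mnemonic;
        auto it = mnemonicMap.find(mnemonic);
        if (it != mnemonicMap.end())
            // Emit the mapped mnemonic followed by the original operands.
            std::cout << it->second
                      << line.substr(line.find(mnemonic) + mnemonic.size())
                      << "\n";
        else
            std::cout << line << "\n";  // pass through labels/directives unchanged
    }
    return 0;
}
```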

The *.s file you read is not just "sort of asm"; it is actual assembly that has already passed through an LLVM backend, probably some X86 variant if you have not chosen a different target.
What you really want to do is to make LLVM emit assembly instructions for your own machine instead. This is what Writing an LLVM Backend and similar guides are about.
This is not exactly simple, but I expect that trying to translate some other machine's instruction set (let alone X86) to your own is probably even more difficult, as you would have to emulate each and every detail of a very complex machine.

Related

How to exclude some functions with GPRbuild?

I have an executable in Ada compiled with gprbuild. The executable uses some simple functions (like sin and cos). This executable operates in an app bound to a POS (Partition Operating System) designed with VxWorks. Once the whole process is recompiled, a bunch of multiple-definition errors appear between POS_API.o and the Ada executable (hello.o). These functions (sin, cos, ...) are all in the same library. Unfortunately, the easiest solution, removing all references to these functions in the POS, is not permitted (a design constraint). Any suggestions on how to compile or proceed?
Is there any possibility of compiling without a specific library, or without some functions, in order to avoid the multiple-definition error?
I'm afraid this isn't really an answer: largely because it's more than ten years since I worked with VxWorks and Ada, and things have got a bit hazy. Also, it's a bit long for a comment on your question.
As I used it, VxWorks comes with a whole suite of software that you configure to hold just the components you need into your kernel: in this case, that would presumably include the maths package, functions like sin(), as well as the OS functions like taskSpawn().
The Ada/VxWorks build process that we used generates a partially-linked object file, with references to sin(), taskSpawn() unresolved (I can't remember how this is achieved; if using GNU ld, maybe the -r or --relocatable switch?). When VxWorks loads this object file over the configured kernel, the unresolved references get resolved, and away we go.
Now, I don't know what sort of thing your POS_API does. Is it a skin over a configured VxWorks kernel? Does it load your Ada program itself? If it is itself a VxWorks program, how come it's exporting sin()?
I suspect that the problem is to do with the way you've linked your executable. Maybe you could show us your GPR file? Otherwise, I'm just whistling in the dark.

Disassemble an OpenCL kernel?

I'm not sure if it's possible. I want to study OpenCL in depth, so I was wondering if there is a tool to disassemble a compiled OpenCL kernel.
For a normal x86 executable, I can use objdump to get a disassembly view. Is there a similar tool for OpenCL kernels yet?
If you're using NVIDIA's OpenCL implementation for their GPUs, you can do the following to disassemble an OpenCL kernel:
Use clGetProgramInfo() with CL_PROGRAM_BINARIES to dump the ptx code to a file, say ptxfile.ptx (see the sketch after these steps). Please refer to the OpenCL specification for more details on this function.
Use nvcc to compile ptx to cubin file, for example: nvcc -cubin -arch=sm_20 ptxfile.ptx will compile ptxfile.ptx onto a compute capability 2.0 device.
Use cuobjdump to disassemble the cubin file into GPU instructions. For example: cuobjdump -sass ptxfile.cubin
Hope this helps.
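For step 1, a minimal sketch of retrieving the program "binary" (which is PTX text on NVIDIA's implementation) might look like this. It assumes a single device and a cl_program that has already been built successfully; error checking is omitted:

```cpp
// Hedged sketch: dump the program binary (PTX on NVIDIA) to a file
// using clGetProgramInfo after clBuildProgram has succeeded.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

void dumpProgramBinary(cl_program program, const char *path) {
    // Query the size of the binary for the (single) device.
    size_t binarySize = 0;
    clGetProgramInfo(program, CL_PROGRAM_BINARY_SIZES,
                     sizeof(binarySize), &binarySize, nullptr);

    // Fetch the binary itself; CL_PROGRAM_BINARIES expects an array of
    // pointers, one per device.
    std::vector<unsigned char> binary(binarySize);
    unsigned char *binaries[] = { binary.data() };
    clGetProgramInfo(program, CL_PROGRAM_BINARIES,
                     sizeof(binaries), binaries, nullptr);

    FILE *f = std::fopen(path, "wb");   // e.g. "ptxfile.ptx"
    std::fwrite(binary.data(), 1, binarySize, f);
    std::fclose(f);
}
```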
I know that this is an old question, but in case someone comes looking here for disassembling an AMD GPU kernel, you can do the following on Linux:
export GPU_DUMP_DEVICE_KERNEL=3
This makes any kernel compiled on your machine dump its assembled code to a file in the same directory.
Source:
http://dis.unal.edu.co/~gjhernandezp/TOS/GPU/ATI_Stream_SDK_OpenCL_Programming_Guide.pdf
Sections 4.2.1 and 4.2.2
The simplest solution, in my experience, is to use clang's OpenCL C compiler and emit SPIR.
It even works on Godbolt's compiler explorer:
https://godbolt.org/z/_JbXPb
Clang can also emit ptx (https://godbolt.org/z/4ARMqM) and amdhsa (https://godbolt.org/z/TduTZQ), but it may not correspond to the ptx and amdhsa assembly generated by the respective driver at runtime.
If you work with an AMD GPU, you can use the Analyzer tool. It is free, cross-platform, and comes in two forms:
Command line tool (ships as part of the CodeXL package, search for the CodeXLAnalyzer executable after installing).
CodeXL GUI application (just switch to the Analyzer mode in CodeXL).
Here is a short summary of what you can do with the Analyzer:
Compile OpenCL kernels, OpenGL shaders and D3D shaders for any GPU that is supported by the installed driver (even without having the GPU physically installed on your system), and get the ISA. Using CodeXL Analyzer (option #2 above), you can get additional information such as an estimation for the number of clock cycles that are required to execute the instruction.
View the compiler-generated statistics (SGPR usage, VGPR usage, etc.).
Generate the AMD IL code for the OpenCL kernel.
Export the compiled binaries (ELF, in binary format).
You can download the CodeXL tool suite from here: https://gpuopen.com/compute-product/codexl/
As AMD CodeXLAnalyzer is not supported anymore, use the Radeon GPU Analyzer instead.

Using dlopen() on an executable

I need to call a function from another program. If the other program were a library, I could simply use dlopen and dlsym to get a handle to the function. Unfortunately, the other program is a Unix Executable, and building it as a library is not an option. Trying dlopen() on the executable gives this error message:
dlopen([...]/testprogram, 1): no suitable image found. Did find:
[...]/testprogram: can't map
This isn't surprising, as dlopen is meant for use with libraries, not executables. Is there any way to get dlopen and dlsym to work with executables? If not, is there an alternative way of achieving the same thing?
You can't open executables as libraries. The entry point of an executable will attempt to re-initialize the C library, and take over the brk pointer. This will corrupt your malloc heap. Additionally, the executable is likely to be mapped at a fixed address with no relocations, and if this address overlaps with anything already loaded, it's not possible to map it for that reason as well.
You need to refactor the other program into a library, or add a RPC interface to the other program.
Note that this does not necessarily apply for PIE executables. However, unless the executable is specifically designed for being dlopen()ed, this is unsafe, as main() will not be run, and any initialization done in main() therefore will not occur.
On some ELF systems (notably Linux), you can dlopen() PIE executables. When using GCC, just compile the executable with -fpie or -fPIE, link it with -pie, and export the appropriate symbols using --dynamic-list or -rdynamic (explained in more detail in this other SO answer).
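As an illustration of the calling side, a hedged sketch of what the dlopen()/dlsym() usage might look like, assuming ./testprogram was built as a PIE with its symbols exported as described above (the symbol name compute is made up):

```cpp
// Hedged sketch: load a PIE executable with dlopen() and look up an
// exported symbol. Requires the executable to be built with -fPIE/-pie
// and to export its symbols (e.g. via -rdynamic or --dynamic-list).
#include <dlfcn.h>
#include <iostream>

int main() {
    void *handle = dlopen("./testprogram", RTLD_LAZY);
    if (!handle) {
        std::cerr << "dlopen failed: " << dlerror() << "\n";
        return 1;
    }

    // Look up a symbol the executable exports; cast to its real signature.
    auto compute = reinterpret_cast<int (*)(int)>(dlsym(handle, "compute"));
    if (!compute) {
        std::cerr << "dlsym failed: " << dlerror() << "\n";
    } else {
        std::cout << "compute(21) = " << compute(21) << "\n";
    }

    dlclose(handle);
    return 0;
}
```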
There's a tool here that does precisely that; it handles ASLR/PIE and non-ASLR/PIE binaries. It compiles on x86, ARM and MIPS (32-bit only). Edit the Makefile to set the ARCH param.
http://rtfc.org.uk/cliapi.html
It's my tool but it seems to do what you want. Let me know if it doesn't work for you.
I appreciate how late I am to this party, but hey.
Adding the ability to load executables via dlopen was requested as a glibc RFE (Request For Enhancement) and refused. A detailed look at the RFE and a possible approach for some special cases can be found at
http://sourceware.org/bugzilla/show_bug.cgi?id=11754
Excluding PIEs, there would be too many problems behind the scenes to implement such functionality.

Platform specific code

Hope to get some pointers here. I am trying to get Qt to compile with slightly different code for each platform. For example,
If platform is Windows then include windows.h
If platform is OSX then include time.h
AND
If platform is Windows use QueryPerformanceCounter function from windows.h
If platform is Linux use the gettimeofday function from time.h
The objective here is to write a wrapper function returning elapsed microseconds that works with Windows (QueryPerformanceCounter) and Linux/Mac (gettimeofday) without having two sets of code. QTimer resolution is inadequate on Windows XP (about 10-15 ms increments).
Can anyone point me to a tutorial on how to do this? Thank you in advance, and Happy New Year to everyone here.
Gary Cho
If this were Python, I'd say just create a module that conditionally imports one of the correct modules.
This being C++, I'm fairly certain this isn't possible (I'm not a C++ expert), even if the compiled binaries were able to run on both Windows and Linux machines. I don't see any way to compile both the Windows and Linux headers into one executable and then choose between them at runtime.
You're going to need to compile two binaries that each include the correct header.
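For what it's worth, the usual C/C++ way to keep a single set of source code is a compile-time preprocessor conditional rather than a runtime choice: each platform still gets its own binary, but from one source file. A minimal sketch, assuming the standard QueryPerformanceCounter and gettimeofday APIs (the function name elapsedMicroseconds is just illustrative):

```cpp
// Hedged sketch: one source file, selected at compile time with the
// preprocessor. Each platform still produces its own binary, but there
// is only one set of code to maintain.
#include <cstdint>

#ifdef _WIN32
  #include <windows.h>
#else
  #include <sys/time.h>
#endif

// Returns a timestamp in microseconds; subtract two calls to get the
// elapsed time (QueryPerformanceCounter on Windows, gettimeofday elsewhere).
std::int64_t elapsedMicroseconds() {
#ifdef _WIN32
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);
    return now.QuadPart * 1000000LL / freq.QuadPart;
#else
    timeval tv;
    gettimeofday(&tv, nullptr);
    return static_cast<std::int64_t>(tv.tv_sec) * 1000000LL + tv.tv_usec;
#endif
}
```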

How to get FreeRTOS on MSP430 using CCE?

I'd like to get FreeRTOS running on an MSP430 processor using Code Composer Essentials v3.1. I found an example of just this at http://www.westmorelandengineering.com/toc.htm. Specifically I’m working with FreeRTOS_Demo.zip, the top one. When I try to open it with CCE I get an error that the workspace "was not created by this version of Code Composer". So I tried to import the project and I get an error "The Managed Make project could not be read because of the following error: Project type com.ti.ccstudio.managedbuild.ui.programTargetID not found. Managed Make functionality will not be available for this project."
I’m wondering what my problem is and how I can get the project to build, or should I go about this a different way?
FreeRTOS supports many, many, many chips and many, many, many compilers. Anything that is not standard C code is kept in a port layer.
The next FreeRTOS release (V7, out in the next couple of weeks and already available in the SVN repository) includes a CCS4 port and demo for the MSP430F5438 (MSP430X core).
Regards.
I was told that TI's CCS compiler suite (used in CCE/CCS) will not build the FreeRTOS sources because the FreeRTOS sources include stuff written in gnu assembler syntax (file extension .s is common between CCS asm and Gnu asm, but syntax is not the same). Until FreeRTOS is "ported" to the CCS compiler suite, your best bet is to use the full CCS with the GCC compiler instead of the CCS compiler.
Reviving a zombie thread... not sure if CCE is even relevant now... you can get CCS 5.3 with code-size-limited free MSP430 support.
I recently ported FreeRTOS to the CC430 using the new MSP430Ware driver library from TI and Code Composer Studio 5.3; get it here:
http://www.freertos.org/Interactive_Frames/Open_Frames.html?http://interactive.freertos.org/entries/22894958-cc430f5137-ccs-5-3
