Network/Host Byte Order Functions for multiple platforms - networking

I am working on a project where ntohl, ntohs, htonl and htons are currently defined as macros in a standard header file that most files include. This causes problems on platforms where these symbols are already defined. For example, winsock2.h declares functions with the same names, and those declarations get expanded by my macro definitions, causing compile errors. On Mac OS I get thousands of compiler warnings, because Mac OS already defines these macros for you.
I want to support Windows, Mac OS and Linux, using the standard OS functions or macros wherever possible and falling back to my own definitions only where they are not declared. What is the best way to do this?
I have tried:
#if defined WIN32
#include <winsock2.h>
#endif
but this causes compile errors as there are lots of function name clashes with my current, large codebase.

This is hardly a mystery:
#ifndef ntohl
#define ntohl(x) ...
#endif
Then you just have to make sure that all language and system #includes occur before your own, which is standard practice anyway.
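For instance, a minimal sketch of the guarded fallbacks. It assumes the relevant system headers (winsock2.h, arpa/inet.h, ...) have already been included where they exist, and that the fallback bodies only ever run on a little-endian host; a big-endian host would want identity macros instead:

/* Used only when the platform did not already define the macros.
   These bodies assume a little-endian host. */
#ifndef ntohl
#define ntohl(x) ((((x) & 0xff000000u) >> 24) | (((x) & 0x00ff0000u) >> 8) | \
                  (((x) & 0x0000ff00u) <<  8) | (((x) & 0x000000ffu) << 24))
#endif

#ifndef htonl
#define htonl(x) ntohl(x)  /* the byte swap is its own inverse */
#endif

#ifndef ntohs
#define ntohs(x) ((unsigned short)((((x) & 0xff00u) >> 8) | (((x) & 0x00ffu) << 8)))
#endif

#ifndef htons
#define htons(x) ntohs(x)
#endif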

Related

Reason to use Qt standard library function wrappers

Is there any reason to use Qt standard function wrappers like qstrncpy instead of strncpy?
I could not find any hint in the documentation, and I'm curious whether there is any functional difference. It looks like it makes code dependent on Qt even in places where that isn't necessary.
I found this: Qt wrapper for C libraries
But it doesn't answer my question.
These methods are part of Qt's efforts for platform-independence. Qt tries to hide platform differences and use the best each platform has to offer, replicating that functionality on platforms where it is not available. Here is what the documentation of qstrncpy has to say:
A safe strncpy() function.
Copies at most len bytes from src (stopping at len or the terminating '\0' whichever comes first) into dst and returns a pointer to dst. Guarantees that dst is '\0'-terminated. If src or dst is nullptr, returns nullptr immediately.
[…]
Note: When compiling with Visual C++ compiler version 14.00 (Visual C++ 2005) or later, internally the function strncpy_s will be used.
So qstrncpy is safer than strncpy.
The Qt wrappers for these functions are safer than the standard ones because they guarantee the destination string will always be null-terminated. strncpy() does not guarantee this.
In C11, strncpy_s() and other _s()-suffixed functions were added as safer string functions. However, they are not part of any C++ standard; they are C-only. The Qt wrappers fix this.
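As a small, hedged illustration of the difference (assuming qstrncpy is available via <QByteArray>, as in Qt 5 and Qt 6):

#include <QByteArray>  // declares qstrncpy
#include <cstring>

int main() {
    char dst[4];

    // strncpy: src is longer than the buffer, so no '\0' is written.
    std::strncpy(dst, "abcdef", sizeof(dst));  // dst = {'a','b','c','d'}

    // qstrncpy: copies at most len - 1 characters and always terminates.
    qstrncpy(dst, "abcdef", sizeof(dst));      // dst = "abc", '\0' at dst[3]
    return 0;
}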

What does Qt Quick Compiler do exactly?

What does Qt Quick Compiler do exactly? My understanding was that it "compiles" QML/JS into C++ and integrates this into the final binary/executable. So, there is no JIT compilation or any other JS-related things during runtime.
However, I saw an article somewhere that claimed it's not like this: it actually only "bundles" QML/JS into the final binary/executable, and there is still some QML/JS-related overhead at runtime.
At the documentation page there is this explanation:
.qml files as well as accompanying .js files can be translated into intermediate C++ source code. After compilation with a traditional compiler, the code is linked into the application binary.
What is this "intermediate C++ source code"? Why not just "C++ source code"? That confuses me, but the last statement kinda promises that yes, it is a C++ code, and after compiling it with C++ compiler you will have a binary/executable without any additional compiling/interpretation during runtime.
Is it how it actually is?
The code is of an intermediate nature because it doesn't map Javascript directly to C++. E.g. var i = 1, j = 2, k = i+j is not translated to the C++ equivalent double i = 1., j = 2., k = i+j. Instead, the code is translated to a series of operations that directly manipulate the state of the JS virtual machine. JS semantics are not something you can get for free from C++: there will be runtime costs no matter how you implement it. There is no additional compiling nor interpretation, but the virtual machine that implements the JS state still has to exist.
That's not an overhead that is easy to get rid of without emitting a lot of mostly dead code to cover all contexts in which a given piece of code might run, or doing the just-in-time compilation that you wanted to avoid. That's the primary problem with JavaScript: its semantics are such that it's generally not possible to translate it to typical imperative, statically typed code that gives rise to "standard" machine code.
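To make the idea concrete, here is a purely hypothetical sketch. None of these types or functions are Qt API; they stand in for the QV4 engine state that real generated code manipulates:

#include <map>
#include <string>
#include <variant>

// Hypothetical stand-ins for the JS virtual machine state -- NOT Qt API.
using VmValue = std::variant<double, std::string>;
using VmScope = std::map<std::string, VmValue>;

// JS '+' has to pick numeric addition or string concatenation at runtime,
// based on the dynamic types of its operands.
static VmValue vm_add(const VmValue &a, const VmValue &b) {
    if (std::holds_alternative<double>(a) && std::holds_alternative<double>(b))
        return std::get<double>(a) + std::get<double>(b);
    auto toStr = [](const VmValue &v) {
        return std::holds_alternative<std::string>(v)
                   ? std::get<std::string>(v)
                   : std::to_string(std::get<double>(v));
    };
    return toStr(a) + toStr(b);
}

// Conceptually, "var i = 1, j = 2, k = i + j" becomes VM state manipulation
// rather than native double arithmetic:
void generated_block(VmScope &scope) {
    scope["i"] = 1.0;
    scope["j"] = 2.0;
    scope["k"] = vm_add(scope["i"], scope["j"]);
}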
Your question already contains the answer.
It compiles the code into C++. That code is of an intermediate nature because having C++ code is not enough: you need binaries. So after the translation to C++, the files are compiled and then linked.
The statement only says: we do not compile to binary, but to C++ instead. You then need to compile it into a binary with a C++ compiler of your choice.
The bundling happens if you only put the files into the resources (qrc file). Putting them into the resources does not imply that you use the compiler.
Then there is the JIT compiler, which might (on supported platforms) do just-in-time compilation. More on this here

How do I make an extension of qobject cross platform

I'm making a way of getting truly global hotkeys (i.e. emitting a signal on certain inputs even when the app is out of focus).
This will require different code for Windows vs. OS X vs. X11. In Qt Creator, how should I go about making this suitable for cross-platform development?
Edit: I don't want to know how to do the actual code with X11, Windows, etc. I just want to know how I would do separate definitions for each platform.
I don't want to know how to do the actual code with X11, Windows, etc. I just want to know how I would do separate definitions for each platform.
It is convenient to do this with ifdefs based on pre-defined compiler symbols, e.g.:
http://sourceforge.net/p/predef/wiki/OperatingSystems/
#ifdef __linux__ // Linux-related; GCC defines this macro
// your code and/or definitions
#endif
#if defined(__linux__) || defined(macintosh)
// your code and/or definitions
#endif
You may use OR logic there as well, since many things on Mac resemble Linux and vice versa. Mind the compiler, though: check that it actually defines the symbol. I would OR together the platform symbols of all applicable compilers.
If you're willing to use Qt, it has OS and compiler defines easily available for you:
#include <QtGlobal>
#if defined(Q_OS_LINUX)
// linux-specific
#elif defined(Q_OS_WIN32) && defined(Q_CC_MSVC)
// win32 and msvc
#endif
Documentation.
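If the platform-specific parts are large, a common alternative to inline ifdefs is one implementation file per platform behind a shared interface. A hedged sketch follows; the class and file names are hypothetical, and the native calls named in the comments are the usual per-platform hotkey APIs:

// globalhotkey.h -- shared interface, hypothetical names
#include <QObject>

class GlobalHotkey : public QObject {
    Q_OBJECT
public:
    explicit GlobalHotkey(QObject *parent = nullptr);
    bool registerHotkey(int key, int modifiers);  // body differs per platform

signals:
    void activated();  // emitted even when the app is out of focus
};

// Then implement registerHotkey() once per platform, selected either with
// the Q_OS_* ifdefs above or by listing a different .cpp per platform in
// the build system (qmake: win32 { SOURCES += globalhotkey_win.cpp } etc.):
//   globalhotkey_win.cpp -> RegisterHotKey() + a native event filter
//   globalhotkey_mac.cpp -> Carbon RegisterEventHotKey()
//   globalhotkey_x11.cpp -> XGrabKey()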

Handling Multiple OpenCL Versions and Platforms

Intel recently updated its OpenCL SDK to the 2.0 specification. AMD is still on 1.2, and Nvidia on 1.1. Essentially, this means each GPU platform is now on its own version.
OpenCL does not appear to be designed in the same way OpenGL is in terms of how deprecation works. As far as I know there's no way to request a compatibility version, and Intel even incorporates build errors in its SDK preventing you from calling deprecated functions.
If I want to support every platform using a minimum version (1.1, most likely), what is required of me?
ifdef statements alone unfortunately don't work if you have more than one platform and they support different OpenCL versions. For instance, POCL, which runs on the CPU, supports 2.0, so you need the 2.0 OpenCL headers, but most GPUs and open source drivers only support OpenCL 1.1 or 1.2.
The best option seems to be to query the OpenCL platform version and decide which functions to call based on that. Unfortunately it is returned as a char[], so you may have to parse it.
Here is an example of how to get the platform info string.
clGetPlatformInfo(platforms[platform_index], CL_PLATFORM_VERSION, INFO_LENGTH, &platformInfo, &realSize);
Typically the version info is of the form: "OpenCL 1.2 implementation name"
Here is a little function I made to extract the OpenCL version number from that string:
float diagnoseOpenCLnumber(cl_platform_id platform) {
#define VERSION_LENGTH 64
  char complete_version[VERSION_LENGTH];
  size_t realSize = 0;
  clGetPlatformInfo(platform, CL_PLATFORM_VERSION, VERSION_LENGTH,
                    complete_version, &realSize);

  /* The version string has the form "OpenCL X.Y <vendor info>",
     so the "X.Y" part starts at offset 7. */
  char version[4];
  version[3] = '\0';
  memcpy(version, &complete_version[7], 3);

  return (float)atof(version);
}
You can then use it like so, for example with the command queue functions, which changed in 2.0:
float version_float = diagnoseOpenCLnumber(platform_id);
if (version_float >= 2.0f) {
  command_waiting_line =
      clCreateCommandQueueWithProperties(context, device_id, 0, &return_number);
} else {
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wdeprecated-declarations"
  command_waiting_line =
      clCreateCommandQueue(context, device_id, 0, &return_number);
#pragma GCC diagnostic pop
}
AFAIK deprecated functions do not have to be implemented, hence code should check the OpenCL platform version number and avoid calling deprecated functions on that platform. See this earlier discussion: http://www.khronos.org/message_boards/showthread.php/8514-clCreateImage-2D-3D-vs-the-ICD-loader. At present, calling deprecated OpenCL 1.1 functions on AMD or Intel platforms (OpenCL 1.2) still works, but there are no guarantees that this will remain true in the future or on other platforms. I guess that as soon as supporting those deprecated functions becomes too much hassle for the maintainers of an implementation, they'll be removed.
Admittedly, I'm naughty as I have just ignored the problem and continued to use OpenCL 1.1 functions. However, if you are starting a new project (and have the time), rather wrap the deprecated functions in some sort of generic function that has paths for each version of OpenCL - faster to do it now than later in my opinion. There is a list of frameworks and libraries at http://www.khronos.org/opencl/resources. Perhaps you will find that one of them solves this problem well enough. If not, and if you have enough time, then you could build a framework that hides most of the OpenCL functions from your program. Then, as more functions get deprecated, you will hopefully only need to change your framework, but not the programs that use it. At the moment, I don't know of any framework that does this in C++.
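A hedged sketch of that wrapper idea, reusing the diagnoseOpenCLnumber() helper from the earlier answer (names are illustrative):

cl_command_queue create_queue_any_version(cl_context ctx, cl_device_id dev,
                                          cl_platform_id plat, cl_int *err) {
#ifdef CL_VERSION_2_0
    /* Built against 2.0 headers: use the new entry point when the
       platform actually reports 2.0 or later. */
    if (diagnoseOpenCLnumber(plat) >= 2.0f)
        return clCreateCommandQueueWithProperties(ctx, dev, NULL, err);
#endif
    /* 1.x path (deprecated in the 2.0 headers, but required here). */
    return clCreateCommandQueue(ctx, dev, 0, err);
}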
In the header cl.h you'll find a list of definitions like the following:
...
#define CL_VERSION_1_0 1
#define CL_VERSION_1_1 1
#define CL_VERSION_1_2 1
#define CL_VERSION_2_0 1
...
In my case I had annoying warnings about a deprecated function when building against OpenCL 2.0, so my quick/dirty solution was to do
#ifdef CL_VERSION_2_0
//call 2.0 Function
#else
//call deprecated Function
#endif
Although this might require several fixes in your code, it's the way to go in my opinion if you want to compile based on the OpenCL library available.
Note that if you are using the OpenCL 1.2 headers you'll also get the definitions for all the previous versions (so, as in the example above, CL_VERSION_1_1 and CL_VERSION_1_0 will be defined as well).
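For example, a concrete version of that sketch (variable names are illustrative):

cl_int err;
cl_command_queue queue;
#ifdef CL_VERSION_2_0
/* 2.0 headers: the old clCreateCommandQueue is marked deprecated */
queue = clCreateCommandQueueWithProperties(context, device, NULL, &err);
#else
queue = clCreateCommandQueue(context, device, 0, &err);
#endif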
Hope this helps

How to have more than one source file with C18 in MPLAB?

In many languages, such as C++, having lots of different source files is normal, but it doesn't seem like this is the case very often with PIC microcontroller programs -- at least not with any of the tutorials or books I've read.
I'm wondering how I can have a source (.c) file with a bunch of routines, global variables and defines in it that can be used by my main.c file. Is this even possible?
Thanks for your advice!
This is absolutely possible with PIC development. Size is certainly a concern, both from a code and a data perspective, but it's still just C code, meaning most of the rules of C apply (see the compiler documentation for exceptions), including having multiple source files that get compiled and linked into a single output (usually a .hex file). For example, in a separate C file from your main.c, say test.c:
int AddNumbers(int a, int b)
{
return a + b;
}
You could then declare that in a header file test.h:
int AddNumbers(int a, int b);
Include test.h at the top of your main.c file:
#include "test.h"
You should then be able to call AddNumbers(4,5) from main.c. I have not tested this code but provide it simply as an example of the process.
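For completeness, a minimal sketch of the calling side (untested, like the snippets above; the void main signature follows the usual MPLAB C18 convention):

/* main.c -- assumes test.h/test.c from above */
#include "test.h"

void main(void)
{
    int sum = AddNumbers(4, 5);  /* sum == 9 */

    while (1) {
        /* typical embedded idle loop */
    }
}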
Typically, most code for PIC18 is included from other files, so rather than the high-level technique of compile-then-link, it is more common to #include (and include from includes) all of the code, so that a single stream goes to the compiler. I think you can do separate compilation under PIC18, but I never spent enough time to get it to work. Most of the libraries and such are designed as include files rather than as separately translated units.
It's a different mindset, but there is a reason: I think it's due to the historic need to keep things as small as possible. Therefore, things are done with MUCH more chip-specific macros and much less (linkable) library development.
The PIC32 compiler is much better in its library support.
