I need to create a simple, multi-platform application that will send data over serial connections without much fuss for me or the end users. Its sole purpose is to read and write tables of data to an Arduino over a USB interface (which presents itself as a serial interface).
I have some experience with Python, Perl, and PHP, for what it's worth.
Thanks,
Luke
As the languages you noted are all interpreted, you need an interpreter on each target platform. If you don't mind installing a P-family (Python/Perl/PHP) interpreter on a Windows machine, which I think is the most difficult platform, then you can use one of them.
If you want real multi-platform support, you should consider using a compiled language like C, which you compile separately for each platform, or use Java and run it on top of the JRE, which is available on most platforms.
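If you do go the compiled route, a minimal sketch of the serial part might look like the following. This is POSIX only (Linux/macOS); the device path /dev/ttyUSB0 and the READ_TABLE command are placeholders for whatever your Arduino sketch expects, and on Windows you would use the Win32 COM-port API instead.

```cpp
// Hedged sketch: open the Arduino's serial device, configure 9600 8N1, exchange a line.
// /dev/ttyUSB0 and "READ_TABLE" are placeholders, not part of any real protocol.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) { std::perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                  // raw 8-bit mode, no line editing
    cfsetispeed(&tio, B9600);         // 9600 baud, matching Serial.begin(9600) on the Arduino
    cfsetospeed(&tio, B9600);
    tio.c_cflag |= CLOCAL | CREAD;    // ignore modem control lines, enable the receiver
    tcsetattr(fd, TCSANOW, &tio);

    sleep(2);                         // many Arduinos reset when the port opens; let the sketch boot

    const char *cmd = "READ_TABLE\n"; // hypothetical command understood by your sketch
    write(fd, cmd, std::strlen(cmd));

    char buf[256];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) { buf[n] = '\0'; std::printf("%s", buf); }

    close(fd);
    return 0;
}
```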
I started to learn OpenCL.
I read these links:
https://en.wikipedia.org/wiki/OpenCL
https://github.com/KhronosGroup/OpenCL-Guide/blob/main/chapters/os_tooling.md
https://www.khronos.org/opencl/
but I did not quite understand: is OpenCL a library that you use by including a header file in your source code, or is it a compiler that you use via the OpenCL C compiler?
It is both a library and a compiler.
The OpenCL C/C++ bindings that you include as header files and that you link against are a library. These provide the necessary functions and commands in C/C++ to control the device (GPU).
For the device, you write OpenCL C code. This language is not C or C++, but rather based on C99 and has some specific limitations (for example only 1D arrays) as well as extensions (math and vector functionality).
The OpenCL compiler sits in between the C/C++ bindings and the OpenCL C part. Using the C/C++ bindings, you invoke the compiler at runtime of the executable with the command clBuildProgram (C bindings) or program.build(...) (C++). It then compiles the OpenCL C code once, at runtime, to the device-specific assembly, which is different for every vendor. With an Nvidia GPU, for example, you can look at the compiled PTX assembly for the device.
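To make the "compiled at runtime" point concrete, here is a minimal sketch using the C++ bindings. It is a sketch only: it assumes the Khronos header <CL/opencl.hpp> (some SDKs ship it as <CL/cl2.hpp> or <CL/cl.hpp>), and all error handling is omitted.

```cpp
// Minimal sketch: the OpenCL C kernel string is compiled for the device by
// program.build() / clBuildProgram while the host program is already running.
#define CL_HPP_MINIMUM_OPENCL_VERSION 120
#define CL_HPP_TARGET_OPENCL_VERSION 120
#include <CL/opencl.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    // OpenCL C source: based on C99, compiled for the device at runtime.
    const std::string source = R"(
        __kernel void add_one(__global float* data) {
            data[get_global_id(0)] += 1.0f;
        }
    )";

    cl::Context context(CL_DEVICE_TYPE_DEFAULT);   // first available device
    cl::Program program(context, source);
    program.build();                               // the OpenCL compiler runs HERE, at runtime
    cl::CommandQueue queue(context);

    std::vector<float> host(16, 0.0f);
    cl::Buffer buf(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                   host.size() * sizeof(float), host.data());

    cl::Kernel kernel(program, "add_one");
    kernel.setArg(0, buf);
    queue.enqueueNDRangeKernel(kernel, cl::NullRange, cl::NDRange(host.size()));
    queue.enqueueReadBuffer(buf, CL_TRUE, 0, host.size() * sizeof(float), host.data());

    std::cout << host[0] << std::endl;             // expect 1
}
```

Everything except program.build() is ordinary host-side C++ using the bindings library; the build call is the moment the driver's OpenCL compiler translates the kernel string into device assembly.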
If you know OpenGL then OpenCL works on the same principle.
Disclaimer: Self-learned knowledge ahead. I wanted to learn OpenCL and found the OpenCL term as confusing as you do. So I did some painful research until I got my first OpenCL hello-world program working.
Overview
OpenCL is an open standard - i.e. just an API specification - which targets heterogeneous computing hardware in particular.
The standard comprises a set of documents available here. It is up to the manufacturers to implement the standard for their devices and make OpenCL available e.g. through GPU drivers to users. Perhaps in the form of a shared library.
It is also up to the manufacturers to provide tools for developers to build applications using OpenCL.
Here's where it gets complicated.
SDKs
Manufacturers provide SDKs - software packages that contain everything said developer needs (see the link above). But they are vendor-specific - e.g. the NVIDIA SDK won't work without an NVIDIA GPU.
ICD Loader
Because SDKs are tied to a single vendor, the most portable (IMHO) solution is to use what is known as Khronos' ICD loader. It is a kind of "meta-driver" that will, at run time, search for the ICDs present on the system from AMD, Intel, NVIDIA, and others, and then forward calls from our application to them. So, as developers, we can develop against this generic driver and use clGetPlatformIDs to fetch the available platforms and devices. It is available as libOpenCL.so, at least on Linux, and we should link against it.
It is the counterpart of OpenGL's libOpenGL - well, almost, because the vast majority of OpenGL (1.1+) is present in the form of extensions and must be loaded separately with e.g. GLAD. In that sense, GLAD is very similar to the ICD loader.
Again, it does not contain any actual "computing" code, only stub implementations of the API which forward everything to the chosen platform's ICD.
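To illustrate, here is a tiny sketch using the raw C API (callable from C++ as well) that asks the loader which platforms its installed ICDs provide; you link it against the loader, e.g. with -lOpenCL.

```cpp
// Sketch: enumerate the platforms the ICD loader has discovered.
// Assumes CL/cl.h is on the include path; error checking omitted.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

int main() {
    cl_uint count = 0;
    clGetPlatformIDs(0, nullptr, &count);               // ask the loader how many ICDs it found
    std::vector<cl_platform_id> platforms(count);
    clGetPlatformIDs(count, platforms.data(), nullptr);

    for (cl_platform_id p : platforms) {
        char name[256] = {0};
        clGetPlatformInfo(p, CL_PLATFORM_NAME, sizeof(name), name, nullptr);
        std::printf("Platform: %s\n", name);            // e.g. an NVIDIA, Intel, or AMD platform
    }
}
```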
Headers
We are still missing the headers. Thankfully, the Khronos organization releases C headers and also C++ bindings. But nothing is stopping you from writing them yourself based on the official API documents - it would just be really tedious and error-prone.
Here we can find yet another parallel with OpenGL, because the headers are also just a consequence of the standard, and GLAD generates them directly from its XML version! How cool is that?!
Summary
To write a simple OpenCL application we need to:
Download an ICD from the device's manufacturer - e.g. up-to-date GPU drivers are enough.
Download the headers and place them in some folder.
Download, build, and install an ICD loader. It will likely need the headers too.
Include the headers, use API in them, and link against the ICD loader.
For Debian, and maybe Ubuntu and others, there is a simpler approach:
Download the drivers... look for <vendor>-opencl-icd, the drivers on Linux are usually not as monolithic as on Windows and might span many packages.
Install ocl-icd-opencl-dev which contains C, C++ headers + the loader.
Use the headers and link the library.
With Google Native Client, can the source code be protected so that, unlike JavaScript, it is not visible in the client?
If so, how? Thanks!
As the name says, Google Native Client uses native code.
That means your code is compiled, just like your average executable binary on the desktop. It can be disassembled, but the source code can't be recovered.
Native client means that you are running native code on the client. In most cases, you'll be running i386 or amd64 machine language on your client. If you're using a compiled language, then your users cannot directly recover it. Users could disassemble your software to recover some information about your code, but they cannot recover the original source code (unless it is assembly language). Rewriting a piece of software from the disassembled binary is difficult, but given enough time, it can usually be done. It really depends on how paranoid you are about the people using your code.
Native Client's structural requirements, which enable reliable disassembly so that it can perform static analysis, can make some code-obfuscation techniques unusable. These are often the same techniques used by malware to make malware analysis difficult, i.e., having two valid interpretations of the instruction stream when decoded at different offsets. Native Client does, however, permit a form of self-modifying code, since it has JIT support. Mono uses just-in-time code generation, for example, and the same interfaces can be used to create obfuscated code, as long as the JIT'ted code continues to conform to the NaCl security requirements.
Using the JIT interface would of course make your code non-portable to other CPU architectures.
I want to compile existing software into a representation that can later be run on different architectures (and OSes).
For that I need a (byte)code that can be easily run/emulated on another arch/OS (LLVM IR? Some RISC assembly?)
Some random ideas:
Compiling into JVM bytecode and running it with Java. Too restricting? Are C compilers available?
MS CIL. Are C compilers available?
LLVM? Can the intermediate representation be run later?
Compiling into a RISC architecture such as MMIX. What about system calls?
Then there is the system-call mapping issue, but e.g. the BSDs have system-call translation layers.
Are there any already working systems that compile C/C++ into something that can later be run with an interpreter on another architecture?
Edit
Could I compile existing Unix software into a not-so-low-level binary which could be "emulated" more easily than by running a full x86 emulator? Something more like the JVM than a Xen HVM.
There are several C to JVM compilers listed on Wikipedia's JVM page. I've never tried any of them, but they sound like an interesting exercise to build.
Because of its close association with the Java language, the JVM performs the strict runtime checks mandated by the Java specification. That requires C-to-bytecode compilers to provide their own "lax machine abstraction", for instance producing compiled code that uses a Java array to represent main memory (so pointers can be compiled to integers), and linking the C library to a centralized Java class that emulates system calls. Most or all of the compilers listed there use a similar approach.
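As a purely conceptual sketch (not the output of any of those compilers), the "memory is one big array, pointers are integers" idea looks roughly like this in C++:

```cpp
// Conceptual sketch of a "lax machine abstraction": main memory modelled as a flat
// array, so that pointer values become plain integer offsets into it.
#include <cstdint>
#include <cstdio>
#include <vector>

std::vector<uint8_t> memory(1 << 20);   // the whole emulated address space

// a "pointer" is just an index into `memory`
uint32_t emulated_malloc(uint32_t size) {
    static uint32_t next = 4;           // 0 is reserved as the null "pointer"
    uint32_t addr = next;
    next += size;
    return addr;
}

void store32(uint32_t addr, uint32_t value) {
    for (int i = 0; i < 4; ++i) memory[addr + i] = (value >> (8 * i)) & 0xFF;
}

uint32_t load32(uint32_t addr) {
    uint32_t v = 0;
    for (int i = 0; i < 4; ++i) v |= uint32_t(memory[addr + i]) << (8 * i);
    return v;
}

int main() {
    uint32_t p = emulated_malloc(4);    // stands in for: int *p = malloc(sizeof(int));
    store32(p, 42);                     // stands in for: *p = 42;
    std::printf("%u\n", load32(p));     // prints 42
}
```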
C compiled to LLVM bitcode is not platform independent. Have a look at Google's Portable Native Client; they are trying to address that.
Adobe has Alchemy, which will let you compile C to Flash.
There are C to Java or even JavaScript compilers. However, due to differences in memory management, they aren't very usable.
WebAssembly is trying to address this now by creating a standard bytecode format for the web. Unlike JVM bytecode, however, WebAssembly is more low-level: it works at the abstraction level of C/C++, not Java, so it's closer to what's typically called an "assembly language", which is what C/C++ code is normally compiled to.
LLVM is not a good solution for this problem. As beautiful as LLVM IR is, it is by no means machine independent, nor was it intended to be. It is very easy, and indeed necessary in some languages, to generate target dependent LLVM IR: sizeof(void*), for example, will be 4 or 8 or whatever when compiled into IR.
LLVM also does nothing to provide OS independence.
One interesting possibility might be QEMU. You could compile a program for a particular architecture and then use QEMU user space emulation to run it on different architectures. Unfortunately, this might solve the target machine problem, but doesn't solve the OS problem: QEMU Linux user mode emulation only works on Linux systems.
JVM is probably your best bet for both target and OS independence if you want to distribute binaries.
As Ankur mentions, C++/CLI may be a solution. You can use Mono to run it on Linux, as long as it has no native bits. But unless you already have a code base you are trying to port at minimal cost, using it might be counterproductive. If it makes sense in your situation, you should go with Java or C#.
Most people who go with C++ do it for performance reasons, but unless you are playing with very low-level stuff, you'll be done coding earlier in a higher-level language. That in turn gives you the time to optimize, so that by the time you would have been done in C++, you'll have an even faster version in whatever higher-level language you chose to use.
The real problem is that C and C++ are not architecture-independent languages. You can write things that are reasonably portable in them, but the compiler also hardcodes aspects of the machine into your code - think about, for example, sizeof(long). Also, as Richard mentions, there's no OS independence. So unless the libraries you use happen to have the same conventions and exist on multiple platforms, you won't be able to run the application.
Your best bet would be to write your code in a more portable language, or provide binaries for the platforms you care about.
The article Porting Qt for Embedded Linux to Another Operating System lists five things you have to do to port Qt for Embedded Linux to another OS. From the article:
There are several issues to be aware of if you plan to do your own port to another operating system. In particular you must resolve Qt for Embedded Linux's shared memory and semaphores (used to share window regions), and you must provide something similar to Unix-domain sockets for inter-application communication. You must also provide a screen driver, and if you want to implement sound you must provide your own sound server. Finally you must modify the event dispatcher used by Qt for Embedded Linux.
Is it really this easy to port Qt to another OS, or have I missed some information?
Another important component to port would be QAtomic, to ensure that atomic operations and implicit sharing work correctly. See also:
http://labs.trolltech.com/blogs/2007/08/28/say-hello-to-qatomicint-and-qatomicpointer/
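As a rough sketch of what the port has to guarantee, here is Qt's QAtomicInt used for an imaginary reference count (this is the mechanism implicit sharing relies on, so ref()/deref() must be truly atomic on the target platform):

```cpp
// Sketch: QAtomicInt as used by implicitly shared classes for reference counting.
#include <QAtomicInt>
#include <QDebug>

int main() {
    QAtomicInt refcount(1);       // e.g. a freshly created shared data block

    refcount.ref();               // another object now shares the data (atomic increment)
    if (!refcount.deref())        // atomic decrement; returns false once the count hits zero
        qDebug() << "last reference gone, data would be deleted";

    if (!refcount.deref())
        qDebug() << "now the count really is zero";
    return 0;
}
```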
Since Qt has been ported a large number of times, it seems logical that porting it would be inherently simple. However, the difficulty really depends on the platform you are porting to and how many features it currently supports.
Assuming you find all those things easy, then the port is easy.
After investigating this in more detail I have come to the conclusion that the article "Porting Qt for Embedded Linux to Another Operating System" assumes that you are porting Qt to a very "linux-like" OS.
I have attempted this and am currently making progress.
Some difficulties:
IDE - I have to manually add all the Qt files and fight the compiler with #ifdefs until it builds with all dependencies in place.
Linux(ness) - I've had to disable all the Linux/Windows features that are not supported on my target OS: threads, sockets, processes. Even the timers are slightly different.
Tips:
Start small: I compiled QtCore as a standard lib within my IDE; next up is QtGui, which is a behemoth compared to QtCore.
I plan to run only a single QThread, so I had to artificially create a Thread object to avoid null pointers. You cannot compile out the thread information, as it is key to all QObjects.
So far I have a QEventLoop running within a QCoreApplication (see the sketch after this list).
I wrote some inline assembly but had serious difficulties with my IDE and compilation, so I left it in C++ and let the compiler handle it for me. Because I am single-threaded, I am not too concerned about the shared-data/exclusive-access guarantees required of the atomic operations.
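For reference, the milestone described above - a QCoreApplication whose event loop runs - amounts to something like this (a sketch, assuming QtCore already builds on the target):

```cpp
// Sketch: start a QCoreApplication, enter its event loop, and leave it again via a queued event.
#include <QCoreApplication>
#include <QTimer>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    // Post a single timer event; when it fires, quit() ends the event loop.
    QTimer::singleShot(100, &app, SLOT(quit()));

    return app.exec();   // the "QEventLoop running within a QCoreApplication" part
}
```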
I would like to experiment with ideas about distributed file synchronization/replication. To make it efficient while the user is working, I would like to implement some kind of daemon to monitor changes in a directory (e.g. /home/user/dirToBeMonitored or c:\docs and setts\user\dirToBeMonitored), so that I can know which filename was added/changed/deleted at any time (or within a reasonable interval).
Is this possible with any high- or medium-level language? Do you know of an API (and in which language) to do this?
Thanks.
The APIs are totally different for Windows, Linux, Mac OS X, and any other Unix you can name, it seems. I don't know of any cross-platform library that handles this in a consistent way.
A bona fide answer, albeit one that requires a largish library dependency (well worth it, IMO)!
Qt provides the QFileSystemWatcher class, which uses the native file-monitoring mechanism of the underlying platform.
Even better, you can use the Qt language bindings for Python or Ruby. Here is a simple PyQt4 application which uses QFileSystemWatcher.
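For reference, a minimal C++ sketch of the same idea (Qt 5 connect syntax; the watched path is a placeholder and error handling is omitted):

```cpp
// Sketch: watch a directory with QFileSystemWatcher and log change notifications.
#include <QCoreApplication>
#include <QFileSystemWatcher>
#include <QDebug>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QFileSystemWatcher watcher;
    watcher.addPath("/home/user/dirToBeMonitored");

    // directoryChanged fires when entries are added to, removed from, or renamed in the directory.
    QObject::connect(&watcher, &QFileSystemWatcher::directoryChanged,
                     [](const QString &path) { qDebug() << "directory changed:" << path; });

    return app.exec();   // the watcher only delivers signals while an event loop is running
}
```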
Notes
A good reference on creating deployable PyQt4 apps, especially on OS X, but it should work for Windows also.
Same solution previously posted here.
Other cross-platform toolkits may also do the trick (for example, GNOME's GIO has GFileMonitor, although it is UNIX-only and doesn't support OS X's FSEvents mechanism AFAIK).
On Linux it is called inotify.
And on OS X it's called fsevents. It's an OS-level API, so it's easiest to access from C or C++. It should be accessible from nearly any language, although bindings for your preferred language may not have been written yet.
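To make the OS-level nature of these APIs concrete, here is a minimal Linux inotify sketch (C API, compiled here as C++); the watched path is a placeholder and error handling is omitted:

```cpp
// Sketch: print a line whenever something is created, modified, or deleted in a directory.
#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = inotify_init();
    inotify_add_watch(fd, "/home/user/dirToBeMonitored",
                      IN_CREATE | IN_MODIFY | IN_DELETE);

    char buf[4096];
    for (;;) {
        ssize_t len = read(fd, buf, sizeof(buf));            // blocks until events arrive
        for (char *p = buf; p < buf + len; ) {
            const inotify_event *ev = reinterpret_cast<const inotify_event *>(p);
            if (ev->len > 0)
                std::printf("event 0x%x on %s\n", (unsigned)ev->mask, ev->name);
            p += sizeof(inotify_event) + ev->len;
        }
    }
}
```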