I like Linux, but I have spare capacity on an enterprise-class SPARC Solaris platform. I'm just wondering if anyone has tried running Julia there before, as it doesn't seem to be a supported OS.
No, Julia does not run on SPARC Solaris. Supported platforms are x86 (Linux, Windows, macOS, FreeBSD), ARM, and Power8-LE. A SPARC port would not be too difficult, but it would need to be done by someone who cares about that platform and has access to relevant hardware. Unfortunately, that does not describe most of the current developers and contributors.
Not yet, but future interest might also come from wanting to use Julia on FPGAs, in combination with open softcores such as LEON, a SPARC architecture already supported by LLVM.
Related
Is it possible to run OpenCL on a system designed by a user on a SoC prototyping board? To be more specific, I have a ZedBoard (Xilinx Zynq) that has Dual ARM cores and a Programmable Logic (PL) Area. If I design a simple system of my own that has a video processing accelerator implemented in the logic area, an ARM core and an AXI interconnect, what do I have to do to provide OpenCL support for this simple system? (In this simple system, the ARM core could be the "Host" and the video processing accelerator could be the "device").
I am a student and I have only some basic knowledge of OpenCL. I have researched my question and have only ended up confusing myself. What are the things that have to be done to provide OpenCL support for a SoC? I understand that this may be a big project, but I need a guideline on where to start and how to proceed.
what do I have to do to provide OpenCL support for this simple system?
Implement an OpenCL platform that makes use of either your ARM CPU or the FPGA (or both). I'd say that is pretty much impossible for you; ARM would surely offer one for the CPU if it were easy (and they definitely have the financial means to employ capable engineers/computer scientists), and implementing accelerators on an FPGA requires in-depth knowledge of FPGA development, as well as compiler theory and experience in systems design. I don't want to sound mean, but you seem to have none of these three.
You asked where to get started; I recommend just writing a first accelerator that e.g. adds up a vector of numbers; as soon as you have that, you will have a clearer idea of your task.
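For reference, the vector addition itself is tiny; whether you later implement it in HDL on the PL or as an OpenCL kernel, the functionality is the same. As a minimal sketch, the classic first kernel in OpenCL C looks like this:

    /* vadd.cl - the classic first OpenCL kernel: c[i] = a[i] + b[i].
       Each work-item computes one element of the result. */
    __kernel void vadd(__global const float *a,
                       __global const float *b,
                       __global float *c,
                       const unsigned int n)
    {
        size_t i = get_global_id(0);   /* index of this work-item */
        if (i < n)                     /* guard: global size may be padded */
            c[i] = a[i] + b[i];
    }

The host side (creating the context, compiling the kernel, moving the buffers) is where most of the work, and most of the learning, is.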
If you want to have a look at a reference: the Ettus USRP E310 is a Zynq-based SDR device. Ettus has a technology called RFNoC, which allows users to write their own blocks to push data through. Notice that this took quite a few engineers quite some time to get started. Notice further that it's still much easier than implementing something that converts OpenCL to FPGA implementations.
If you have access to the Xilinx tools: Vivado HLS 2015.1 System Edition should compile OpenCL kernels. This will also be included in the SDAccel tool suite.
Source: UG973: Vivado Design Suite User Guide: Release Notes, Installation, and Licensing
An alternative might be switching to Altera. They provide some good examples for the Altera Cyclone V SoC, which is comparable to the Xilinx Zynq devices (it also includes an ARM Cortex-A9):
Altera SDK for OpenCL
I am also a student, and my current project is going in a similar direction. I have successfully installed an OpenCL implementation called POCL on the ZedBoard, and it successfully detects the board's ARM CPU. To install POCL you need LLVM and a horde of other things as well, but the basic steps to get POCL up on the ZedBoard are given below:
Installing POCL:
http://www.hosseinabady.com/install-pocl-opencl
Running an example:
http://www.hosseinabady.com/embedded-system-by-examples/opencl_embedded_system/opencl-vector-addition
There are lots of dependencies, but they can be resolved easily. For LLVM, make sure you install version 3.4 for POCL 0.9.
Steps to install LLVM:
https://github.com/pacs-course/pacs/wiki/Instructions-to-install-clang-3.1-on-ubuntu-12.04.1-and-12.10
POCL 0.9 is working successfully for me. As you do the installation you will face many other missing dependencies, such as hwloc, the Mesa libraries, OpenGL/OpenCL headers, and ICD loaders; I hope you can resolve them, as the full list is too long to put up on Stack Overflow.
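Once POCL is installed, a quick way to check that it really detects the ARM CPU is a small C program that enumerates the OpenCL platforms and devices (a minimal sketch using only the standard OpenCL host API; build with something like gcc list_devices.c -o list_devices -lOpenCL):

    /* list_devices.c - print every OpenCL platform and device found. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;

        if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
            fprintf(stderr, "no OpenCL platforms found\n");
            return 1;
        }
        for (cl_uint p = 0; p < nplat; ++p) {
            char name[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof name, name, NULL);
            printf("platform %u: %s\n", p, name);

            cl_device_id devices[8];
            cl_uint ndev = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &ndev);
            for (cl_uint d = 0; d < ndev; ++d) {
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof name, name, NULL);
                printf("  device %u: %s\n", d, name);
            }
        }
        return 0;
    }

On the ZedBoard you should see the POCL platform with the ARM CPU listed as a CL_DEVICE_TYPE_CPU device.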
Getting your FPGA detected as an OpenCL device is not going to be trivial; you can refer to this question I posted on GitHub:
https://github.com/pocl/pocl/issues/285
and also to a research paper by Hosseinabady, found on the publications page of the POCL website:
http://pocl.sourceforge.net/publications.html
Hope this helps.
Try the ARM OpenCL SDK. The ZedBoard has an ARM Cortex-A9 CPU, which has a NEON SIMD vector unit (http://www.arm.com/products/processors/technologies/neon.php) that can run OpenCL; see http://www.arm.com/products/multimedia/mali-technologies/opencl-for-neon.php.
The ZedBoard isn't listed as an OpenCL-conformant platform (https://www.khronos.org/conformance/adopters/conformant-products#opencl), so there is a chance the ARM driver will not work.
Good luck!
If this is still relevant, try this paper: OpenCL on ZYNQ [PDF]
Also note that Zynq-7000 is listed on https://www.khronos.org/conformance/adopters/conformant-products#opencl (OpenCL_1_0), hence the compatibility.
I need to know: can ROS work with all kits, or does it have specific requirements?
I mean, can I buy any kit and control it with ROS?
If yes, are there any required chips, ports, or connectors?
Thanks in advance.
You should use http://answers.ros.org/questions/ for questions regarding ROS, but yes, it is in general robot-agnostic.
No, not all robots: only robots with x86 or ARM hardware that can run Ubuntu Linux.
There are also experimental versions of ROS for OS X, Gentoo Linux, Arch Linux, and Android (NDK).
When choosing your hardware platform, consider ROS support for various sensors and actuators, as well as the library of packages that add other capabilities.
Here's a very long list of robots that use ROS.
Given the availability of a new workstation (Intel Xeon X5690, Windows 7 Professional, 64-bit) for numerical analysis of fluid dynamics models, I find it a shame not to engage in parallel computing. So far, I have had little or no experience in this field.
What's the difference between MS-MPI and the latest release of MPICH suitable for Windows? I installed MPICH 1.4.1, but I cannot get a test program to work with Ifort. How am I supposed to compile the program? Do I have to change the Ifort configuration somehow to add the MPICH libraries? Isn't there any good manual available online that could meet my needs?
There's lots of questions in this one question, but it all boils down to one basic question: How do I install MPI on Windows?
MPICH did work on Windows at one time. The last version that supported it was 1.4.1p1, as you've found, but it doesn't have any support from the MPICH developers anymore, so if you have trouble, you probably won't find much help. I haven't seen anyone on here step up to help with those questions so far.
MS-MPI is a good option if you want to use Windows. It's free to use and still has support directly from Microsoft. You'll have to read their documentation about how to set everything up correctly, but it's probably the right place to start if you want to use MPI on Windows.
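To verify the installation, the usual smoke test is a minimal MPI hello-world; here is a sketch in C, assuming the MS-MPI SDK is installed (the installer sets the MSMPI_INC and MSMPI_LIB64 environment variables used below):

    /* hello_mpi.c - minimal MPI smoke test.
       Build from a Visual Studio command prompt:
         cl hello_mpi.c /I"%MSMPI_INC%" /link /LIBPATH:"%MSMPI_LIB64%" msmpi.lib
       Run:
         mpiexec -n 4 hello_mpi.exe */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

If four ranks print their greetings, the runtime and the compiler setup are both working; the same source compiles unchanged against MPICH or Intel MPI.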
Intel MPI also works on Windows, but it isn't free so you might not want to look at that right now.
I am trying to install MPI on Windows 8. When I searched the net I found steps for installing it on XP/7, but not for Windows 8. The link is: http://swash.sourceforge.net/online_doc/swashimp/node9.html
But firstly, when I have to allow mpi.exe and smpd.exe to communicate through the firewall, these exe files are not listed.
Secondly, when I run cmd (as administrator) and type "smpd -install", it says: "Unknown option: -install". I guess the command for Windows 8 is something else.
So I would be really grateful if anyone could help with this, because I'm not able to proceed further.
Side note before I start: MPI is a standard, not a library that you install. MPICH, Open MPI, Intel MPI, MS-MPI, etc. are all implementations of that standard. When you say you're trying to do X with MPI and you're asking for help, mention which implementation (and version) you're using.
Based on your question, I'm assuming that you're trying to install MPICH, though which version is unclear. MPICH hasn't supported Windows since version 1.4.1p1, and even that version doesn't have any support from the MPICH team anymore, as all of the Windows experts are now gone. I'd suggest that you take a look at one of the implementations that do currently support Windows. The only two I know of are MS-MPI (free) and Intel MPI (paid; update: now free for most users), though there are probably others out there that I don't know about. If you still have trouble after trying one of those implementations, they have their own support teams that can help you with your problem.
I am not sure which version of MS-MPI you were talking about, but you should download the latest MS-MPI from Microsoft's download page; it also supports Windows 8.1.
You just need to double-click the installer and follow its instructions.
I want to study and compare the executable file structures of ELF, SPARC, and PA-RISC.
To perform the studies I want to install OpenSolaris on an Intel machine (Core2Duo).
But I have a basic doubt: will it work at all?
I know SPARC has its own assembly language, which made me suspect that this might not work, or might not be a valid idea at all.
I was aiming to write some programs, disassemble them, and study the file structures with the help of tools.
I don't have any clue how to do all this for HP-UX (PA-RISC); I don't know of any free OS for PA-RISC.
You won't be able to run SPARC or PA-RISC executables on an Intel processor. However, if all you want to do is analyse the structure of these executables, all you need is suitable development tools.
I haven't checked, but I suspect OpenSolaris comes with development tools capable of analysing Solaris/sparc executables out of the box. But even other toolchains can do that. For example, GNU binutils (specifically the BFD library they use) support many architectures, including Sparc and PA-RISC. (If you use GNU binutils, make sure you get a full version, perhaps labeled as “for cross-compilation”, e.g. binutils-multiarch on Debian or Ubuntu)
SPARC:
I have never installed OpenSolaris on anything. You might consider trying NetBSD: it runs SPARC machines at least as well as Solaris did, and it uses ELF format executables. The source code is freely available for study, too.
You will need to understand the ELF file format. I don't recall any particular document standing out back in the days when I wanted to understand ELF, and it looks like Google can offer a large number of web sites that will explain ELF. My advice on ELF is to write a program to read the ELF headers, and then dump them out in a readable text format, even though many such programs already exist.
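A minimal sketch of such a dumper in C, assuming a host with <elf.h> (e.g. Linux/glibc); note that SPARC executables are big-endian, so on a little-endian x86 host the multi-byte fields would additionally need byte-swapping, which this sketch omits for brevity:

    /* elfdump.c - print the main fields of a 32-bit ELF header.
       Build: gcc elfdump.c -o elfdump
       Usage: ./elfdump a.out */
    #include <stdio.h>
    #include <string.h>
    #include <elf.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <elf-file>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }

        Elf32_Ehdr eh;  /* 32-bit header: SPARC V8 binaries are ELF32 */
        if (fread(&eh, sizeof eh, 1, f) != 1 ||
            memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0) {
            fprintf(stderr, "not an ELF file\n");
            fclose(f);
            return 1;
        }
        printf("class:   %u (1=ELF32, 2=ELF64)\n", (unsigned)eh.e_ident[EI_CLASS]);
        printf("data:    %u (1=little-endian, 2=big-endian)\n", (unsigned)eh.e_ident[EI_DATA]);
        printf("type:    %u (2=executable, 3=shared object)\n", (unsigned)eh.e_type);
        printf("machine: %u (2=SPARC, 3=x86, 15=PA-RISC)\n", (unsigned)eh.e_machine);
        printf("entry:   0x%x\n", (unsigned)eh.e_entry);
        printf("shnum:   %u section headers\n", (unsigned)eh.e_shnum);
        fclose(f);
        return 0;
    }

readelf -h prints the same information, but writing the dumper yourself forces you to learn where every field lives.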
You will also need a SPARC disassembler that understands ELF. I wrote one a long time ago; it will probably still work reasonably well today: http://www.stratigery.com/elf_dis.tar.Z
You can download PDFs about SPARC here: http://www.sparc.com/specificationsDocuments.html. I recommend the SPARC V8 and V9 architecture manuals.
PA-RISC:
This is a very odd architecture, with very little in the way of documentation. PA-RISC was HP's own RISC architecture, introduced in the mid-1980s; Apollo Computer (R.I.P.) had a separate RISC design of its own, and HP bought Apollo in 1989. The stack grows up, where just about everything else has it growing down. It also has a segment register, but one that works differently than x86 segmentation.
HP is really the only place to find anything about PA-RISC.
There are ports for PA-RISC architectures of Linux, NetBSD and OpenBSD.
You cannot run code compiled for SPARC or PA-RISC on an x86 system unless you use a full-fledged emulator. Qemu can emulate a SPARC-based machine with enough accuracy to run a Linux operating system on it (but it will not be fast: Qemu must interpret all the SPARC opcodes one by one, and this has a heavy overhead, so a fast PC from 2011 may perhaps yield the performance of a SPARC workstation from 1996). There is an ongoing project to add PA-RISC support to Qemu, but it does not seem to have reached any non-trivial level of usability yet.