I'm a newbie in OpenCL.
Now I'm trying to compile one of the NVIDIA OpenCL SDK code samples, "oclBandwidthTest", linked here (https://developer.nvidia.com/opencl).
This sample includes a file named "oclBandwidthTest.cpp", which in turn includes "oclUtils.h" and "shrQATest.h", so I added the paths to these two headers in my makefile.
But when I try to compile it, I still get "undefined reference" errors for 'shrLog', 'shrLogEx', 'oclErrorString', and many more.
I have to finish this by tomorrow, but I've been stuck on it since last Friday.
I'm working on Ubuntu 12.04, and I already installed SDK 4.2 and the device driver.
Please let me know what I need to include (header files or libraries) in my makefile.
II.B. Linux Installation Instructions
The OpenCL SDK samples in the NVIDIA GPU Computing SDK require a GPU with CUDA Compute
Architecture to run properly. For a complete list of CUDA-Architecture compute-enabled GPUs,
see the list online at: http://www.nvidia.com/object/cuda_learn_products.html
The OpenCL applications in the NVIDIA GPU Computing SDK require version 258.19 of the NVIDIA
Display Driver or later to run on 32 bit or 64 bit Linux. This required driver is made available to
registered developers at: https://nvdeveloper.nvidia.com/login.asp?action=login
Please make sure to read the Driver Installation Hints Document before you
install the driver: http://www.nvidia.com/object/driver_installation_hints.html
Uninstall any previous versions of the NVIDIA GPU Computing SDK
Install the NVIDIA GPU Computing SDK by running the installer provided for your OS.
The default installation folder for the OpenCL SDK is:
Linux
$(HOME)/NVIDIA_GPU_Computing_SDK/
In the following we will refer to the path that the SDK is installed into as <SDK_INSTALL_PATH>.
Build the 32-bit or 64-bit (matching the installation OS), release and debug
configurations, of the entire set of SDK projects and utility dependencies:
a. Go to <SDK_INSTALL_PATH>/OpenCL
b. Build:
the release configuration by typing "make".
the debug configuration by typing "make dbg=1".
Running make at the top level first builds the shared and common utility libraries used by
the SDK samples (these libraries are simply for convenience and are not part of the OpenCL
distribution and are not required for your own OpenCL programs). Make then builds each
of the projects in the SDK.
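This also explains the "undefined reference" errors in the question above: the headers only declare shrLog, shrLogEx, and oclErrorString, while their definitions live in these utility libraries, so the libraries must be linked in addition to adding the include paths. A minimal sketch of a compile/link line, assuming a typical 64-bit SDK 4.2 layout (the library names, the _x86_64 suffix, and the directories are assumptions and may differ on your install):

# Sketch of a compile/link line for one sample after the top-level make
# has built the utility libraries. Names and paths are assumptions.
SDK=$HOME/NVIDIA_GPU_Computing_SDK
g++ oclBandwidthTest.cpp -o oclBandwidthTest \
    -I$SDK/OpenCL/common/inc -I$SDK/shared/inc \
    -L$SDK/OpenCL/common/lib -L$SDK/shared/lib \
    -loclUtil_x86_64 -lshrutil_x86_64 -lOpenCL
# liboclUtil provides oclErrorString; libshrutil provides shrLog/shrLogEx.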
Run the examples from the release or debug directory located in
<SDK_INSTALL_PATH>/OpenCL/bin/linux/[release|debug].
Most of the SDK applications output messages to a console window that are of interest from the
standpoint of understanding basic OpenCL program flow, and several of the applications generate
graphics output in a separate OpenGL window.
Many of the SDK applications present some timing information useful for obtaining an
overall perspective of program structure and flow and the time required for setup and execution of
significant functions. The SDK example code, however, has generally been simplified for instructional
purposes and is not optimized. Advanced optimization techniques are beyond the scope of this SDK, and
any timing information presented by the samples is not intended for such usage as benchmarking.
All of the applications additionally log all the console information to a session log file in the
same directory as the executables. Those files are named clearly after the name of the sample app,
but with a .txt extension.
For convenience, the Makefile in <SDK_INSTALL_PATH>/OpenCL can be used to execute all
SDK samples sequentially by typing "make runall" or "make dbg=1 runall".
Related
I'm working on some OpenCL code within a larger project. The code only gets compiled at run-time, but I don't want to deploy a version and start it up just for that. Is there some way for me to have the syntax of those kernels checked, or even to compile them, at least under some restrictions, to make it easier to catch errors earlier?
I will be targeting AMD and/or NVIDIA GPUs.
The type of program you are looking for is an "offline compiler" for OpenCL kernels; knowing this will hopefully help with your search. They exist for many OpenCL implementations, so you should check availability for the specific implementation you are using; otherwise, a quick web search suggests there are some generic open-source ones which may or may not fit the bill for you.
If your build machine is also your deployment machine (i.e. your target OpenCL implementation is available on your build machine), you can of course also put together a very basic offline compiler yourself by simply wrapping clBuildProgram() and friends in a basic command line utility.
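For illustration, a minimal sketch of such a wrapper in C++: it builds a .cl file for the first platform/device it finds and prints the build log. Error checking is omitted, and it assumes the OpenCL headers and a working runtime are installed on the build machine (compile with something like "g++ clc.cpp -o clc -lOpenCL").

// clc.cpp - toy offline compiler / syntax checker for OpenCL kernels.
#include <CL/cl.h>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: clc <kernel.cl> [build options]\n"; return 1; }

    // Read the kernel source into a string.
    std::ifstream file(argv[1]);
    std::stringstream buffer;
    buffer << file.rdbuf();
    std::string src = buffer.str();

    // Grab the first platform and device; a real tool would let you choose.
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);

    // Compile the source, passing through any extra build options.
    const char* text = src.c_str();
    size_t length = src.size();
    cl_program prog = clCreateProgramWithSource(ctx, 1, &text, &length, NULL);
    cl_int err = clBuildProgram(prog, 1, &device, argc > 2 ? argv[2] : "", NULL, NULL);

    // Print the compiler output so syntax errors are visible.
    size_t logSize = 0;
    clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &logSize);
    std::vector<char> log(logSize + 1, 0);
    clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, logSize, &log[0], NULL);
    std::cout << &log[0] << std::endl;

    return err == CL_SUCCESS ? 0 : 2;
}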
I am trying to build a cross-platform application (Vista, XP, Mac, Linux).
I need to put the application on a USB drive formatted as FAT-32, and it should run on computers running any of these OSes.
I'm planning to use Java/JavaFX to do it.
Any advice on how we can run on multiple platforms?
Also, can anyone advise on the use of an uber-jar for the above requirement? Would that be a good fit?
A few things to take into consideration:
The USB drive must be formatted with a filesystem compatible with all the OSes you need to work with.
A Java application would be able to run on any OS that is able to run Java, but each OS needs a different Java runtime. There's a Java runtime for Linux, one for Windows, one for OSX, etc.
My suggestion would be to define which OS you want to support and create launcher scripts for each one of them on the root of the USB. For instance you would have at least a couple like: myapp.cmd (for Windows), myapp.sh (for Linux), etc.
Additionally you may want to have different Java Runtimes in the same USB, so with the launcher scripts you execute your Java application running it with the corresponding JRE in the USB filesystem.
A twist in the launcher script would be to check whether the OS already has a JRE available (for example, by checking for a JAVA_HOME variable in the environment, or by checking the output of "java -version") and act accordingly (although running the application with your own bundled JRE would be safer).
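A minimal sketch of the Linux launcher, assuming the stick carries the application as myapp.jar next to the script and a bundled runtime under jre/linux (both paths are placeholders for illustration):

#!/bin/sh
# myapp.sh - prefer the JRE bundled on the stick, fall back to the host's java.
DIR="$(cd "$(dirname "$0")" && pwd)"
if [ -x "$DIR/jre/linux/bin/java" ]; then
    JAVA="$DIR/jre/linux/bin/java"
else
    JAVA="java"   # hope the host OS provides a suitable JRE on the PATH
fi
exec "$JAVA" -jar "$DIR/myapp.jar" "$@"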
I currently have a Java application packaged in an RPM that gets built for 32-bit RedHat platforms, and I want to create a 64-bit RPM, which is largely just the same as the 32-bit one, but with a couple different .so files included. All the Java stuff is the same on both platforms, so it's just JNI .so's.
My question is: Is it possible to have rpmbuild on a 32-bit system generate both the 32-bit and 64-bit RPMs (from different .spec files) since it's just repackaging already-built components, or do I need to build the 64-bit RPM on a 64-bit system?
N.B. I'm not actually building anything native on the system. I'm just repackaging stuff that's already built.
... or vice versa, can I build a 32-bit one on a 64-bit system? I really would prefer just to build and package this on one system than have two separate builds run for the separate RPMs.
As Aaron stated, you can build an RPM for multiple distros on the same (64-bit) machine, but you have to be very careful or you can run into issues. The biggest problem I've run into is that you build on RHEL 5 and then try to deploy to RHEL 6; since RHEL 6 has a different version of RPM installed, this can cause conflicts and the install can fail. So in this scenario you have a few options:
Build the RPM on two machines, you've stated you don't really want to do this.
If you have the disk space, configure Mock. I've used it a ton before, and it's really easy to get going as long as you have the disk space and the package spec was designed to pull in its requirements properly.
Personally, I'd give Mock a shot; it's quite simple to set up and will allow you to do what you want with minimal effort, as long as the proper repos are available. In the event the build fails, the log is pretty comprehensive regarding what the RPM build error was.
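For reference, the invocations look roughly like this; the Mock config names and the package names are placeholders, and you would pick the configs that match your actual targets:

# Rebuild a source RPM inside clean RHEL 5 and RHEL 6 chroots with Mock;
# -r names a config from /etc/mock.
mock -r epel-5-x86_64 --rebuild myapp-1.0-1.src.rpm
mock -r epel-6-x86_64 --rebuild myapp-1.0-1.src.rpm

# Or, since nothing native is being compiled, force the target arch directly:
rpmbuild --target x86_64 -bb myapp.spec
rpmbuild --target i386 -bb myapp.spec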
I am involved in the development of a large cross-platform project that builds for Windows, Linux, and Mac OS X. The build for the software is configured with CMake.
The CMake scripts have been designed to configure successfully for Visual Studio on Windows, and Makefiles are currently used for building on Linux and Mac OS X.
Pretty much all of the development for the project so far has been done by people working on Windows, with a little bit of work on Linux. I am interested in developing for the project using Xcode 4.6 on a Macintosh running Mac OS X 10.7, and I have been encountering problems, as the CMake files do not seem to configure properly for that development environment.
For non-Windows platforms, many custom commands have been written to configure things such as copying needed files or setting environment variables needed for certain operations, such as running unit tests during the build process.
It seems that because Xcode is an integrated development environment similar to Visual Studio, it has this concept of a build configuration, and when software gets built, output files end up in a directory path that includes that configuration (i.e. many build files end up in a path ending with a folder named something like Debug, Release, etc.).
CMake is supposed to support this build configuration concept, and the mechanisms used work well for Visual Studio, but they do not seem to work for Xcode. For example, our build engineers have designed the CMake scripts so that for Windows, many paths and the like are configured using the CMAKE_CFG_INTDIR value, which qualifies paths with the build configuration.
The use of CMAKE_CFG_INTDIR is not working for Xcode, as the scripts for the Macintosh were written with Makefiles in mind, which don't really have the build configuration concept. The use of CMAKE_CFG_INTDIR within custom commands fails on the Macintosh because the value resolves to $(CONFIGURATION)$(EFFECTIVE_PLATFORM_NAME). These values are not defined when the custom commands run, so paths are not set properly and build operations fail.
It is unclear what is needed so that the system can successfully configure for Xcode. Searching on the Internet so far has not yielded insight into what should be used to make sure that build configuration can be successful. What resources are available that would help in figuring out how to configure this project to build with Xcode?
If you're talking about custom commands set using add_custom_command, then you should prefer "generator expressions" to avoid issues regarding per-configuration build directories. From the docs for add_custom_command:
Arguments to COMMAND may use "generator expressions" with the syntax "$<...>". Generator expressions are evaluated during build system generation to produce information specific to each build configuration.
For example, the build directory for a target called "MyExe" could be referred to as $<TARGET_FILE_DIR:MyExe>
Generator expressions are available in a few CMake commands, not just add_custom_command.
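For instance, here is a small sketch of a post-build copy step written this way; the target name and the file being copied are made up for illustration:

# Copy a data file next to the MyExe binary after each build. The generator
# expression resolves to the per-configuration directory (Debug, Release, ...)
# under Xcode and Visual Studio, and to the plain binary dir for Makefiles.
add_custom_command(TARGET MyExe POST_BUILD
  COMMAND ${CMAKE_COMMAND} -E copy_if_different
          "${CMAKE_SOURCE_DIR}/data/settings.cfg"
          "$<TARGET_FILE_DIR:MyExe>")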
If you have more specific problems, it's maybe worth asking further question(s) with the relevant details.
On Windows, applications are typically packaged as MSI; on Red Hat Linux, as RPM. What would be the best open-source packaging method that could be used to deploy applications to all platforms, including different flavors of Unix and Windows?
Contents would include Windows executables, Unix binaries, Java JAR files, user data, and even database scripts to be run.
(I recognize contents would vary per destination OS, i.e. the binaries would differ, a Windows exe vs. a Unix binary, but, for example, config files may be the same, or in the case of Java, even the bytecode JARs.)
Key feature I'd like the packaging to support is different users and permissions for different directories, however I recognize supporting this feature multiplatform may be very difficult.
Rather than build a package that is supposed to work across all of your platforms, which is likely impossible, you should have your build system build different packages for each target platform.
With CPack (it comes with CMake) you can create packages for Windows (with NSIS), Linux (RPM and DEB), and OS X by running "make package". CMake also simplifies cross-platform building.
For a sample you can look at Avogadro's CMakeLists.txt and AvoCPack.cmake.
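A minimal sketch of what the CPack side of a CMakeLists.txt can look like; the package name, version, and generator choices are illustrative:

# Produce one native package per platform from the same project file;
# after this, "make package" (or cpack) builds the installer.
install(TARGETS myapp RUNTIME DESTINATION bin)
set(CPACK_PACKAGE_NAME "myapp")
set(CPACK_PACKAGE_VERSION "1.0.0")
if(WIN32)
  set(CPACK_GENERATOR "NSIS")
elseif(APPLE)
  set(CPACK_GENERATOR "DragNDrop")
else()
  set(CPACK_GENERATOR "RPM;DEB")
endif()
include(CPack)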
I have a client that uses IzPack to create a single installer (it's Java-based) that installs their app on Windows, OS X and Linux.
http://izpack.org/
NSIS is an open-source solution which, as far as I know, is able to build installers that run on Windows and UNIX-likes alike. However, for software deployment on Windows (especially in corporate environments) MSI is the way to go, and NSIS is more of a headache.
So I wouldn't advise that you try to build a single package/installer for different platforms. Rather, as RibaldEddie indicated, build multiple packages: one for each platform. That also allows you to restrict the contents of each package to the files relevant to that platform.
If you'd like to support packaging for multiple distributions, I'd suggest helping the packagers for those distributions out; use some sort of well-known build system for your software (GNU's autotools or something like scons or waf), and document the build, optional dependencies, and so forth pretty well.
That way, when a Debian, Ubuntu, Red Hat, SuSE, whatever, packager comes along, they'll be able to create the package for you. You can optionally include packaging templates for one or more distributions in a separate VCS tree that is available, if you'd like.
If you are looking at packaging a closed-source/proprietary application for multiple systems, you'd probably do best to package up a .tar.gz file and document the installation process for it. You'll also want to make sure that the build process used doesn't embed any path information into the application, so that it can be run in /opt, /usr, or /usr/local, which are some popular choices for third-party add-on software.
BitRock InstallBuilder allows you to create installer packages for each of the platforms you mentioned (as well as RPM and DEB packages, etc.) from a single project file.