Are the compiler and shell an internal part of Unix? - unix

I had this question on my exam. In the diagrams I have seen, we have: hardware, kernel, the system call interface to the kernel, then (compilers, shells, system libraries), and on top some applications. Does the scope of the OS include only the kernel, with everything else being just additional software we choose to install, or does a Unix OS include everything from the list I gave above?

An OS has more or less two definitions:
Academic: an OS is software that provides an abstraction layer between hardware and software.
Pragmatic: an OS is the software that comes with the hardware when you buy it.
A compiler and a shell don't fall under definition 1; they can fall under definition 2.
And usually, users who are interested in a compiler or a shell prefer to think of the OS as the abstraction layer (the academic definition).

Simple answer: no. They are not an internal part of Unix, but additional functionality that helps make the operating system more usable.
The scope of the OS applies primarily to the kernel only.
Whilst you need a compiler to build the kernel, you don't necessarily require one for general day-to-day use of the system. Most operating systems don't ship a compiler by default; instead, the kernel and applications are built on one machine, and the resulting binaries are packaged and distributed either with the computer directly (Windows/Unix) or via the internet for others to download and install (Linux/BSD).
Likewise with the shell. Although all operating systems ship with a default one (sh/bash/dash on Linux/Unix systems, Command Prompt/PowerShell on Windows), most general users can go their entire lives without using it.
Having said that, if you were to delete the shell, you'd almost certainly find that your system won't boot. This is because many core start-up scripts rely on the shell to stop and start the services that present interfaces between the user and the kernel.
In summary:
You need a compiler to build the kernel and applications, but not to run the OS.
You need a shell to execute applications (which includes the compiler).

Related

Can I check OpenCL kernel syntax at compilation time?

I'm working on some OpenCL code within a larger project. The code only gets compiled at run-time, but I don't want to deploy a version and start it up just for that. Is there some way for me to have the syntax of those kernels checked (even without compiling them), or even to compile them, at least under some restrictions, to make it easier to catch errors earlier?
I will be targeting AMD and/or NVIDIA GPUs.
The type of program you are looking for is an "offline compiler" for OpenCL kernels; knowing this will hopefully help with your search. They exist for many OpenCL implementations, so you should check availability for the specific implementation you are using; otherwise, a quick web search suggests there are some generic open-source ones which may or may not fit the bill for you.
If your build machine is also your deployment machine (i.e. your target OpenCL implementation is available on your build machine), you can of course also put together a very basic offline compiler yourself by simply wrapping clBuildProgram() and friends in a basic command line utility.
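For illustration, here is a minimal sketch of such a wrapper in C, assuming an OpenCL SDK (headers plus an ICD loader) is installed on the build machine; the tool name clcheck, the choice of the first platform/device, and the thin error handling are all just placeholders, not a production offline compiler:

    /* clcheck.c - hypothetical offline syntax checker: builds a .cl file with
     * whatever OpenCL driver is installed and prints the driver's build log. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <CL/cl.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s kernel.cl\n", argv[0]); return 1; }

        /* Slurp the kernel source into memory. */
        FILE *f = fopen(argv[1], "rb");
        if (!f) { perror("fopen"); return 1; }
        fseek(f, 0, SEEK_END);
        long len = ftell(f);
        rewind(f);
        char *src = malloc(len + 1);
        fread(src, 1, len, f);
        src[len] = '\0';
        fclose(f);

        /* Take the first platform/device the ICD loader reports. */
        cl_platform_id platform;
        cl_device_id device;
        clGetPlatformIDs(1, &platform, NULL);
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL);

        cl_int err;
        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        cl_program prog = clCreateProgramWithSource(ctx, 1, (const char **)&src, NULL, &err);

        /* The actual check: ask the driver to build the source and show its log. */
        err = clBuildProgram(prog, 1, &device, "", NULL, NULL);
        size_t log_size = 0;
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
        char *log = malloc(log_size + 1);
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
        log[log_size] = '\0';
        fprintf(stderr, "%s\n", log);

        return err == CL_SUCCESS ? 0 : 1;
    }

The errors it reports are, of course, those of whatever driver happens to be installed on the build machine, so a kernel that passes here may still need testing on the actual AMD/NVIDIA target.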

Running java application from usb

I am trying to build a cross-platform application (Vista, XP, Mac, Linux).
I need to put the application on a USB drive formatted as FAT-32, and it should run on computers with any of those OSes.
I am planning to use Java/JavaFX to do it.
Any advice on how we can run on multiple platforms?
Hi, can anyone advise on the use of an uber-jar for the above requirement? Would that be a good fit?
A few things to take into consideration:
The USB drive must be formatted with a filesystem compatible with all the OSes you need to work with.
A Java application will be able to run on any OS that can run Java, but each OS needs a different Java runtime: there's a Java runtime for Linux, one for Windows, one for OS X, etc.
My suggestion would be to define which OSes you want to support and create launcher scripts for each of them in the root of the USB drive. For instance, you would have at least a couple, like myapp.cmd (for Windows) and myapp.sh (for Linux).
Additionally, you may want to put the different Java runtimes on the same USB drive, so that the launcher scripts run your Java application with the corresponding JRE from the USB filesystem.
A twist on the launcher script would be to somehow check whether the OS already has a JRE available (for example, by checking for a JAVA_HOME variable in the environment, or by checking the output of "java -version") and act accordingly (although running the Java application with your own JRE would be safer).

How to profile an openmp code natively on Intel MIC?

I have an OpenMP code written in C. I executed the code on an Intel MIC on Stampede. I want to profile the code to find its hotspots so that it will be easier for me to optimize the code further. I tried to use the profiler gprof, but I read somewhere that gprof cannot be used on the MIC directly. I then tried to use perf by going through a tutorial. I could get to a certain step, but when I reach the perf annotate step and execute the code, it gives me the error ")" unexpected. So I don't know how to proceed with profiling my code. Can anybody please help?
This is the site where I found the perf tutorial: sandsoftwaresound.net/perf/perf-tutorial-hot-spots/.
80% of optimization for the Xeon Phi is the same as for the host (Xeon). Use gprof, printf, compiler options, and the rest of your toolkit, and carry your optimization as far as you can while executing your code on the host only. Once you can do no more, then focus on Xeon Phi-specific optimizations.
As you are on Stampede, I assume you are using the Intel compiler. The compiler has a lot of diagnostic capabilities to profile your code and even provide suggestions. I'd provide you with more specific URLs but am on vacation with limited bandwidth.
Though this isn't specific to your question, here are some other suggestions. If you aren't already using the Intel compiler, you'll most likely get a substantial boost by switching to it; Intel compilers are danged good at optimization, especially on Intel architectures. Also, you should use Intel MKL where possible. All of MKL's routines are optimized for the different IA architectures, and the ones most relevant to HPC are optimized specifically for the MIC.
You have a few options.
The heavyweight approach is to use Intel Vtune. First, add -g to your compiler flags.
I use Vtune from the host command line quite a bit; here is the command I use to profile an application on the MIC. (This is executed on the host machine; Vtune on the host uses ssh to launch the application on the MIC.)
amplxe-cl -collect knc-hotspots -source-search-dir=/mysrc/dir -search-dir=/mybin/dir -- ssh mic0 /home/me/myapp
Assume the app on the MIC is at /home/me/myapp, and that the source dir and source search dir are on the host. (With Vtune update 15 at least, I need to specify both of these separately in order to get the Vtune GUI to show me symbol info.)
Once your app has finished, run the Vtune GUI on the host with amplxe-gui and open your result set.
There are also some simplified open-source profiling tools developed by Intel that support the MIC, Speedometer and Overhead; you can find information about them here.
Hopefully this is enough info to get you started.

retrieving IPV6 route flags programmatically?

Is there a way to obtain flags of the IPV6 routing table through any API in Linux?
The netlink socket interface doesn't show any place for the flags.
After checking the route command's source code in net-tools, it seems that it reads routes from the proc filesystem. I am wary of doing this, as it seems to be OS-flavor dependent.
/proc is an implementation of procfs for Linux, originally introduced in 1992. Every effort is made to maintain compatibility with applications using the filesystem, including for alternative operating systems such as FreeBSD, which provides a linprocfs filesystem to maintain binary compatibility with Linux applications.
The same net-tools software is used on all major Linux systems, so there should be no issues with using those /proc files for system information.
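If you do end up reading procfs directly, a minimal sketch in C could look like the following; it assumes the ten-column /proc/net/ipv6_route layout that net-tools parses (flags as a hex value in column 9, device name in column 10) and the RTF_* bit definitions from <net/route.h>:

    /* ipv6_route_flags.c - hedged sketch: print IPv6 routes and a few of their
     * flags by parsing /proc/net/ipv6_route the same way net-tools does. */
    #include <stdio.h>
    #include <net/route.h>   /* RTF_UP, RTF_GATEWAY, RTF_HOST */

    int main(void)
    {
        FILE *f = fopen("/proc/net/ipv6_route", "r");
        if (!f) { perror("fopen"); return 1; }

        char dst[33], src[33], nexthop[33], dev[17];
        unsigned int dst_len, src_len, metric, refcnt, use, flags;

        /* Columns: dst dst_len src src_len nexthop metric refcnt use flags dev */
        while (fscanf(f, "%32s %x %32s %x %32s %x %x %x %x %16s",
                      dst, &dst_len, src, &src_len, nexthop,
                      &metric, &refcnt, &use, &flags, dev) == 10) {
            printf("%s/%u dev %s flags:%s%s%s\n",
                   dst, dst_len, dev,
                   (flags & RTF_UP)      ? " U" : "",
                   (flags & RTF_GATEWAY) ? " G" : "",
                   (flags & RTF_HOST)    ? " H" : "");
        }
        fclose(f);
        return 0;
    }

As noted above, this is Linux-specific, but it reads the same source of information that route -A inet6 itself uses.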

Wrong Architecture when running executable from Xcode4 on UNIX

First of all, I'm very new to programming.
I have built a program using Xcode 4 on Snow Leopard.
The architecture of the project is set to "Standard (32/64-bit Intel)".
Afterwards I exported the executable file to a UNIX computer for running:
ssh to that computer
Typing ./programname in the terminal (of the UNIX computer) gives the following response:
Exec format error. Wrong Architecture.
The program runs just fine on my Mac laptop.
When you compile a program, it will (*) be compiled for a specific platform and a specific operating system. It will also most likely be compiled against a specific set of libraries. Usually those parameters are exactly those of the computer doing the compilation (the other cases are called cross-compilation).
In other words: compiling a program on a Mac will produce a binary that runs only on a Mac (unless, again, you're doing cross-compilation). Your UNIX system (which UNIX, by the way?) has a different operating system, different libraries, and probably even a different CPU architecture.
Somewhat related: Apple advertised (or used to advertise) Mac OS X as a UNIX. While Mac OS X is certainly a UNIX-class operating system, that doesn't mean that it's binary compatible with every other UNIX-class OS out there.
* almost always, with the exception of systems designed to avoid this (e.g. Java)
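As a small illustration (not the original poster's code) of how much gets fixed at compile time, here is a C sketch that only reports what its own build baked in; the macros checked are common but compiler-dependent:

    /* platform_report.c - prints the OS and CPU architecture this binary was
     * compiled for; the same source built on a Mac and on Linux produces two
     * different, mutually incompatible executables. */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__APPLE__)
        puts("built for Mac OS X (Mach-O executable)");
    #elif defined(__linux__)
        puts("built for Linux (ELF executable)");
    #else
        puts("built for some other platform");
    #endif

    #if defined(__x86_64__)
        puts("built for the x86-64 CPU architecture");
    #elif defined(__i386__)
        puts("built for the 32-bit x86 CPU architecture");
    #else
        puts("built for another CPU architecture");
    #endif
        return 0;
    }

Running the Snow Leopard build on a different UNIX fails before main() ever executes, because that kernel's loader doesn't recognize the Mach-O executable format, which is exactly the "Exec format error" above.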
Programs compiled by Xcode will only run under Mac OS X. Unless the "UNIX computer" in step 2 is running Mac OS, the program will not be able to run.
