How to tell whether the OS is 32-bit or 64-bit with a UNIX command? - unix

How can I find out whether the operating system is 32-bit or 64-bit? Thanks in advance.

In Linux, the answer to such a generic question is simply to use:
uname -m
or even:
getconf LONG_BIT
In C you can use the uname(2) system call.
In Windows you can use:
systeminfo | find /I "System type"
or even examine the environment:
set | find "ProgramFiles(x86)"
(or with getenv() in C)
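For what it's worth, the same environment check can be run from R on a Windows machine; a minimal sketch (Sys.getenv() returns an empty string when the variable is absent):
Sys.getenv("ProgramFiles(x86)") != ""  # TRUE on 64-bit Windows, FALSE on 32-bit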

Original question:
How can I determine whether the operating system is 32-bit or 64-bit, in C or by some other means?
The correct way is to use a system API or command that reports the architecture.
Comparing sizeof in C won't give you the OS pointer size, only the pointer size of the architecture the program was compiled for. That's because most architectures/OSes are backward compatible and can run older 16- or 32-bit programs without problems: a 32-bit program's pointers are still 32 bits wide even on a 64-bit OS. And even on 64-bit architectures, some OSes may still use 32-bit pointers, as with the Linux x32 ABI.
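Incidentally, the same caveat is visible from R: a minimal sketch showing that the reported pointer size belongs to the running build, not to the OS (a 32-bit R on 64-bit Windows reports 4 here):
.Machine$sizeof.pointer  # 8 for a 64-bit R build, 4 for a 32-bit build
.Machine$sizeof.long     # 4 even on 64-bit Windows (LLP64), 8 on 64-bit Linux (LP64)
Sys.info()[["machine"]]  # on Unix this comes from uname, like `uname -m`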

If you use C, you can check sizeof(void*) or sizeof(long): if it equals 8, it's 64 bits, otherwise 32. It's the same on every architecture.

I'm sorry for my carelessness and mistake: this only holds for Linux. Linux Device Drivers, 3rd edition, section 11.1 ("Use of Standard C Types"), says:
The program can be used to show that long integers and pointers
feature a different size on 64-bit platforms, as demonstrated by
running the program on different Linux computers:
arch     Size:  char  short  int  long  ptr  long-long  u8  u16  u32  u64
i386            1     2      4    4     4    8          1   2    4    8
alpha           1     2      4    8     8    8          1   2    4    8
armv4l          1     2      4    4     4    8          1   2    4    8
ia64            1     2      4    8     8    8          1   2    4    8
m68k            1     2      4    4     4    8          1   2    4    8
mips            1     2      4    4     4    8          1   2    4    8
ppc             1     2      4    4     4    8          1   2    4    8
sparc           1     2      4    4     4    8          1   2    4    8
sparc64         1     2      4    4     4    8          1   2    4    8
x86_64          1     2      4    8     8    8          1   2    4    8
And there are some exceptions. For example:
It's interesting to note that the SPARC 64 architecture runs with a
32-bit user space, so pointers are 32 bits wide there, even though
they are 64 bits wide in kernel space. This can be verified by loading
the kdatasize module (available in the directory misc-modules within
the sample files). The module reports size information at load time
using printk and returns an error (so there's no need to unload it):
@user1437033 I guess Windows isn't compatible with the gcc standard, so you may need an answer from Windows programmers.
@Paul R We should consider it regular code, right? If you use cross-compile tools, such as for ARM (which only has 32 bits here), then you also can't get the answer.
PS: I don't recommend the Dev-C++ compiler; it behaves oddly in many situations and isn't standard. Code::Blocks or VS 2010 may be a better choice.
I hope this can help you.

Related

How does is.null work on list elements in R? [duplicate]

I found a very surprising and unpleasant feature of R - it completes list item names!!! See the following code:
a <- list(cov_spring = "spring")
a$cov <- c()
a$cov
# spring ## I expect it to be empty!!! I've set it empty!
a$co
# spring
a$c
I don't know what to do with that.... I need to be able to set $cov to NULL and have $cov_spring there at the same time!!! And use $cov separately!! This is annoying!
My question:
What is going on here? How is this possible, what is the logic behind?
Is there some easy fix, how to turn this completion off? I need to use list items cov_spring and cov independently as if they are normal variables. No damn completion please!!!
From help("$"):
'x$name' is equivalent to 'x[["name", exact = FALSE]]'
When you scroll back and read up on exact=:
exact: Controls possible partial matching of '[[' when extracting by
a character vector (for most objects, but see under
'Environments'). The default is no partial matching. Value
'NA' allows partial matching but issues a warning when it
occurs. Value 'FALSE' allows partial matching without any
warning.
So this provides partial-matching capability in both $ and [[ indexing:
mtcars$cy
# [1] 6 6 4 6 8 6 8 4 4 6 6 8 8 8 8 8 8 4 4 4 4 8 8 8 8 4 4 4 8 6 8 4
mtcars[["cy"]]
# NULL
mtcars[["cy", exact=FALSE]]
# [1] 6 6 4 6 8 6 8 4 4 6 6 8 8 8 8 8 8 4 4 4 4 8 8 8 8 4 4 4 8 6 8 4
There is no way I can see to disable the exact=FALSE default for $ (unless you want to mess with formals, which I do not recommend, for the sake of reproducibility and consistent behavior).
Programmatic use of frames and lists (for defensive purposes) should prefer [[ over $ for precisely this reason. (It's rare, but I have been bitten by this permissive behavior.)
Edit:
For clarity on that last point:
mtcars$cyl becomes mtcars[["cyl"]]
mtcars$cyl[1:3] becomes mtcars[["cyl"]][1:3]
mtcars[,"cy"] is not a problem, nor is mtcars[1:3,"cy"]
You can use [ or [[ instead.
a["cov"] will return a list with a NULL element.
a[["cov"]] will return the NULL element directly.

arulesSequences cspade function: "Error in file(con, "r") : cannot open the connection"

One day I tried to run my routine cspade sequence mining in R, and it suddenly failed with an error and some very strange output printed to the console. Here is the example code:
library(arulesSequences)
data(zaki)
cspade(zaki, parameter=list(support=0.5))
It produces very long output (even with the option control=list(verbose=F)), followed by an error:
CONF 4 9 2.7 2.5
MINSUPPORT 2 4
MINMAX 1 4
1 SUPP 4
2 SUPP 4
4 SUPP 2
6 SUPP 4
numfreq 4 : 0 SUMSUP SUMDIFF = 0 0
EXTRARYSZ 2465792
OPENED C:\Users\Dawid\AppData\Local\Temp\Rtmp279Wy5\cspade2cd4751e5905.idx
OFF 9 38
Wrote Offt 0.00099802
BOUNDS 1 5
WROTE INVERT 0.000998974
Total elapsed time 0.00299406
MINSUPPORT 2 out of 4 sequences
1 -- 4 4
2 -- 4 4
4 -- 2 2
6 -- 4 4
1 6 -- 3 3
2 6 -- 4 4
4 -> 6 -- 2 2
4 -> 2 6 -- 2 2
1 2 6 -- 3 3
1 2 -- 3 3
4 -> 2 -- 2 2
2 -> 1 -- 2 2
4 -> 1 -- 2 2
6 -> 1 -- 2 2
4 -> 6 -> 1 -- 2 2
2 6 -> 1 -- 2 2
4 -> 2 6 -> 1 -- 2 2
4 -> 2 -> 1 -- 2 2
Error in file(con, "r") : cannot open the connection
In addition: Warning message:
In file(con, "r") :
cannot open file
'C:\Users\Dawid\AppData\Local\Temp\Rtmp279Wy5\cspade2cd4751e5905.out': No
such file or directory
It looks like it is printing the mined rules to the console (which never happened before). And it ends with an error, so I can't store the rules in a variable. It looks like some problem with writing temporary files, maybe?
My configuration:
R version 3.5.1 (2018-07-02)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Packages:
arulesSequences_0.2-19
arules_1.6-1
(arulesSequences has a newer version, but the latest version, arulesSequences_0.2-20, fails in the same way)
Thank you!
One workaround is to use the R console, not RStudio.
It should work fine then. I see that more people have the same problem. I have tried reinstalling RStudio, reinstalling the packages, and using an older RStudio version, but it didn't help.
Hope this helps, but I would be grateful for a full answer. Thanks!
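For reference, once cspade runs cleanly (e.g. from a plain R console), the result can be stored in a variable and inspected; a minimal sketch using the zaki example from the question:
library(arulesSequences)
data(zaki)
s <- cspade(zaki, parameter = list(support = 0.5), control = list(verbose = FALSE))
summary(s)                 # overview of the mined sequences
head(as(s, "data.frame"))  # the sequences and their supports as a data frame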

What is the difference between a bank conflict and channel conflict on AMD hardware?

I am learning OpenCL programming and running some programs on an AMD GPU. I referred to the AMD OpenCL Programming Guide to read about global memory optimization for the GCN architecture, but I am not able to understand the difference between a bank conflict and a channel conflict.
Can someone explain the difference between them?
Thanks in advance.
If two memory access requests are directed to the same memory controller, the hardware serializes the accesses. This is called a channel conflict. Each integrated memory controller circuit can serve only a single request at a time, so if two work-items' addresses map to the same channel, they are served serially.
Similarly, if two memory access requests go to the same memory bank, the hardware serializes the accesses. This is called a bank conflict. When there are multiple memory chips, you should avoid using a stride that matches the hardware's interleave width.
Example with 4 channels and 2 banks (not a real-world example, since there are normally at least as many banks as channels):
address  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17
channel  1  2  3  4  1  2  3  4  1  2  3  4  1  2  3  4  1
bank     1  2  1  2  1  2  1  2  1  2  1  2  1  2  1  2  1
so you should not read like this:
address  1  3  5  7  9
channel  1  3  1  3  1   // 50% channel conflict
bank     1  1  1  1  1   // 100% bank conflict, serialized at the bank level
nor this:
address  1  5  9 13
channel  1  1  1  1   // 100% channel conflict, serialized
bank     1  1  1  1   // 100% bank conflict, serialized
but this could be ok:
address  1  6 11 16
channel  1  2  3  4   // no conflict, 100% channel usage
bank     1  2  1  2   // no conflict, 100% bank usage
because the stride is a multiple of neither the number of channels nor the number of banks.
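The mapping above is simple modulo arithmetic, so it is easy to play with; a toy sketch in R, assuming plain modulo interleaving over 4 channels and 2 banks as in the tables:
channel_of <- function(addr, n_channels = 4) (addr - 1) %% n_channels + 1
bank_of    <- function(addr, n_banks = 2)    (addr - 1) %% n_banks + 1
channel_of(c(1, 6, 11, 16))  # 1 2 3 4 -- stride 5 touches all four channels
bank_of(c(1, 6, 11, 16))     # 1 2 1 2 -- and both banks
bank_of(c(1, 5, 9, 13))      # 1 1 1 1 -- stride 4 serializes on one bank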
Edit: If your algorithms are more local-storage optimized, then you should pay attention to local data share channel conflicts too. On top of this, some cards can use constant memory as an independent channel to speed up read rates.
Edit: You can use multiple wavefronts to hide conflict-based latencies, or you can use instruction-level parallelism.
Edit: Local data share channels are much faster and more numerous than global channels, so optimizing for LDS (local data share) is very important; gathering uniformly on global channels and then scattering on local channels shouldn't be as problematic as scattering on global channels and gathering uniformly on local channels.
http://developer.amd.com/tools-and-sdks/opencl-zone/amd-accelerated-parallel-processing-app-sdk/opencl-optimization-guide/#50401334_pgfId-472173
For an AMD APU with a decent mainboard, you should be able to select n-way channel interleaving or n-way bank interleaving as you wish, if your software cannot be altered.

Is R able to compute contingency tables on big file without putting the whole file in RAM?

Let me explain the question:
I know the functions table or xtabs compute contingency tables, but they expect a data.frame, which is always stored in RAM. It's really painful when trying to do this on a big file (say 20 GB, the maximum I have to tackle).
On the other hand, SAS is perfectly able to do this, because it reads the file line by line and updates the result as it goes. Hence there is only ever one line in RAM, which is much more acceptable.
I have done the same as SAS with ad-hoc Python programs on occasion, when I had to do more complicated things that either I didn't know how to do in SAS or thought were too cumbersome there. Python's syntax and integrated features (dictionaries, regular expressions...) compensate for its weaknesses (mainly speed, but when reading 20 GB, speed is limited by the hard drive anyway).
My question, then: I would like to know if there are packages to do this in R. I know it's possible to read a file line by line, like I do in Python, but computing simple statistics (contingency tables for instance) on a big file is such a basic task that I feel there should be some more or less "integrated" feature to do it in a statistical package.
Please tell me if this question should be asked on "Cross Validated". I had a doubt, since it's more about software than statistics.
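For comparison, the SAS-style streaming approach is doable in plain base R with a file connection; a minimal sketch, assuming a headerless comma-separated file big.csv with the variable of interest in column 3 (the file name and column are hypothetical):
con <- file("big.csv", open = "r")
counts <- integer(0)
repeat {
  lines <- readLines(con, n = 1e5)               # read 100,000 lines per chunk
  if (length(lines) == 0) break
  x <- vapply(strsplit(lines, ","), `[`, "", 3)  # pull out column 3
  chunk <- table(x)
  # merge this chunk's counts into the running totals
  lv <- union(names(counts), names(chunk))
  new <- setNames(integer(length(lv)), lv)
  new[names(counts)] <- counts
  new[names(chunk)] <- new[names(chunk)] + chunk
  counts <- new
}
close(con)
counts  # the contingency table, built one chunk at a time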
You can use the ff package for this. It uses the hard disk drive instead of RAM, but it is implemented in a way that doesn't make it (significantly) slower than the normal way R uses RAM.
This is from the package description:
The ff package provides data structures that are stored on disk but behave (almost) as if they were in RAM by transparently mapping only a section (pagesize) in main memory.
I think this will solve your problem of loading a 20 GB file into RAM. I have used it myself for such purposes and it worked great.
Here is a small example as well, based on the example in the xtabs documentation:
Base R
#example from ?xtabs
d.ergo <- data.frame(Type = paste0("T", rep(1:4, 9*4)),
                     Subj = gl(9, 4, 36*4))
> print(xtabs(~ Type + Subj, data = d.ergo)) # 4 replicates each
Subj
Type 1 2 3 4 5 6 7 8 9
T1 4 4 4 4 4 4 4 4 4
T2 4 4 4 4 4 4 4 4 4
T3 4 4 4 4 4 4 4 4 4
T4 4 4 4 4 4 4 4 4 4
ff package
#convert to ff
library(ff)
d.ergoff <- as.ffdf(d.ergo)
> print(xtabs(~ Type + Subj, data = d.ergoff)) # 4 replicates each
Subj
Type 1 2 3 4 5 6 7 8 9
T1 4 4 4 4 4 4 4 4 4
T2 4 4 4 4 4 4 4 4 4
T3 4 4 4 4 4 4 4 4 4
T4 4 4 4 4 4 4 4 4 4
You can check here for more information on memory manipulation.
