Are CPU registers and CPU cache different?
Yes. A CPU register is a small amount of very fast storage inside the processor that holds the data the CPU is currently operating on, and so facilitates its operations.
A CPU cache is a larger, high-speed volatile memory that sits between the processor and main memory and helps the processor cut down on main-memory accesses.
It is not far off to think of the processor's registers as a level 0 cache, smaller and faster than the other layers of cache between the processor and memory. The difference is only that, from the point of view of the instruction set, cache access is transparent (the cache is reached through a memory address that happens to be cached at the moment), whereas registers are referenced explicitly in each instruction.
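To make the distinction concrete, here is a small illustrative C sketch (the function and variable names are mine, not from the answer above): the code only names ordinary variables; the compiler decides which of them live in registers, and the hardware decides what ends up in the cache.

#include <stddef.h>

/* Sums an array. 's' and 'i' will typically be kept in CPU registers,
 * named explicitly by each machine instruction the compiler emits.
 * a[i] is an ordinary memory load; the cache serves it transparently
 * whenever the containing cache line is already resident. */
double sum(const double *a, size_t n)
{
    double s = 0.0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}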
Registers are special temporary storage locations within the CPU that very quickly accept, store, and transfer the data and instructions that are in immediate use. Cache memory is a very fast memory used by the CPU to hold frequently requested data and instructions.
I'm trying to measure the CPU usage on a dual-core ARM Cortex-A9 processor in order to benchmark it. It will be used as a router. I enabled IP forwarding and I'm running iperf tests while monitoring the CPU usage with top. I'm a little confused about interpreting the results from top.
Cpu0 : 0.0%us, 0.5%sy, 0.0%ni, 59.1%id, 0.0%wa, 0.0%hi, 40.5%si, 0.0%st
Cpu1 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
1) During packet forwarding why is the kernel usage at 0.5% compared to softirq percentage 40.5%? So the effective CPU usage is (40.5 + 0.5)%?
2) Why is the 2nd CPU completely idle?
Thanks!
It means that process-context kernel usage is 0.5%, i.e. the non-IRQ/softirq time spent in the kernel is 0.5%. The 40.5% is softirq, as you say. The effective CPU usage is, as you say, (40.5 + 0.5)%.
Probably because one of the following holds:
a) You have only one hard IRQ for your network device and it is tied to core 0 (see the sketch below for a quick way to check).
b) All IRQs are tied to core 0, even if you have more than one IRQ line or multi-queue support.
c) Your benchmark uses a single TCP/UDP stream, so flow hashing puts everything on core 0 even though you have multiple queues/cores.
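If you want to check point a) for yourself, /proc/interrupts lists per-CPU interrupt counts for every IRQ line. A minimal C sketch that just dumps it (assuming Linux with procfs mounted; you could equally read the file by hand):

#include <stdio.h>

int main(void)
{
    /* Each row is one IRQ line: a count per CPU, the controller, and the device.
     * If your NIC's counts only grow in the CPU0 column, its IRQ is tied to core 0. */
    FILE *f = fopen("/proc/interrupts", "r");
    if (!f) {
        perror("fopen /proc/interrupts");
        return 1;
    }
    char line[512];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}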
I would like to know (in a few words) what are the main differences between OpenMP and MPI.
OpenMP is a way to program on shared memory devices. This means that the parallelism occurs where every parallel thread has access to all of your data.
You can think of it as: parallelism can happen during execution of a specific for loop by splitting up the loop among the different threads.
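For example, here is a minimal OpenMP sketch in C (a hypothetical example, not from the question): the iterations of the loop are divided among threads that all see the same shared array.

#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double a[N];
    double sum = 0.0;

    /* The iterations are split among the threads; all threads share 'a',
     * and the reduction clause combines their partial sums safely. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        a[i] = 0.5 * i;
        sum += a[i];
    }

    printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}

Compile with something like gcc -fopenmp.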
MPI is a way to program on distributed memory devices. This means that the parallelism occurs where every parallel process is working in its own memory space in isolation from the others.
You can think of it as: every bit of code you've written is executed independently by every process. The parallelism occurs because you tell each process exactly which part of the global problem they should be working on based entirely on their process ID.
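A minimal MPI counterpart in C (again a hypothetical example): every process runs the same program in its own memory space, uses its rank to pick its part of the work, and the results are combined by message passing.

#include <stdio.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process sums only its own slice of 0..N-1, chosen by its rank. */
    long long local = 0;
    for (long long i = rank; i < N; i += size)
        local += i;

    /* Partial sums travel as messages and are combined on rank 0. */
    long long total = 0;
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %lld (from %d processes)\n", total, size);

    MPI_Finalize();
    return 0;
}

Build with mpicc and run with something like mpirun -np 4 ./a.out.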
The way in which you write an OpenMP and MPI program, of course, is also very different.
MPI stands for Message Passing Interface. It is a set of API declarations on message passing (such as send, receive, broadcast, etc.), and what behavior should be expected from the implementations.
The idea of "message passing" is rather abstract. It could mean passing message between local processes or processes distributed across networked hosts, etc. Modern implementations try very hard to be versatile and abstract away the multiple underlying mechanisms (shared memory access, network IO, etc.).
OpenMP is an API which is all about making it (presumably) easier to write shared-memory multiprocessing programs. There is no notion of passing messages around. Instead, with a set of standard functions and compiler directives, you write programs that execute local threads in parallel, and you control the behavior of those threads (what resources they should have access to, how they are synchronized, etc.). OpenMP requires compiler support, so you can also look at it as an extension of the supported languages.
And it's not uncommon that an application can use both MPI and OpenMP.
I'd like to encrypt a file and share it, and I'd like the file to be decryptable just one time. I was wondering if there are security or encryption protocols that can be used to implement such a one-time-use scenario. In simple terms, the decryption key would only be good once.
No, it's not possible with any kind of computer and any kind of OS.
What you want is called DRM, and your file would need to be read by a program you've written that destroys the file (and the decryption key) after reading it. But in order to protect the deciphering program from being copied, you'd have to sign the application against your OS and make your OS protect that file from deletion. And in order to protect your OS from being copied along with the file inside it, you'd have to use a computer with a chip in the CPU that makes everything uncopiable... That's called trusted computing.
And even though it may theoretically work, it would still be possible to keep a copy of your file and use a supercomputer for up to 1000 years (or one hour, depending on your algorithm and the size of your key) to find your decryption key, and thus access your precious content.
As an aside, Sony tried putting DRM on its CDs, and that's what it ended up creating.
I have a question about downloading with IDM versus without it.
The situation: we have the same bandwidth, and the server shares the same bandwidth, whether we use IDM or a simple download manager.
So why can we download faster with IDM? What is the reason?
Thanks!
Without a download accelerator, you may not be hitting your own or the remote server's bandwidth bottleneck. This means that either or both of you still have more bandwidth that can be tapped.
Download accelerators tap this extra bandwidth in two ways:
By increasing the number of connections to the server, IDM uses as much of your bandwidth as possible, increasing the proportion of your total internet bandwidth that goes to the download.
The remote server divides its total bandwidth among the connections to it. So multiple connections to the server ensure that the total bandwidth you're tapping is the sum of those divided shares, removing another bottleneck.
See http://en.wikipedia.org/wiki/Download_manager#Download_acceleration for more.
Typically a download manager accelerates by making multiple simultaneous connections to the remote server. I believe IDM may actually make multiple requests to the same file at the same time, and thus trick the server into providing higher bandwidth through the multiple connections. Servers are typically bandwidth limited on a per-connection basis, so by making multiple connections, you get higher total bandwidth.
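To illustrate the mechanism, here is a rough C sketch using libcurl (the URL and range sizes are made up, and it assumes the server honours HTTP Range requests). A real accelerator would fetch the ranges concurrently, one connection per range, and then stitch the parts back together; this sketch fetches two ranges one after the other just to show the request shape.

#include <stdio.h>
#include <curl/curl.h>

/* Download one byte range of 'url' into 'outfile'. */
static int fetch_range(const char *url, const char *range, const char *outfile)
{
    CURL *h = curl_easy_init();
    if (!h)
        return 1;

    FILE *out = fopen(outfile, "wb");
    if (!out) {
        curl_easy_cleanup(h);
        return 1;
    }

    curl_easy_setopt(h, CURLOPT_URL, url);
    curl_easy_setopt(h, CURLOPT_RANGE, range);   /* e.g. "0-1048575" */
    curl_easy_setopt(h, CURLOPT_WRITEDATA, out); /* default callback writes to the FILE* */

    CURLcode rc = curl_easy_perform(h);
    fclose(out);
    curl_easy_cleanup(h);
    return rc != CURLE_OK;
}

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    const char *url = "http://example.com/big.file"; /* hypothetical URL */
    fetch_range(url, "0-1048575",       "part0.bin");
    fetch_range(url, "1048576-2097151", "part1.bin");
    curl_global_cleanup();
    return 0;
}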
I really want to know the essential differences between these filesystems, for example the inode pointer structure and so on!
Thank you
Ext2 is the first version of the filesystem, created in 1993. It is stable and secure and can support volumes of up to 4 TB. It doesn't have any form of journaling, so it can be used for partitions that don't require journaling, such as boot partitions.
Ext3 is more secure and consistent than ext2. It has a journaling function that doesn't require a lot of disk access. It is quite slow compared to ext4. It can be used for files whose size varies a lot and for server files.
Ext4 has higher performance than its predecessors. It uses RAM to optimize read/write operations, reducing access time. It is suggested for desktop use but not so recommended for servers (considering its young age).