Throughout the CoreFoundation framework source, POSIX filesystem API calls (e.g. open(), stat(), and so on) are wrapped in an idiom wherein a descriptor on /dev/autofs_nowait is acquired – with open(…, 0) – before the POSIX calls are made; afterwards, the descriptor is close()'d before the scope exits.
What is the benefit of doing this? What are the risks?
Does acquiring the /dev/autofs_nowait descriptor have any effect on, or is it affected by, flags to any thusly-wrapped open() calls (e.g. O_NONBLOCK)?
/dev on my machine, running OS X 10.10.5, has other “autofs” entries:
… none of which have man pages available. If these device files might offer benefits in this vein, I would be interested to hear about their use as well.
Addendum: I could not find much on this subject; a Google Plus post from 2011 claims that:
[t]his file is a special device that's monitored by the autofs
filesystem implementation in the kernel. When opened, the autofs
filesystem will not block that process on any I/O operations on an
autofs file system.
I am not quite sure what that means (they were specifically talking about how launchd works, FWIW) but I was curious about this myself, so I wrote a quick context-manager-y RAII struct to try it out – untargeted profiling shows tests with the wrapped POSIX calls completing slightly faster, but within the margin of error; I’ll investigate this tactic with a finer-toothed comb after I get more background on how it all works.
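For reference, a minimal sketch of the kind of wrapper I mean follows; the struct name and the usage comment are mine, not CoreFoundation's:

#include <fcntl.h>
#include <unistd.h>

// Hold a descriptor on /dev/autofs_nowait for the lifetime of a scope, so
// that path lookups made by the wrapped POSIX calls don't block on automounts.
struct autofs_nowait_guard {
    int fd;
    autofs_nowait_guard() : fd(::open("/dev/autofs_nowait", 0)) {}
    ~autofs_nowait_guard() { if (fd >= 0) ::close(fd); }
    autofs_nowait_guard(const autofs_nowait_guard&) = delete;
    autofs_nowait_guard& operator=(const autofs_nowait_guard&) = delete;
};

// Usage: stat()/open() calls made while the guard is alive get the
// "don't wait for automounts" behavior for this process.
// {
//     autofs_nowait_guard guard;
//     struct stat st;
//     ::stat("/Volumes/possibly-automounted/file", &st);
// }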
These devices allowed the implementor(s) to avoid defining a new syscall or ioctl for the functionality; it was probably implemented that way because it was simpler, required updating less code, and did not change the VFS API, which may have been the concerns at the time.
When you open /dev/autofs_nowait and then traverse a path, you still trigger auto-mounts, but you don't wait for them to finish (normally your process would block until the filesystem is mounted or the operation times out), so you may receive an ENOENT when opening a file even if everything goes fine.
OTOH, opening /dev/autofs_notrigger makes the process not trigger auto-mounting at all.
That is all those devices do. The thing is that, in Darwin's implementation, open may block when traversing the filesystem even with O_NONBLOCK or O_NDELAY.
You can follow the flow from the VFS layer, starting with the open operation on a vnode:
vn_open -> vn_open_auth -> namei -> lookup -> ...
Down that path there's no handling of the (non)blocking behavior.
I have some questions on performing file I/O using MPI.
A set of files are distributed across different processes.
I want each process to be able to read the files held by the other processes.
For example, in one-sided communication, each process sets up a window visible to the other processes. I need exactly the same functionality. (Create 'windows' for all files and share them so that any process can read any file from any offset.)
Is this possible in MPI? I read a lot of MPI documentation, but couldn't find exactly this.
The simple answer is that you can't do that automatically with MPI.
You can convince yourself by seeing that MPI_File_open() is a collective call taking an intra-communicator as its first argument and returning a handle to the opened file as its last argument. All processes in this communicator open the file, and therefore all processes must see the file. So unless a process sees a file, it cannot get an MPI_File handle to access it.
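As an aside, here is a minimal sketch of what that looks like (the file name is just an example): every rank in the communicator makes the same call, so every rank must be able to see the file.

#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    // Collective over MPI_COMM_WORLD: all ranks call this, and all ranks
    // must be able to see "data.bin", or the open fails.
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "data.bin",
                  MPI_MODE_RDONLY, MPI_INFO_NULL, &fh);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}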
Now, that doesn't mean there's no solution. A possibility could be to do by hand exactly what you described, namely:
Each MPI process individually opens the file it sees and is responsible for; then
Each of these processes reads its local file into a buffer;
These individual buffers are all exposed, using either one global MPI_Win memory window or several individual ones, ready for one-sided read accesses (see the sketch after this list); and finally
All read accesses to any data previously stored in these individual local files are now done through MPI_Get() calls using the memory window(s).
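A rough sketch of that approach, with illustrative file names and sizes (not a drop-in implementation; it assumes rank 0's file holds at least 100 bytes):

#include <mpi.h>
#include <cstdio>
#include <vector>

// Every rank reads its own local file into a buffer, exposes that buffer in
// an MPI window, and any rank can then fetch bytes from any other rank's
// file with MPI_Get().
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // 1. Read the local file this rank is responsible for (assumed name).
    char fname[64];
    std::snprintf(fname, sizeof(fname), "input_%d.dat", rank);
    std::vector<char> buf;
    if (FILE* f = std::fopen(fname, "rb")) {
        std::fseek(f, 0, SEEK_END);
        buf.resize(std::ftell(f));
        std::rewind(f);
        std::fread(buf.data(), 1, buf.size(), f);
        std::fclose(f);
    }

    // 2. Expose the buffer for one-sided read access.
    MPI_Win win;
    MPI_Win_create(buf.data(), buf.size(), 1, MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    // 3. Any rank may now read any other rank's file; here: 100 bytes
    //    starting at offset 0 of rank 0's file.
    const int nbytes = 100;
    std::vector<char> remote(nbytes);
    MPI_Win_lock(MPI_LOCK_SHARED, 0 /* target rank */, 0, win);
    MPI_Get(remote.data(), nbytes, MPI_CHAR,
            0 /* target rank */, 0 /* offset */, nbytes, MPI_CHAR, win);
    MPI_Win_unlock(0, win);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}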
The real limitation of this approach is that it requires fully reading all of the individual files, so you need sufficient memory per node to store each of them. I'm well aware that this is a very big caveat that could make the solution completely impractical. However, if the memory is sufficient, it is an easy approach.
Another, even simpler, solution would be to store the files on a shared file system, or to have them all copied to every local file system. I imagine this isn't an option, since the question wouldn't have been asked otherwise...
Finally, as a last resort, a possibility I see would be to dedicate an MPI process (or an OpenMP thread of an MPI process) per node to serve its files. This process would just act as a "file server", answering "read" requests coming from the other MPI processes, reading the requested data from the file, and sending it back via MPI. It's a bit lengthy to write, but it should work.
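A rough sketch of that last idea, with rank 0 acting as the file server; the tags, message layout, and file name are assumptions, and each client sends a single request for brevity:

#include <mpi.h>
#include <cstdio>
#include <vector>

static const int TAG_REQ = 1, TAG_DATA = 2;

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {                              // the dedicated "file server"
        FILE* f = std::fopen("local.dat", "rb");
        if (!f) MPI_Abort(MPI_COMM_WORLD, 1);
        for (int served = 0; served < size - 1; ++served) {
            long req[2];                          // {offset, length}
            MPI_Status st;
            MPI_Recv(req, 2, MPI_LONG, MPI_ANY_SOURCE, TAG_REQ,
                     MPI_COMM_WORLD, &st);
            std::vector<char> chunk(req[1]);
            std::fseek(f, req[0], SEEK_SET);
            size_t n = std::fread(chunk.data(), 1, chunk.size(), f);
            MPI_Send(chunk.data(), (int)n, MPI_CHAR, st.MPI_SOURCE,
                     TAG_DATA, MPI_COMM_WORLD);
        }
        std::fclose(f);
    } else {                                      // a client: 64 bytes at offset 0
        long req[2] = {0, 64};
        MPI_Send(req, 2, MPI_LONG, 0, TAG_REQ, MPI_COMM_WORLD);
        std::vector<char> data(req[1]);
        MPI_Recv(data.data(), (int)data.size(), MPI_CHAR, 0, TAG_DATA,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    MPI_Finalize();
    return 0;
}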
What's the UNIX command to see the process table? Remember that the table contains:
process status
pointers
process size
user ids
process ids
event descriptors
priority
etc
The "process table" as such lives in the kernel's memory. Some systems (such as AIX, Solaris and Linux--which is not "unix") have a /proc filesystem which makes those tables visible to ordinary programs. Without that, programs such as ps (on very old systems such as SunOS 4) required elevated privileges to read the /dev/kmem (kernel memory) special device, as well as having detailed knowledge about the kernel memory layout.
Your question is open-ended, and an answer to a specific question you may have had can be looked up in a man page, as @Alfasin suggests in his answer. A lot depends on what you are trying to do.
As @ThomasDickey points out in his response, in UNIX and most of its derivatives, the command for viewing processes being run in the background or foreground is in fact the ps command.
ps stands for 'process status', answering your first bullet item. The command has over 30 options, and depending on what information you seek and the permissions granted to you by the system administrator, you can get various types of information from it.
For example, for the second bullet item on your list above, depending on what you are looking for, you can get information on 3 different types of pointers - the session pointer (with option 'sess'), the terminal session pointer (tsess), and the process pointer (uprocp).
The rest of your items that you have listed are mostly available as standard output of the command.
Some UNIX variants implement a view of the system process table inside the file system to support the running of programs such as ps. This is normally mounted on /proc (see @ThomasDickey's response above).
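For instance, on a Linux system with /proc mounted, a program can read some of the same fields ps reports straight from that process table view; a small illustrative sketch (field layout per proc(5)):

#include <fstream>
#include <iostream>
#include <string>

// Read the first few fields of this process's entry in the Linux process
// table view (/proc). Fields: pid, command (in parentheses), state, ppid.
// Note: parsing with >> assumes the command name contains no spaces.
int main() {
    std::ifstream stat("/proc/self/stat");
    long pid = 0, ppid = 0;
    std::string comm;
    char state = '?';
    stat >> pid >> comm >> state >> ppid;
    std::cout << "pid=" << pid << " comm=" << comm
              << " state=" << state << " ppid=" << ppid << "\n";
    return 0;
}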
Typical reasons for understanding the workings of the command include system-administration responsibilities such as tracking the origin of initiated processes, killing runaway or orphaned processes, examining the size of a process and setting limits where necessary, etc. UNIX developers can also use it in conjunction with IPC features, etc. An understanding of the process table and status will also help with associated UNIX features such as the kvm interface, used to examine crash dumps or to get or set the kernel state.
Hope this helps
As someone who is very new to open-source PBX projects like Asterisk and FreeSWITCH, I am grappling with some information overload. I have read the basic FreeSWITCH docs on the wiki, but still have a few questions. Since I am not very familiar with the terminology, I will try to use close approximations.
Trying to create a small/minimalistic build of FreeSWITCH, that needs to run on an rather old laptop (Celeron 1GHz, 512MB RAM, 20GB HDD, already running Debian "Wheezy"), and set it up as a 6-port GSM-SIP/Jabber gateway. So, by "small" and "minimalistic", I mean one which doesn't have modules/optional-software that is not absolutely necessary (e.g. no need for IVR announcements, or Skype integration) -- to keep memory footprint smallest, and occupy less hard-disk real-estate.
The rough idea is to have 6 GSM ports (via the 'GSMopen' module, similar to chan_dongle) towards the public telephony network, about 60 SIP extensions, and support for up to 6 calls involving GSM ports plus about 6 SIP-SIP calls (intra-PBX) on this setup. I have read that the CPU overhead of the GSMopen module is pretty low, so I am guessing this is possible.
Can someone confirm this to be a realistic goal?
What might be the minimum set of modules to select for minimalistic build?
For modules not chosen during the initial build, can those be added later? If so, would that require me to rebuild FreeSWITCH completely, or only the modules, or would everything already be built so that only configuration changes are needed to ensure the modules are loaded and configured?
Is there any rough estimate of what might be the maximum call-rate that could be supported in such a configuration? For SIP-SIP calls? Given the underpowered processor, and little RAM (as per modern standards), I am guessing that both shall be bottlenecks, but adding RAM might still be possible (even if costly and difficult).
I have read that "hooks" can be created using Lua/Python/Java etc. However, if someone could share a few examples of what is possible using such hooks, it would make the concept clearer. Can one hope to write an application like "missed call log" or "redirect on no answer" using these hooks?
Can someone confirm this to be a realistic goal?
Yes, this is quite realistic. You should aim for as little transcoding as possible, because that's where CPU resources are needed. But even with a 1GHz Celeron, 6 transcoded sessions seem quite realistic. It needs testing, though :)
What might be the minimum set of modules to select for minimalistic build?
Just start with the default list of modules, and add gsmopen (I have no experience with gsm gateways, can't help with that part). The memory footprint is pretty low, and you may need some of those modules later.
For modules not chosen during initial build, can those be added later?
As far as I remember, the wiki describes this process. You edit modules.conf and make the specific module.
Is there any rough estimate of what might be the maximum call-rate that could be supported in such a configuration? For SIP-SIP calls? Given the underpowered processor, and little RAM (as per modern standards), I am guessing that both shall be bottlenecks, but adding RAM might still be possible (even if costly and difficult).
It really depends on the complexity of your dialplan. Each context consists of a number of conditions, which perform regexp matches on channel variables. So, the more complex your dialplan is, the fewer calls per second (CPS) you get. But for a 6-channel gateway, I don't see this being a problem. The GSM network will be much slower than your box :)
I have read that "hooks" can be created using Lua/Python/Java etc. However, if someone could share a few examples of what is possible using such hooks, it would make the concept clearer. Can one hope to write an application like "missed call log" or "redirect on no answer" using these hooks?
You can control every aspect of FreeSWITCH's behavior with such hooks. There are even examples where the complete dialplan is re-implemented by an external program (Kazoo does that).
The simplest mode of operation is when your Lua/JS/Perl/Python script is launched from within the dialplan: then it receives a "session" object, and you can do whatever you want with the call: play sounds, bridge, forward, make a new call and bridge them together, and so on. Here in my blog there's a little practical example.
Then, you can build an external application which connects to the FS socket and monitors the events and performs actions on active calls.
Also, it can be done in the opposite direction: you run a server, and FS connects to it with its socket library.
Also, you can have an HTTP service which delivers pieces of XML configuration to FreeSWITCH, and it requests those on every call (this would be the most CPU-intensive application). This way, you can feed FS from some internal database, and build fault-tolerant systems.
I hope this helps :)
You can also find me on Skype if needed.
FreeSWITCH is not really memory-hungry, and you can simply start with the default set of modules (the best option is to use the prebuilt Debian packages). For example, on my 64-bit machine, the FreeSWITCH process occupies only about 35MB of memory.
freeswitch@vx03:~$ uname -a
Linux vx03 2.6.32-5-xen-amd64 #1 SMP Thu Nov 3 05:42:31 UTC 2011 x86_64 GNU/Linux
freeswitch@vx03:~$ ps -p 11873 v
PID TTY STAT TIME MAJFL TRS DRS RSS %MEM COMMAND
11873 ? S<l 10:29 0 0 258136 36852 2.3 /opt/freeswitch/bin/freeswitch -nc -rp -nonat -u freeswitch -g freeswitch
I will go through the rest of your questions later today
My situation is quite simple: I want to run MPI-enabled software on a single multiprocessor/multicore machine with, let's say, 8 cores.
My implementation of MPI is MPICH2.
As I understand I have a few options:
$ mpiexec -n 8 my_software
$ mpiexec -n 8 -hosts {localhost:8} my_software
or I could also tell Hydra to "fork" rather than "ssh":
$ mpiexec -n 8 -launcher fork my_software
Could you tell me if there will be any differences, or if the behavior will be the same?
Of course, as all my nodes will be on the same machine, I don't want "message passing" to be done through the network (not even the local loopback) but through shared memory. As I understand it, MPI will figure that out itself, and that will be the case for all three options.
Simple answer:
All methods should lead to the same performance. You'll have 8 processes running on the cores and using shared memory.
Technical answer:
"fork" has the advantage of compatibility, on systems where rsh/ssh process spawning would be a problem. But can, I guess, only start processes locally.
In the end (unless MPI is strangely configured), all processes on the same CPU will end up using "shared memory", and the launcher or the host specification method should not matter for this. The communication method is handled by another parameter (-channel ?).
The specific syntax of the host specification method can allow binding processes to specific CPU cores, in which case you might have slightly better/worse performance depending on your application.
If you've got everything set up correctly, then I don't see that your program's behaviour will depend on how you launch it, unless, that is, it fails to launch under one or other of the options. (Which would mean that you didn't have everything set up correctly in the first place.)
If memory serves me well, the way in which message passing is implemented depends on the MPI device(s) you use. It used to be that you would use the MPICH ch_shmem device. This managed the passing of messages between processes, but it did use buffer space, and messages were sent to and from this space. So message passing was done, but at memory-bus speed.
I write in the past tense because it's a while since I was that close to the hardware that I knew (or, frankly, cared) about low-level implementation details and more modern MPI installations might be a bit, or a lot, more sophisticated. I'll be surprised, and pleased, to learn that any modern MPI installation does, in fact, replace message-passing with shared memory read/write on a multicore/multiprocessor machine. I'll be surprised because it would require translating message-passing into shared memory access and I'm not sure that that is easy (or easy enough to be feasible) for the whole of MPI. I think it's far more likely that current implementations still rely on message-passing across the memory bus through some buffer area. But, as I state, that's only my best guess and I'm often wrong on these matters.
The POSIX standard defines several routines for thread synchronization, based on concepts like mutexes and condition variables.
My question now is: are these (e.g. pthread_cond_init(), pthread_mutex_init(), pthread_mutex_lock(), and so on) system calls or just library calls? I know they are included via "pthread.h", but do they ultimately result in a system call and are therefore implemented in the kernel of the operating system?
On Linux a pthread mutex makes a "futex" system call, but only if the lock is contended. That means that taking a lock no other thread wants is almost free.
In a similar way, sending a condition signal is only expensive when there is someone waiting for it.
So I believe that your answer is that pthread functions are library calls that sometimes result in a system call.
Whenever possible, the library avoids trapping into the kernel for performance reasons. If you already have some code that uses these calls you may want to take a look at the output from running your program with strace to better understand how often it is actually making system calls.
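A tiny experiment along those lines (compile with -lpthread and run under strace -c): an uncontended mutex locked and unlocked a million times should show essentially no futex() calls on Linux, because the fast path stays in user space.

#include <pthread.h>
#include <cstdio>

int main() {
    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    // Uncontended: no other thread ever wants this lock, so each
    // lock/unlock pair should complete without entering the kernel.
    for (int i = 0; i < 1000000; ++i) {
        pthread_mutex_lock(&m);
        pthread_mutex_unlock(&m);
    }
    std::puts("done");
    return 0;
}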
I never looked into all those library calls, but as far as I understand, they all involve kernel operations, as they are supposed to provide synchronisation between processes and/or threads at a global level - I mean at the OS level.
For a mutex, for instance, the kernel needs to maintain a thread list: the threads that are currently sleeping, waiting for a locked mutex to be released. When the thread that currently holds/owns that mutex invokes the kernel with pthread_mutex_unlock(), the system call will browse that list to find the highest-priority thread waiting for the mutex release, flag it as the new mutex owner in the kernel's mutex structure, and then give the CPU away (a "context switch") to the new owner thread, which then returns from its POSIX library call to pthread_mutex_lock().
I only see cooperation with the kernel when it involves IPC between processes (I am not talking about threads within a single process). Therefore, I expect those library calls to invoke the kernel.
When you compile a program on Linux that uses pthreads, you have to add -lpthread to the compiler options. By doing this, you tell the linker to link against libpthread. So, on Linux, they are calls into a library.