Can docker containers share a unix abstract socket like the ones for DBUS?
If it can be done, how do you do it?
If it cannot (or cannot yet), is there a way to share a DBus connection between the host and containers, or between containers?
Here is the answer from another site:
DBus uses abstract sockets, which are network-namespace specific.
So the only real way to fix this is to not use a network namespace
(i.e. docker run --net=host). Alternatively, you can run a process on
the host which proxies access to the socket. I think that's what
xdg-app does basically (also for security reasons to act as a filter).
There might be some other way, but that's all I can think of offhand.
http://ask.projectatomic.io/en/question/3647/how-to-connect-to-session-dbus-from-a-docker-container/
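For what it's worth, here is a minimal sketch in Python of the "proxy on the host" idea from the quoted answer. Everything specific in it is an assumption: the abstract bus name /tmp/dbus-EXAMPLE (take the real, randomized name from DBUS_SESSION_BUS_ADDRESS on the host) and the proxy path /tmp/dbus-proxy.sock, which you would bind-mount into the container. It does no filtering, so it provides none of the security benefits the answer mentions.

```python
# Host-side proxy sketch: listen on a filesystem socket (bind-mountable into
# a container) and forward bytes to the session bus's abstract socket.
# The abstract name below is a placeholder; check DBUS_SESSION_BUS_ADDRESS.
import os
import socket
import threading

ABSTRACT_BUS = "\0/tmp/dbus-EXAMPLE"   # leading NUL = abstract namespace
PROXY_PATH = "/tmp/dbus-proxy.sock"

def pump(src, dst):
    # Copy bytes one way until the sender closes, then pass the EOF along.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        try:
            dst.shutdown(socket.SHUT_WR)
        except OSError:
            pass

def handle(client_sock):
    upstream = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    upstream.connect(ABSTRACT_BUS)
    threading.Thread(target=pump, args=(client_sock, upstream), daemon=True).start()
    threading.Thread(target=pump, args=(upstream, client_sock), daemon=True).start()

if os.path.exists(PROXY_PATH):
    os.unlink(PROXY_PATH)
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(PROXY_PATH)
server.listen(5)
while True:
    conn, _ = server.accept()
    handle(conn)
```

Inside the container you would then point clients at it with DBUS_SESSION_BUS_ADDRESS=unix:path=/tmp/dbus-proxy.sock. Note that D-Bus authentication still has to succeed over the forwarded connection, so a container running as a different UID than the host session may need extra work.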
I want to run an MPI job on my Kubernetes cluster. The context is that I'm actually running a modern, nicely containerised app but part of the workload is a legacy MPI job which isn't going to be re-written anytime soon, and I'd like to fit it into a kubernetes "worldview" as much as possible.
One initial question: has anyone had any success in running MPI jobs on a kube cluster? I've seen Christian Kniep's work in getting MPI jobs to run in docker containers, but he's going down the docker swarm path (with peer discovery using consul running in each container) and I want to stick to kubernetes (which already knows the info of all the peers) and inject this information into the container from the outside. I do have full control over all the parts of the application, e.g. I can choose which MPI implementation to use.
I have a couple of ideas about how to proceed:
1. Fat containers containing slurm and the application code -> populate the slurm.conf with appropriate info about the peers at container startup -> use srun as the container entrypoint to start the jobs.
2. Slimmer containers with only OpenMPI (no slurm) -> populate a rankfile in the container with info from outside (provided by kubernetes) -> use mpirun as the container entrypoint.
3. An even slimmer approach, where I basically "fake" the MPI runtime by setting a few environment variables (e.g. the OpenMPI ORTE ones) -> run the mpicc'd binary directly (where it'll find out about its peers through the env vars).
4. Some other option.
5. Give up in despair.
I know trying to mix "established" workflows like MPI with the "new hotness" of kubernetes and containers is a bit of an impedance mismatch, but I'm just looking for pointers/gotchas before I go too far down the wrong path. If nothing exists I'm happy to hack on some stuff and push it back upstream.
I tried MPI jobs on Kubernetes for a few days and solved it by using dnsPolicy: None and dnsConfig (the CustomDNS=true feature gate is needed).
I pushed my manifests (as a Helm chart) here:
https://github.com/everpeace/kube-openmpi
I hope it helps.
Assuming you don't want to use a hardware-specific MPI library (for example, anything that uses direct access to the communication fabric), I would go with option 2.
First, implement a wrapper for mpirun which populates the necessary data using the Kubernetes API, specifically using Endpoints if you use a Service (might be a good idea); it could also scrape the pods' exposed ports directly. See the sketch after this list.
Second, add some form of checkpoint program that can be used for "rendezvous" synchronization before starting the actual code (I don't know how well MPI deals with ephemeral nodes). This is to ensure that when mpirun starts it has a stable set of pods to use.
And finally, build a container with the necessary code and, I guess, an SSH service for mpirun to use to start processes in the other pods.
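A rough sketch of what the wrapper in the first step could look like, using the official kubernetes Python client. The Service name mpi-workers, the default namespace, the hostfile path, and ./my-mpi-binary are all placeholders, and the rendezvous logic from the second step is omitted.

```python
# Hypothetical mpirun wrapper: read the Endpoints of a Service backing the
# worker pods, write an OpenMPI hostfile, then hand off to mpirun.
import subprocess
from kubernetes import client, config

config.load_incluster_config()   # assumes the wrapper runs inside the cluster
v1 = client.CoreV1Api()

# Pod IPs currently backing the (assumed) "mpi-workers" Service.
endpoints = v1.read_namespaced_endpoints("mpi-workers", "default")
ips = [addr.ip
       for subset in (endpoints.subsets or [])
       for addr in (subset.addresses or [])]

with open("/tmp/hostfile", "w") as f:
    for ip in ips:
        f.write(f"{ip} slots=1\n")

subprocess.run(
    ["mpirun", "--hostfile", "/tmp/hostfile",
     "-np", str(len(ips)), "./my-mpi-binary"],
    check=True,
)
```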
Another interesting option would be to use StatefulSets, possibly even running with SLURM inside, to implement a "virtual" cluster of MPI machines running on kubernetes.
This provides stable hostnames for each node, which would reduce the problem of discovery and of keeping track of state. You could also use statefully-assigned storage for the container's local work filesystem (which, with some work, could be made to always refer to, for example, the same local SSD).
Another benefit is that it would probably be the least invasive to the actual application.
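To illustrate why the stable hostnames help: with a StatefulSet named mpi governed by a headless Service also named mpi in the default namespace (all assumed names), pod DNS names are predictable, so a hostfile can be generated without talking to the API server at all.

```python
# StatefulSet pods get stable DNS names of the form
# <pod>.<service>.<namespace>.svc.cluster.local, e.g. mpi-0.mpi.default. ...
REPLICAS = 4   # assumed; in practice pass this in via an env var

with open("/tmp/hostfile", "w") as f:
    for i in range(REPLICAS):
        f.write(f"mpi-{i}.mpi.default.svc.cluster.local slots=1\n")
```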
I'm trying to create something like this: the server containers each have port 8080 exposed and accept requests from the client but, crucially, they are not allowed to communicate with each other.
The problem here is that the server containers are launched after the client container, so I can't pass container link flags to the client like I used to, since the containers it's supposed to link to don't exist yet.
I've been looking at the newer Docker networking stuff, but I can't use a bridge because I don't want server cross-communication to be possible. It also seems to me like one bridge per server doesn't scale well, and would be difficult to manage within the client container.
Is there some kind of switch-like docker construct that can do this?
It seems like you will need to create multiple bridge networks, one per container. To simplify that, you may want to use docker-compose to specify how the networks and containers should be provisioned, and have the docker-compose tool wire it all up correctly.
Resources:
https://docs.docker.com/engine/userguide/networking/dockernetworks/
https://docs.docker.com/compose/
https://docs.docker.com/compose/compose-file/#version-2
One more side note: I think that exposed ports are accessible to all networks. If that's right, you may be able to set all of the servers' networking to none and rely on the exposed ports to reach the servers.
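As a concrete sketch of the one-bridge-per-server wiring, here it is with the Docker SDK for Python (pip install docker) instead of docker-compose; the image names and server count are made up. Since the question says the servers start after the client, the networks are created up front and the client is attached to all of them first.

```python
import docker

d = docker.from_env()

# One bridge per (future) server; the client joins all of them, each server
# joins exactly one, so servers cannot reach each other.
nets = [d.networks.create(f"srv-net-{i}", driver="bridge") for i in range(3)]

client_container = d.containers.run(
    "my-client-image", detach=True, name="client", network=nets[0].name)
for net in nets[1:]:
    net.connect(client_container)

# Later, as each server is launched, drop it into its own bridge.
for i, net in enumerate(nets):
    d.containers.run("my-server-image", detach=True,
                     name=f"server-{i}", network=net.name)
```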
Hope this is relevant to your use case - I'm attempting to infer the context of your actual application from the diagram and comments. I'd recommend you go the service-discovery route. It may involve a bit of simple API work over a central store (say, Redis or SkyDNS), but it would keep things simple in the long run.
Kubernetes, for instance, uses SkyDNS to do so with DNS. At the end of the day, any orchestration tool of your choice would most likely do something like this out of the box: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns
The idea is simple:
Use a DNS container that keeps entries for newly spawned servers.
Allow the client container to query it for a list of servers, e.g. picture a DNS response with a bunch of server-<<ISO Timestamp of Server Creation>> entries.
Disallow the server containers read-access to this DNS (how to manage this permission configuration without indirection, i.e. without proxying through an endpoint that allows writing into the DNS container but not reading, is going to be exotic).
Bonus Edit: I just realised you can use a simpler Redis-like setup to do this, and that DNS might just be overengineering :)
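A minimal sketch of that Redis-like setup with redis-py; the Redis hostname "discovery" and the key "servers" are assumptions. Servers register themselves at startup and the client reads the current membership on demand (a real version would want heartbeats or TTLs so dead servers drop out).

```python
import socket
import redis

r = redis.Redis(host="discovery", port=6379)

# In each server container, once at startup:
r.sadd("servers", socket.gethostname())

# In the client container, whenever it needs the list:
servers = [name.decode() for name in r.smembers("servers")]
print(servers)
```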
I'm writing a bit of desktop software which has two components. Component B queries component A. Creating a web service seems like an ideal way to do IPC in principle. The data model fits, there are ready-made client and server libraries, a well known way to encode and decode parameters etc.
But setting up an HTTP server on a network socket doesn't seem right for a local application. For example what port do I choose? I don't really want people to be able to scan and talk to the app from outside etc.
So I was thinking that I might be able to do HTTP over a domain socket. Does that make any sense? Is there any precedent for it? Is there an equivalent protocol that I could use for IPC which has the same properties as HTTP (requests for specified resources (URIs), encoded parameters, responses)?
Looking for C libraries (and possibly Go and ObjC for bonus points).
Binding to the loopback interface only (127.0.0.1) solves your "external visibility" problem: only processes on the local machine will be able to connect.
It does not solve your port allocation problem, though: the port number you choose might be taken by the time your app starts. Then your server can't bind, and your client connects to whatever other process is bound to your port.
CORBA is old and less hip, but its implementations tend to have already figured out the problems you haven't thought of yet.
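To the original question: HTTP over a Unix domain socket does make sense and has precedent; Docker's own daemon API, for one, is served over /var/run/docker.sock. Here is a minimal Python sketch of the server side, with an arbitrary placeholder socket path (the same idea carries over to C and Go):

```python
# Serve HTTP on a Unix domain socket instead of a TCP port: no port to
# choose, no external visibility, and filesystem permissions act as the ACL.
import os
import socketserver
from http.server import BaseHTTPRequestHandler

SOCKET_PATH = "/tmp/component-a.sock"   # arbitrary placeholder path

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from component A\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def address_string(self):
        # On AF_UNIX the client address is a (usually empty) path, which
        # would break the default logging implementation.
        return "local"

if os.path.exists(SOCKET_PATH):
    os.unlink(SOCKET_PATH)

with socketserver.UnixStreamServer(SOCKET_PATH, Handler) as server:
    server.serve_forever()
```

You can exercise it with curl --unix-socket /tmp/component-a.sock http://localhost/ (the hostname is ignored; the socket path does the addressing).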
Is it possible to connect to an AIR app running on another machine via a socket (assuming we know the IP) or some other mechanism (which doesn't use Cirrus/Stratus)? If it is, can someone please help me with how?
Let me rephrase the question: I don't want to connect to a server over a socket. I would like to know whether one AIR app on machine A can connect to another AIR app on machine B via sockets without Cirrus. I'm not asking for someone else to do my work; I couldn't find any documentation on, or evidence of, the above being possible. My conclusion for now is that it is not possible, but I would just like this to be verified by other people (experts).
Absolutely, AS3 supports sockets. http://www.ultrashock.com/forum/viewthread/81676/
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/Socket.html?filter_flash=cs5&filter_flashplayer=10.2&filter_air=2.6
There are two ways to do it. One AIR app can act as a server by creating a ServerSocket object, while the other app connects to it with the Socket class. The other way is to use the DatagramSocket class.
In both cases, the trick is that because of network address translation, the IP address to use is not always easily discoverable unless at least one of the computers has a static IP. If both computers are on the same network subnet you can look up the IP address needed to reach one computer from the other manually. Otherwise, the IP one computer must use to reach the other won't be the same IP that the computer sees for itself. This matchmaking is the service that Stratus/Cirrus provides.
See http://www.brynosaurus.com/pub/net/p2pnat/ for a description of the problem.
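For illustration, here is the shape of the ServerSocket/Socket arrangement that answer describes, sketched in Python since the pattern itself is language-agnostic; the port and LAN IP are made-up values, and in AS3 the ServerSocket and Socket classes play the two roles.

```python
import socket

# On machine A (the "server" app): accept one connection and greet the peer.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 5000))   # 5000 is an arbitrary port choice
srv.listen(1)
conn, addr = srv.accept()
print("peer connected from", addr)
conn.sendall(b"hello\n")

# On machine B (the "client" app), assuming A is reachable at 192.168.1.10:
#   cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
#   cli.connect(("192.168.1.10", 5000))
#   print(cli.recv(64))
```

The direct connection only works if machine B can actually route to that IP; across NATs that is exactly the matchmaking gap Cirrus/Stratus fills.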
I'm trying to understand virtual terminal access. I was wondering if anyone knows any sources for the Virtual Terminal Access protocol, and other sources for protocols like FTP, HTTP, and remote procedure calls.
The RFC Sourcebook...
I'm not sure about the 'Virtual Terminal Access Protocol' though. That's a new one to me. Usually, if you're looking to communicate with a terminal you have to know the model of the specific terminal because there are so many different terminal specifications.
The RFC Sourcebook, at the least, will give you a great resource to help implement FTP, HTTP, and RPC.
If you want to see a great example of a virtual terminal, check out PuTTY.
I suspect you meant the Virtual Terminal Protocol that was part of the ISO protocol stack. It was never widely deployed. The logical Internet equivalent was telnet, which, while extremely useful in its day, was insecure and has since been replaced by ssh.