First, let me describe the call flow and the nodes involved.
UA1 <----> Proxy1 (Kamailio)/RTPProxy1 <----> Asterisk <----> Proxy2 (Kamailio)/RTPProxy2 <----> UA2
Currently, Asterisk acts as a B2BUA, and location lookup/registration is handled by the proxies. Asterisk is in both the signaling and the media (audio) path.
Problem Statement:
Asterisk should be in the audio path but not the video path when the call is an audio+video call. Audio should go from UA1 to RTPProxy1 to Asterisk to RTPProxy2 to UA2 and back, while video should go from UA1 to RTPProxy1 to RTPProxy2 to UA2 directly.
Question:
Can Asterisk be configured/programmed so that it negotiates the video IP/port of RTPProxy1/2, while for audio it keeps negotiating its own IP and port as it currently does?
Thanks
Abhijit
No, Asterisk's video support is very limited. The negotiation options are the same for both media types, so video will be handled the same way as audio.
If you want them handled differently, create TWO calls: one audio-only call and one video call without audio.
However, since you use Kamailio as the proxy, it is in theory POSSIBLE to make it work the way you want. But it is very unlikely that your UA will support that (at least I have never heard of one that does).
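To illustrate the "two calls" idea: the B2BUA would take the original offer and generate an audio-only offer for one leg and a video-only offer for the other. Here is a minimal Python sketch of that SDP split; this is not Asterisk's API, just the general technique, and a real implementation would also have to patch c= lines and reject unused streams with port 0:

def split_sdp(sdp: str, keep: str) -> str:
    """Return an SDP body containing only the m= sections whose
    media type matches keep ("audio" or "video")."""
    header, sections, current = [], [], None
    for line in sdp.splitlines():
        if line.startswith("m="):
            current = [line]
            sections.append(current)
        elif current is None:
            header.append(line)   # session-level lines (v=, o=, s=, t=, ...)
        else:
            current.append(line)  # media-level lines (c=, b=, a=, ...)
    kept = [l for s in sections if s[0].startswith("m=" + keep) for l in s]
    return "\r\n".join(header + kept) + "\r\n"

# One offer per leg:
# audio_offer = split_sdp(original_offer, "audio")
# video_offer = split_sdp(original_offer, "video")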
How could I proxy an RTMP stream?
I have two Raspberry Pis streaming live video from raspicams on my LAN. Each Raspberry Pi sends the video to ffmpeg, which wraps it in FLV and sends it to crtmpserver.
A third server, running nginx, has a static HTML page with two instances of jwplayer, each pointing to one Raspberry Pi.
The setup is just like this one.
The web server uses authentication, and I'd like the streams not to be public either.
I'm thinking of trying nginx-rtmp-module, but I am not sure if it would help me. Also, it seems dormant and has many open issues.
I'm open to suggestions, thanks in advance!
You can use MonaServer with this client (copy it into the www/ directory of MonaServer), which listens on UDP port 6666 and waits for an FLV stream to publish it under the name "file".
Then you should already be able to play your stream with jwplayer (at the address rtmp:///file) or with any other player. MonaServer supports the HTTP protocol, so you can host your HTML page without nginx if you want.
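For testing, you can also push an FLV file to that UDP port yourself. A rough Python sketch, where the host address is a placeholder and the fixed sleep is only crude pacing (a real sender would pace by the FLV timestamps):

import socket, sys, time

MONA_HOST, MONA_PORT = "192.168.1.10", 6666  # placeholder: your MonaServer address
CHUNK = 1316                                 # keep each datagram well under the MTU

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
with open(sys.argv[1], "rb") as flv:         # e.g. python push_flv.py capture.flv
    while True:
        data = flv.read(CHUNK)
        if not data:
            break
        sock.sendto(data, (MONA_HOST, MONA_PORT))
        time.sleep(0.001)                    # crude pacing; see note above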
Now, if you want to filter subscriptions to "file", you need to write a client:onSubscribe function in your main.lua script, like this:
function onConnection(client)
  INFO("Connection from ", client.address)
  -- Per-client handler: called whenever this client tries to play a stream
  function client:onSubscribe(listener)
    INFO("Subscribing to ", listener.publication.name, "...")
    if not client.right then
      -- Raising an error rejects the subscription
      error("no rights to play it")
    end
  end
end
(Here you need to replace "not client.right" with your own authentication check, implemented for your purposes.)
Going further, you could use another Flash video client that supports RTMFP in order to handle a large number of clients. Contact me (jammetthomas AT gmail.com) for more information.
I just sniffed some traffic using Wireshark and noticed that YouTube traffic relies on TCP. I thought they were using UDP? But it looks as if they use HTTP octet streams. Is YouTube really using TCP for streams, or am I missing something?
Because they need everything TCP provides (slow start, transmit pacing, exponential backoff, receive windows, reordering, duplicate rejection, and so on), they would either have to use TCP or try to do all those things themselves. There's no way they could do that better than each operating system's optimized TCP implementation.
Obviously, Google is currently experimenting with its own protocol implementations, like QUIC (Quick UDP Internet Connections), as one can see when examining the HTTP response:
HTTP/1.1 200 OK
...
Content-Type: video/mp4
Alternate-Protocol: 80:quic
...
However, currently, they seem to rely on TCP, just like David mentioned before.
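You can check this yourself by inspecting the response headers. A small Python sketch; the host is just an example, "Alternate-Protocol" was the header in use at the time, and newer servers advertise QUIC via "Alt-Svc" instead:

import http.client

conn = http.client.HTTPSConnection("www.youtube.com")
conn.request("HEAD", "/")
resp = conn.getresponse()
# Look for a QUIC advertisement among the response headers.
for name in ("Alternate-Protocol", "Alt-Svc"):
    value = resp.getheader(name)
    if value:
        print(name + ": " + value)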
From http://www.crazyengineers.com/threads/youtube-use-tcp-or-udp.38419/:
...of course the YouTube page uses HTTP [which is over TCP]. But the real thing does not happen via the HTTP page; it happens via the Flash object that is embedded in that page. The Flash object that appears on YouTube is the Flash video player, which acts as an iframe (a technically incorrect term) for the content being called for streaming. For storing the media content, YouTube has installed a media server whose content gets called when you press the play button.
For streaming media to the Flash player, the Real Time Streaming Protocol (RTSP) is used. The play button on the Flash player acts as an RTSP invoker for the media being called, and the media is streamed via UDP packets. In fact, you don't need to navigate away from the page, because it is the embedded object, not the HTTP page, that calls for the video; but as the object is embedded in the HTTP page, once you close the page the object also gets closed.
I am using mochiweb for a server that may also get a TCP connection on which the client sends a simple string (without a newline; the string is not HTTP). Mochiweb uses HTTP sockets and therefore fails to detect this (I don't even get the http_error that I can easily get in mochiweb). How can I solve this? Ideally I would like to change the mochiweb code to do setopt({packet, http_or_raw}), but that kind of option does not exist. How would you recommend handling this? My current idea is to modify mochiweb and use erlang:decode_packet; is there a better approach?
EDIT:
More info.
Our server is a WebSocket service. We wish to allow people without a WS-supporting browser to use it, so we use a Flash object to do WebSocket when the browser can't. The Flash object needs to get a Flash policy file, and Flash forces the file to be served from one of two places:
- port 843 (hard-coded in Flash)
- the port of the WS service
The Flash policy protocol is NOT HTTP based.
Amazon ELB does not allow port forwarding for most ports below 1024, so we implemented the Flash policy server on the same port via a patch to mochiweb (https://github.com/nivertech/mochiweb/tree/ori_flash_170811).
Any advice?
mochiweb isn't designed to handle this use case: if the input doesn't look like HTTP, the connection is closed and discarded. You would have to go around mochiweb_http for this purpose. I'd suggest using an alternate port, or making the traffic look like HTTP.
If I really wanted to do what you say you want to do, I would copy mochiweb_http.erl to some other name (e.g. sometimes_not_http.erl) and make the appropriate changes to loop/2 and request/2. Then, instead of adding mochiweb_http to your supervisor, you'd add sometimes_not_http. It is neither necessary nor recommended to modify mochiweb in place.
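The core of those loop/2 changes is a one-byte peek to decide which protocol you are looking at. A minimal sketch of that dispatch in Python rather than Erlang (not mochiweb's API; handle_http is a placeholder for handing the socket to a real HTTP stack):

import socket

# Flash policy responses must be NUL-terminated.
POLICY = (b'<?xml version="1.0"?>'
          b'<cross-domain-policy>'
          b'<allow-access-from domain="*" to-ports="*"/>'
          b'</cross-domain-policy>\x00')

def handle_http(conn):
    # Placeholder: hand the socket off to your real HTTP stack here.
    conn.close()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 8000))
srv.listen(5)

while True:
    conn, _ = srv.accept()
    first = conn.recv(1, socket.MSG_PEEK)  # peek at the first byte without consuming it
    if first == b"<":
        # Flash policy requests start with "<policy-file-request/>"
        conn.recv(64)                      # drain the request
        conn.sendall(POLICY)
        conn.close()
    else:
        handle_http(conn)                  # anything else is assumed to be HTTP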
I'm trying to write a small program so that I can talk to Omegle strangers via the command line, for school. However, I'm having some issues. I'm sure I could solve the problem if I could view the headers being sent; however, if you talk to a stranger on Omegle while Live HTTP Headers (or a similar plug-in or program) is running, the headers don't show up. Why is this? Are they not sending HTTP headers and using a different protocol instead?
I'm really lost with this, any ideas?
I had success writing a command-line Omegle chat client, although it is hardcoded in C for POSIX and curses.
I'm not sure what exactly your problem is; maybe it's just something with your method of reverse engineering Omegle's protocol. If you want to make a chat client, use a network packet analyzer such as Wireshark (or, if you're on a POSIX system, I recommend tcpdump), study exactly what data is sent and received during a chat session, and have your program emulate what the default web client is doing. Another option is to decompile/reverse engineer the default web client itself, which would be a more thorough method but more complicated.
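Once you see the actual requests in a capture, replaying them is straightforward. A hypothetical Python sketch; the path and form fields below are placeholders, not a documented API, so substitute whatever your own capture actually shows:

import http.client, urllib.parse

# Hypothetical replay of a captured request; path and fields are assumptions.
conn = http.client.HTTPConnection("omegle.com")
body = urllib.parse.urlencode({"rcs": "1"})
conn.request("POST", "/start", body,
             {"Content-Type": "application/x-www-form-urlencoded"})
print(conn.getresponse().read())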
I want to monitor the websocket traffic (like to see what version of the protocol the client/server is using) for debugging purposes. How would I go about doing this? Wireshark seems too low level for such a task. Suggestions?
Wireshark sounds like what you want, actually. There is very little framing or structure to WebSockets after the handshake (so you do want something low-level), and even if there were, Wireshark would soon have (or already has) the ability to parse it and show you the structure.
Personally, I often capture with tcpdump and then parse the data later using Wireshark. This is especially nice when you may not be able to run Wireshark on the device where you want to capture the data (e.g. a headless server). For example:
sudo tcpdump -w /tmp/capture_data -s 8192 port 8000
Alternatively, if you have control over the WebSocket server (or proxy), you could always print out the sent and received data. Note that since WebSocket frames start with '\x00', you will want to avoid printing that byte raw, since in many languages '\x00' means the end of the string.
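For example, a tiny Python sketch that logs frame bytes as hex so the leading '\x00' can't truncate anything:

def dump_frame(direction: str, data: bytes) -> None:
    # Hex-encode the payload so NUL bytes can't truncate the log line.
    print(direction + " (" + str(len(data)) + " bytes): " + data.hex())

dump_frame("send", b"\x00hello\xff")  # an old-style (pre-RFC 6455) frame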
If you're looking for the actual data sent and received, recent Chrome Canary and Chromium builds now have a WebSocket message frame inspection feature.
You can find details in this thread.
I think you should use Wireshark.
Steps
Open Wireshark.
Go to Capture > Interfaces and start a capture on the appropriate device.
Write the filter rule tcp.dstport == your_websocket_port.
Hit Apply.
For a simple check, Wireshark is too complex; I only wanted to see whether the connection could be established or not. The following Chrome plugin, "Simple WebSocket Client" (link: https://chrome.google.com/webstore/detail/simple-websocket-client/pfdhoblngboilpfeibdedpjgfnlcodoo?hl=en), works like a charm.