What is the difference between
G.729
G.729A
G.729AB
And if I have codecs set up for G.729 in Asterisk, does this mean that G.729A and G.729AB will work?
thanks
Asterisk supports G.729A only, so you have no other options.
G.729B is not compatible with G.729A; G.729AB means that both variants are supported by the switch.
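For what it's worth, "set up for G729" in Asterisk normally just means allowing the codec in the channel configuration. A minimal sip.conf sketch (the codec list and fallback are illustrative, not taken from the question):

```ini
[general]
disallow=all     ; start from an empty codec list
allow=g729       ; per the answer above, Asterisk negotiates the G.729A variant
allow=ulaw       ; an uncompressed fallback codec
```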
G.729A is a compatible, reduced-complexity extension of G.729: it needs less CPU at the cost of slightly lower speech quality, and its bitstream remains interoperable with G.729.
Some of its features are:
Sampling frequency 8 kHz/16-bit (80 samples for 10 ms frames)
Fixed bit rate (8 kbit/s, 10 ms frames)
Fixed frame size (10 bytes for 10 ms frame)
Algorithmic delay is 15 ms per frame, with 5 ms look-ahead delay
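These figures fit together; a quick sanity check of the payload size per frame, plus a rough on-the-wire bandwidth estimate (the 2-frames-per-packet packing and the 40-byte RTP/UDP/IPv4 overhead are illustrative assumptions, not part of the codec spec):

```c
#include <stdio.h>

int main(void)
{
    /* Fixed G.729/G.729A parameters from the list above */
    const double bitrate_bps = 8000.0;  /* 8 kbit/s        */
    const double frame_ms    = 10.0;    /* 10 ms per frame */

    /* Payload per frame: 8000 bit/s * 0.010 s / 8 = 10 bytes */
    double bytes_per_frame = bitrate_bps * (frame_ms / 1000.0) / 8.0;

    /* Illustrative packetization: 2 frames per RTP packet (20 ms) */
    const double frames_per_packet = 2.0;
    const double overhead_bytes    = 40.0; /* RTP(12) + UDP(8) + IPv4(20) */

    double packets_per_sec = 1000.0 / (frame_ms * frames_per_packet);
    double bytes_per_sec   = packets_per_sec *
                             (frames_per_packet * bytes_per_frame + overhead_bytes);

    printf("payload per frame : %.0f bytes\n", bytes_per_frame);
    printf("packets per second: %.0f\n", packets_per_sec);
    printf("IP bandwidth      : %.1f kbit/s\n", bytes_per_sec * 8.0 / 1000.0);
    return 0;
}
```

With these assumptions the payload is 10 bytes per frame, 50 packets per second, and about 24 kbit/s of IP bandwidth for the nominal 8 kbit/s codec.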
G.729b:
Not compatible with G.729 or G.729a
Adds a silence compression scheme built around a voice activity detection (VAD) module that detects whether the signal contains speech.
Includes a discontinuous transmission (DTX) module which decides when to update the background noise parameters for non-speech (noise-only) frames.
Uses 2-byte Silence Insertion Descriptor (SID) frames, transmitted to initiate comfort noise generation (CNG). If transmission simply stopped whenever the link goes quiet because of no speech, the receiving side might assume that the link has been cut. By inserting comfort noise, analog hiss is simulated digitally during silence to assure the receiver that the link is active and operational.
Read more in its Wikipedia article.
Looking at:
OMNET++: How to obtain wireless signal power?
and
https://github.com/inet-framework/inet/blob/master/examples/wireless/scaling/omnetpp.ini
there seem to be no power-consumption-related settings for the packets sent by a UnitDiskRadio.
Is there a way of setting packet power consumption in a unit disk radio medium, or, conversely, communication range in ApskScalarRadioMedium?
UnitDiskRadio is a simplified version of a radio, where you are not interested in the transmission, propagation, attenuation, etc. details. You just want a clear-cut transmission distance: above that, the transmission always fails; below that, it always succeeds. This is simple, fast and suitable if you want to simulate high-level behavior like the application level or routing. You really don't care how much your radio draws from a power grid (or battery) in this case.
On the other hand, if you are interested in low-level details, the whole radio transmission process should be modeled. In this case you model the transmission power (and the power drawn to produce it), and there is no clear-cut transmission range. Whether a transmission succeeds is a probabilistic outcome depending on power, antenna configuration, encoding, modulation, noise and a lot of other stuff, so you cannot set it as a simple "range".
TLDR: No, you cannot set both of them on the same radio.
PS: make sure that you do not mix and match various power parameters. The first question you linked is about getting the power of a received packet (i.e. how strong that signal was when it was received). The second link shows how to configure the transmission power (what goes out on the antenna), and in your question you are referring to power consumption, which is a third thing, meaning how much you draw from a battery to make the transmission. They are NOT the same thing.
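To make the choice concrete, here is a rough omnetpp.ini sketch of the two alternatives; the module paths and parameter names follow recent INET 4.x conventions and may differ in your INET version, and host[*] is a placeholder for your own node vector:

```ini
# Alternative 1: unit-disk model -- you configure a range, not a power
[Config UnitDisk]
*.radioMedium.typename = "UnitDiskRadioMedium"
*.host[*].wlan[*].radio.typename = "UnitDiskRadio"
*.host[*].wlan[*].radio.transmitter.communicationRange = 250m

# Alternative 2: physical-layer model -- you configure a power, the range emerges
[Config ApskScalar]
*.radioMedium.typename = "ApskScalarRadioMedium"
*.host[*].wlan[*].radio.typename = "ApskScalarRadio"
*.host[*].wlan[*].radio.transmitter.power = 2.24mW
```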
I'm a little confused about the inter-character gap in Modbus and whether it's required when a master sends a message to a slave. The protocol spec says you can't have more than a 3.5 character gap between bytes when transmitting, but is there any specific minimum amount of time you must have between bytes?
I've written a Modbus driver (master) that is able to communicate with a variety of devices, and most don't seem to care about any gap between characters when receiving messages. However, I've come across a couple of devices where I was unable to communicate reliably without putting in some kind of delay (measured in microseconds) between bytes, determined by the baud rate.
Is the character gap an absolute requirement, or does it depend on the manufacturer of the device and how they implement the Modbus protocol?
Does Modbus RTU require a gap between characters when transmitting?
No.
In fact the Modbus spec states in section 2.5.1.1 MODBUS Message RTU Framing that "[t]he entire message frame must be transmitted as a continuous stream of characters."
Requiring intercharacter gaps would be contradictory to specifying "a continuous stream".
The protocol spec says you can't have more than a 3.5 character gap between bytes when transmitting ...
You're misquoting the protocol spec.
Only a 1.5 character gap is tolerated between characters in an RTU message.
From the Modbus spec: "If a silent interval of more than 1.5 character times occurs between two characters, the message frame is declared incomplete and should be discarded by the receiver."
A silent (idle) line with a duration of 3.5 characters must precede a message.
IOW a gap of 2 character times (i.e. more than 1.5 and less than 3.5) would prematurely end the current message, and the following characters (of that malformed message) would not be considered the start of a new message; they must be discarded until the line goes idle for at least 3.5 character times.
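For reference, a small sketch of how the two timeouts are commonly derived from the baud rate (one RTU character is 11 bits on the wire; the fixed 750 µs / 1750 µs values above 19200 baud are the spec's recommendation):

```c
#include <stdio.h>

/* Character/frame timing per the Modbus over Serial Line spec.
 * One RTU character is 11 bits (start + 8 data + parity/2nd stop + stop).
 * Above 19200 baud the spec recommends fixed values instead of
 * scaling with the baud rate. */
static void modbus_rtu_timing(unsigned baud,
                              double *t15_us,  /* inter-character timeout */
                              double *t35_us)  /* inter-frame silence     */
{
    if (baud > 19200) {
        *t15_us = 750.0;
        *t35_us = 1750.0;
    } else {
        double char_us = 11.0 * 1e6 / (double)baud; /* one character time */
        *t15_us = 1.5 * char_us;
        *t35_us = 3.5 * char_us;
    }
}

int main(void)
{
    const unsigned rates[] = { 9600, 19200, 115200 };
    for (size_t i = 0; i < sizeof rates / sizeof rates[0]; i++) {
        double t15, t35;
        modbus_rtu_timing(rates[i], &t15, &t35);
        printf("%6u baud: t1.5 = %7.1f us, t3.5 = %7.1f us\n",
               rates[i], t15, t35);
    }
    return 0;
}
```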
... is there any specific minimum amount of time you must have between bytes?
The Modbus spec does not mention any such requirement.
Such a requirement would be impractical.
UARTs (typically) do not have the capability to meter their output by inserting a delay between the transmission of character frames.
Adding such a delay is an additional processor burden, as well as requiring the use of a timer.
On the contrary, UARTs have evolved to transmit characters as fast as the baud rate allows with the least processor intervention, e.g. hardware FIFOs and DMA.
A "minimum amount of time you must have between bytes" is simply a reduction in the effective data rate.
Therefore an appropriate reduction of the baudrate would accomplish the exact same data rate.
Is the character gap an absolute requirement, or does it depend on the manufacturer of the device and how they implement the Modbus protocol?
No, you are probably using too fast a baudrate for the device/circumstances in question.
A microprocessor or microcontroller should be able to keep a UART busy and transmit without any intercharacter gaps.
A UART that requires gaps while receiving is, IMO, a sign of an overloaded system, and such a system is broken.
For reliable communication (without flow control), use a baud rate low enough that metering the transmitted characters is not necessary.
Addendum
Apparently there is at least one UART (from Siemens) that can meter its output by holding the line idle for N bit-times between character frames.
It is at the end of the message that there should be a pause of 3.5 characters or longer.
Usually data transmission protocols include, in the first bytes of the sequence, the number of bytes that follow; Modbus RTU does not send that length, so what determines when a message ends is the pause of 3.5 character times.
If you send the sequence of bytes all at once, there should never be any pause between characters.
If you are writing a master you should not worry about this, since it is the slave that must wait 3.5 character times to know when the master's request is finished.
From the master side you simply wait for the slave to reply; you already know how many bytes the slave is going to send, since in the request you already specified how many bits or 16-bit words you want to read.
And if you have communication problems with some devices, it is probably due to the combination of communication speed and poor line quality. Try a lower baud rate; adding a wait between characters doesn't make much sense to me.
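To illustrate the receiving side, here is a deliberately simplified POSIX sketch that treats roughly 3.5 character times of silence as the end of a frame; /dev/ttyUSB0 and the 9600-baud timing are placeholders, the termios setup is omitted, and a real implementation would also enforce the 1.5-character rule and verify the CRC:

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* Collect bytes until the line has been silent for ~3.5 character
 * times (about 4 ms at 9600 baud with 11-bit characters), which is
 * what marks the end of a Modbus RTU frame. The descriptor is assumed
 * to be an already-configured serial port. Returns bytes received,
 * or -1 on error. */
static ssize_t read_rtu_frame(int fd, unsigned char *buf, size_t bufsize)
{
    size_t len = 0;

    for (;;) {
        fd_set set;
        struct timeval t35 = { .tv_sec = 0, .tv_usec = 4011 };

        FD_ZERO(&set);
        FD_SET(fd, &set);

        /* Wait indefinitely for the first byte, then only t3.5 for the rest. */
        int ready = select(fd + 1, &set, NULL, NULL, len ? &t35 : NULL);
        if (ready < 0)
            return -1;                  /* select() error             */
        if (ready == 0)
            return (ssize_t)len;        /* silence: frame is complete */

        ssize_t n = read(fd, buf + len, bufsize - len);
        if (n <= 0)
            return -1;
        len += (size_t)n;
        if (len == bufsize)
            return (ssize_t)len;        /* buffer full                */
    }
}

int main(void)
{
    /* Placeholder device; real code must also configure baud rate,
     * framing and raw mode with termios (omitted here). */
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    unsigned char frame[256];
    ssize_t n = read_rtu_frame(fd, frame, sizeof frame);
    printf("received %zd bytes\n", n);
    close(fd);
    return 0;
}
```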
I have encountered UARTs which require 2 stop bits on receive. If the receiver is set for 2 stop bits, this may explain the requirement for a gap, which extends the stop period to 2 stop bits or beyond.
Usually, only the first stop bit is checked on receive to detect a framing error, regardless of the stop-bit setting.
If ADPCM can store 16-bit per sample audio into 4-bit per sample, is there a way to store 8-bit per sample audio into 2-bit per sample?
The G.726 standard supersedes G.721 and G.723, merging them into a single standard, and adds a 2-bit ADPCM mode to the 3-, 4- and 5-bit modes from the older standards. These are all very simple and fast to encode/decode. There appears to be no file format for the 2-bit version, but there is a widely re-used open-source Sun library to encode/decode the formats; SpanDSP is just one library that includes the Sun code. These take 16-bit samples as input, but it is trivial to convert 8-bit to 16-bit.
If you want to hear the 2-bit mode you may have to write your own converter that calls into the library.
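Since the Sun/SpanDSP routines take 16-bit samples, the conversion from 8-bit source audio is a one-liner; a sketch assuming unsigned 8-bit PCM (the usual convention in 8-bit WAV files; 8-bit µ-law or A-law would need a lookup table instead):

```c
#include <stddef.h>
#include <stdint.h>

/* Convert unsigned 8-bit PCM (0..255, midpoint 128) to signed 16-bit
 * PCM by recentring around zero and scaling up by 256. */
void pcm_u8_to_s16(const uint8_t *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = (int16_t)(((int)in[i] - 128) << 8);
}
```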
There are also ADPCM specifications from long ago, like "ADPCM Creative Technology", that support low bit rates and sample sizes.
See also the Sox documentation about various old compression schemes.
The number of bits per sample is not strictly related to the dynamic range or the number of bits in the output. For example, the Direct Stream Digital format (https://en.wikipedia.org/wiki/Direct_Stream_Digital) used in Super Audio CD achieves high quality with only 1 bit per sample, but at a 2.8224 MHz sample rate.
As far as I know, the ADPCM compression standard needs 4 bits per sample even if the original uncompressed audio has 8-bit samples. Hence there is NO way to encode audio using 2 bits per sample with ADPCM.
EDIT: I am specifically referring to G.726, which is one of the widely supported speech compression standards in WAV. Personally, I am not aware of a freely available G.727 codec. FFmpeg is one of the libraries with extensive support for audio codecs. You can see the audio codec list supported by it at https://www.ffmpeg.org/general.html#Audio-Codecs. In the list, I do see support for other ADPCM formats, which may be worth exploring.
Currently I have a GStreamer stream being sent over a wireless network. I have a hardware encoder that converts raw, uncompressed video into an MPEG-2 transport stream with H.264 encoding. From there, I pass the data to a GStreamer pipeline that sends the stream out over RTP. Everything works and I'm seeing video; however, I was wondering if there was a way to limit the effects of packet loss by tuning certain parameters on the encoder.
The two main parameters I'm looking at are the GOP Size and the I frame rate. Both are summarized in the documentation for the encoder (a Sensoray 2253) as follows:
V4L2_CID_MPEG_VIDEO_GOP_SIZE:
Integer range 0 to 30. The default setting of 0 means to use the codec default
GOP size. Capture only.
V4L2_CID_MPEG_VIDEO_H264_I_PERIOD:
Integer range 0 to 100. Only for H.264 encoding. Default setting of 0 will
encode first frame as IDR only, otherwise encode IDR at first frame of
every Nth GOP.
Basically, I'm trying to give the decoder as good a chance as possible to produce smooth video playback, even given the fact that the network may drop packets. Will increasing the I-frame rate do this? Namely, since an I-frame doesn't depend on data from previous or future frames, will sending the "full" image more often help? What would be the "ideal" setting for the two above parameters given that the data is being sent across a lossy network? Note that I can accept a slight (~10%) increase in bandwidth if it means the video is smoother than it is now.
I also understand that this is highly decoder dependent, so for the sake of argument let's say that my main decoder on the client side is VLC.
Thanks in advance for all the help.
Increasing the number of I-frames will help the decoder recover more quickly. You may also want to look at limiting the bandwidth of the stream, since that makes it more likely the data will get through. You'll need to watch the data size, though: I-frames are considerably larger than P- or B-frames, and because the encoder will continue to target the specified bitrate, your video quality can suffer greatly.
If you had some control over playback (even locally capturing the stream and retransmitting it to VLC), you could add FEC, which would correct lost packets.
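In case it helps, both encoder parameters are ordinary V4L2 controls, so they can be set with VIDIOC_S_CTRL before starting the pipeline; a minimal sketch in which the device node and the chosen values are placeholders, and the driver may clamp them to the documented ranges:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

/* Set one integer control, reporting failures without aborting. */
static void set_ctrl(int fd, unsigned id, int value, const char *name)
{
    struct v4l2_control ctrl;
    memset(&ctrl, 0, sizeof ctrl);
    ctrl.id = id;
    ctrl.value = value;
    if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
        fprintf(stderr, "failed to set %s\n", name);
}

int main(void)
{
    /* Placeholder device node for the capture/encoder device. */
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Shorter GOPs and more frequent IDR frames let the decoder
     * resynchronise sooner after packet loss, at some bitrate cost. */
    set_ctrl(fd, V4L2_CID_MPEG_VIDEO_GOP_SIZE, 15, "GOP size");
    set_ctrl(fd, V4L2_CID_MPEG_VIDEO_H264_I_PERIOD, 1, "H.264 I/IDR period");

    close(fd);
    return 0;
}
```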
How many packets per second if we use
Speex codec - 16 kHz - H.323, SIP
in the Ekiga softphone?
And how do we calculate it?
In Speex, the frame size is always 20ms, so to minimize delay, it should/will always generate 50 packets per second. Only the packet sizes will vary.
The codec in itself doesn't forbid merging multiple frames inside one packet, but that will raise the delay considerably and would probably not be a good thing for a soft phone to do.
All the details you need and more are also available in the Speex RTP profile docs.
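The calculation itself is just the frame duration turned into a rate; a tiny sketch (the one-frame-per-packet assumption matches the low-delay behaviour described above):

```c
#include <stdio.h>

int main(void)
{
    const double frame_ms = 20.0;     /* Speex frame duration in every mode       */
    const int frames_per_packet = 1;  /* low-delay VoIP: one frame per RTP packet */

    double packets_per_second = 1000.0 / (frame_ms * frames_per_packet);
    printf("packets per second: %.0f\n", packets_per_second);  /* -> 50 */
    return 0;
}
```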