Trimming 0x00 in received DatagramPacket

In my Java app I receive DatagramPacket(s) through a DatagramSocket. I know the maximum number of bytes a packet can contain, but the actual length of each packet varies (without exceeding the maximum).
Let's assume MAX_PACKET_LENGTH = 1024 (bytes). Every received DatagramPacket is backed by a 1024-byte buffer, but not all of those bytes always contain information. A packet may carry 10 bytes of useful data, with the remaining 1014 bytes filled with 0x00.
I am wondering if there is an elegant way to trim these unused 0x00 bytes in order to pass only the useful data to the other layer? (Maybe some native Java method? Looping over the packet contents and analysing them is not the desired solution :))
Thanks for all hints.
Piotr

You can call getLength() on the DatagramPacket to get the ACTUAL length of the received data, which may be less than MAX_PACKET_LENGTH.
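For example, a minimal sketch of that idea: copy only getLength() bytes out of the backing buffer (using getOffset() as well, in case the packet was received at a non-zero offset). The class and method names here are just illustrative:

```java
import java.net.DatagramPacket;
import java.util.Arrays;

public class PacketTrim {

    // Copy only the bytes the datagram actually carried; getLength()
    // reports the received length, not the backing buffer's capacity.
    public static byte[] usefulData(DatagramPacket packet) {
        int from = packet.getOffset();
        return Arrays.copyOfRange(packet.getData(), from, from + packet.getLength());
    }

    public static void main(String[] args) {
        byte[] buf = new byte[1024];                      // MAX_PACKET_LENGTH-sized buffer
        DatagramPacket packet = new DatagramPacket(buf, buf.length);
        packet.setData(buf, 0, 10);                       // simulate a 10-byte datagram
        System.out.println(usefulData(packet).length);    // only the useful bytes remain
    }
}
```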

This is too late for the question, but it could be useful for someone like me.
private int bufferSize = 1024;

/**
 * Sends the reply, trimmed to the actual length of the request.
 */
public void sendReply(DatagramPacket qPacket) {
    int port = 8002;
    try {
        byte[] tmpBuffer = new byte[bufferSize];
        // bufferSize was set to qPacket.getLength() in receivePacket(),
        // so only the useful bytes are copied.
        System.arraycopy(qPacket.getData(), 0, tmpBuffer, 0, bufferSize);
        DatagramPacket reply = new DatagramPacket(tmpBuffer, tmpBuffer.length,
                qPacket.getAddress(), port);
        socket.send(reply);
    } catch (Exception e) {
        logger.error("Could not send packet.", e);
    }
}

/**
 * Receives the UDP ping request.
 */
public void receivePacket() {
    byte[] buf = new byte[bufferSize];
    try {
        while (true) {
            DatagramPacket dPacket = new DatagramPacket(buf, buf.length);
            socket.receive(dPacket);
            // getLength() is the number of bytes actually received.
            bufferSize = dPacket.getLength();
            sendReply(dPacket);
        }
    } catch (Exception e) {
        logger.error("Could not receive packet.", e);
    } finally {
        if (socket != null) {
            socket.close();
        }
    }
}

Related

Serial Connection (Arduino --> Java)

This will be my first post, and I will do my best to be clear and concise. I've checked some of the other posts on this forum but was unable to find a satisfactory answer.
My question pertains to the use of JavaFX and the jSSC (Java Simple Serial Connector) library. I've designed a very simple GUI application that will host four different charts. Two of the charts will display readings from temperature and solar sensors for the past hour, while the other two display that data over an extended period -- a 14-hour period. Eventually I would like to make that more flexible and set the application to "sleep" when the readings become roughly zero (night).
How can I stream data to display this data in real time?
After referencing several sources online and "JavaFX 8: Introduction by Example", I've been able to construct most of the serial connection class. I'm having trouble processing the data readings so that they can be displayed on the chart.
public class SerialComm implements SerialPortEventListener {

    Date time = new Date();
    SimpleDateFormat sdf = new SimpleDateFormat("mm");
    boolean connected;
    StringBuilder sb;
    private SerialPort serialPort;
    final StringProperty line = new SimpleStringProperty("");

    // Not sure this is necessary
    private static final String[] PORT_NAMES = {
        "/dev/tty.usbmodem1411", // Mac OS X
        "COM11",                 // Windows
    };

    // Baud rate of communication transfer with serial device
    public static final int DATA_RATE = 9600;

    // Create a connection with the serial device
    public boolean connect() {
        String[] ports = SerialPortList.getPortNames();
        // First, find an instance of a serial port as set in PORT_NAMES.
        for (String port : ports) {
            System.out.print("Ports: " + port);
            serialPort = new SerialPort(port);
        }
        if (serialPort == null) {
            System.out.println("Could not find device.");
            return false;
        }
        // Operations to perform if a port is found
        try {
            // open serial port
            if (serialPort.openPort()) {
                System.out.println("Connected");
                // set port parameters
                serialPort.setParams(DATA_RATE,
                        SerialPort.DATABITS_8,
                        SerialPort.STOPBITS_1,
                        SerialPort.PARITY_NONE);
                serialPort.setEventsMask(SerialPort.MASK_RXCHAR);
                serialPort.addEventListener(event -> {
                    if (event.isRXCHAR()) {
                        try {
                            sb.append(serialPort.readString(event.getEventValue()));
                            String str = sb.toString();
                            if (str.endsWith("\r\n")) {
                                line.set(Long.toString(time.getTime()).concat(":").concat(
                                        str.substring(0, str.indexOf("\r\n"))));
                                System.out.println("line" + line);
                                sb = new StringBuilder();
                            }
                        } catch (SerialPortException ex) {
                            Logger.getLogger(SerialComm.class.getName()).log(Level.SEVERE, null, ex);
                        }
                    }
                });
            }
        } catch (Exception e) {
            System.out.println("ErrOr");
            e.printStackTrace();
            System.err.println(e.toString());
        }
        return serialPort != null;
    }

    @Override
    public void serialEvent(SerialPortEvent spe) {
        throw new UnsupportedOperationException("Not supported yet.");
    }

    public StringProperty getLine() {
        return line;
    }
}
Within the try block I understand the port parameters, but the event listener is where I am having difficulty. The purpose of the StringBuilder is to append the new data as it is read from the device.
How will I account for the two sensor readings? Would I do that by creating separate data rates to differentiate between the incoming data from each sensor?
I hope that this is clear and that I've provided enough information, but not too much. Thank you for any assistance.
-------------------------------UPDATE--------------------------
Since your reply Jose, I've started to make the additions to my code. Adding the listener within the JavaFX class, I'm running into some issues. I keep getting a NullPointerException, which I believe means the String[] data array is not being initialized with any data from the SerialComm class.
        serialPort.addEventListener(event -> {
            if (event.isRXCHAR()) {
                try {
                    sb.append(serialPort.readString(event.getEventValue()));
                    String str = sb.toString();
                    if (str.endsWith("\r\n")) {
                        line.set(Long.toString(time.getTime()).concat(":").concat(
                                str.substring(0, str.indexOf("\r\n"))));
                        System.out.println("line" + line);
                        sb = new StringBuilder();
                    }
                } catch (SerialPortException ex) {
                    Logger.getLogger(SerialComm.class.getName()).log(Level.SEVERE, null, ex);
                }
            }
        });
    }
} catch (Exception e) {
    System.err.println(e.toString());
}
I'm adding the time to the data being read. As Jose mentioned below, I've added tags to the data variables within the Arduino code; I'm using: Serial.print("Solar:"); Serial.println(solarData);
Rough code of the JavaFX listener:
serialPort.getLine().addListener((ov, t, t1) -> {
    Platform.runLater(() -> {
        String[] data = t1.split(":");
        try {
            // data[0] is the timestamp
            // data[1] will contain the label printed by the Arduino, "Solar: data"
            switch (data[1]) {
                case "Solar":
                    data[0].replace("Solar:", "");
                    solarSeries.getData().add(new XYChart.Data(data[0], data[1]));
                    break;
                case "Temperature":
                    temperatureSeries.getData().add(new XYChart.Data(data[0], data[1]));
                    break;
            }
Is the reason this code has NullPointerException a result of the String [] data array being uninitialized?
Exception Error
Ports: /dev/tty.usbmodem1411Connected
Exception in thread "EventThread /dev/tty.usbmodem1411" java.lang.NullPointerException
at SerialComm.lambda$connect$0(SerialComm.java:61)
at SerialComm$$Lambda$1/1661773475.serialEvent(Unknown Source)
at jssc.SerialPort$LinuxEventThread.run(SerialPort.java:1299)
The SerialPortEventListener defined in the jssc library allows listening for serial port events. One of those events is the RXCHAR event, that occurs when the Arduino board is sending some data and some bytes are on the input buffer.
event.getEventValue() returns an int with the byte count, and serialPort.readString(event.getEventValue()) get the String format from those bytes.
Note that this method does not return a full line, so you need to listen to carriage return and line feed characters. Once you find "\r\n", you can get the line, and reset the StringBuilder for the next one:
sb.append(serialPort.readString(event.getEventValue()));
String str = sb.toString();
if (str.endsWith("\r\n")) {
    line.set(str.substring(0, str.indexOf("\r\n")));
    sb = new StringBuilder();
}
where line is an observable String:
final StringProperty line = new SimpleStringProperty("");
On the Arduino side, if you want to send values from different sensors at different rates, I suggest you define on the Arduino sketch some identification string for each sensor, and you print for each value the id of its sensor.
For instance, these will be the readings you will get with the serial event listener:
ID1,val1
ID1,val2
ID2,val3
ID1,val4
ID3,val5
...
Finally, on the JavaFX thread, define a listener to changes in line and process the String to get the sensor and the value. Something like this:
serial.getLine().addListener(
    (ObservableValue<? extends String> observable, String oldValue, String newValue) -> {
        Platform.runLater(() -> {
            String[] data = newValue.split("\\,");
            if (data[0].equals("ID1")) {
                // add to chart from sensor 1, value data[1]
            } else if (data[0].equals("ID2")) {
                // add to chart from sensor 2, value data[1]
            } else if (data[0].equals("ID3")) {
                // add to chart from sensor 3, value data[1]
            }
        });
    });
Note you need to add Platform.runLater(), since the thread that gets the data from serial port and updates line is not on the JavaFX thread.
From my experience, on the Arduino side, add a comma or something to separate the different values when you print and when you receive that string in Java simply split that string by commas.
String[] stringSeparate = str.split(",");
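Putting the update's two conventions together (the jSSC listener prepending a timestamp, the Arduino printing a label before each value), a reading looks like "1498560000000:Solar:21.5" and splits into three parts. This is only a sketch under the assumption of that exact format; the class name is illustrative:

```java
public class ReadingParser {

    // Splits "timestamp:label:value" into its three parts.
    // The three-part format is an assumption based on the question's
    // update, where the listener prepends the time and the Arduino
    // prints "Solar:" or "Temperature:" before each value.
    public static String[] parse(String reading) {
        String[] data = reading.split(":", 3);
        if (data.length != 3)
            throw new IllegalArgumentException("expected timestamp:label:value");
        return data; // [timestamp, label, value]
    }
}
```

With this split, data[1] is the sensor label to switch on and data[2] is the numeric value, which avoids the index confusion in the rough listener code above.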

localhost request packet not recognized by sharppcap?

I wrote an ASP.NET program using MVC 4 and deployed it on an IIS server as localhost. I want to track the HTTP packets, so I used SharpPcap.
Here is the full code:
namespace CaseStudy
{
    class Program
    {
        static void Main(string[] args)
        {
            var parmenter = SharpPcap.CaptureDeviceList.Instance;

            /* If no device exists, print error */
            if (parmenter.Count < 1)
            {
                Console.WriteLine("No device found on this machine");
                return;
            }

            int i = 0;
            Console.WriteLine("Choose Your Devices :");
            Console.WriteLine("----------------------");
            foreach (PcapDevice dev in parmenter)
            {
                /* Device Description */
                Console.WriteLine("{0}] {1} [MAC:{2}]", i, dev.Interface.FriendlyName, dev.Interface.MacAddress);
                i++;
            }
            Console.WriteLine("----------------------");

            //Extract a device from the list
            int deviceIndex = -1;
            do
            {
                Console.WriteLine("Enter Your Choice :");
                deviceIndex = int.Parse(Console.ReadLine());
            } while (!(deviceIndex < parmenter.Count && deviceIndex >= -1));

            ICaptureDevice device = parmenter[deviceIndex];

            //Register our handler function to the 'packet arrival' event
            //device.PcapOnPacketArrival += new SharpPcap.PacketArrivalEventHandler();
            device.OnPacketArrival += new SharpPcap.PacketArrivalEventHandler(device_OnPacketArrival);

            //Open the device for capturing
            //true -- means promiscuous mode
            //1000 -- means a read wait of 1000ms
            device.Open(DeviceMode.Promiscuous, 1000);
            device.Filter = "ip and tcp";

            Console.WriteLine("-- Listening on {0}, hit 'Enter' to stop...", device.MacAddress);

            //Start the capturing process
            device.StartCapture();

            //Wait for 'Enter' from the user.
            Console.ReadLine();

            //Stop the capturing process
            device.StopCapture();

            //Close the capturing device
            device.Close();
        }

        private static void device_OnPacketArrival(object sender, CaptureEventArgs e)
        {
            DateTime time = e.Packet.Timeval.Date;
            int len = e.Packet.Data.Length;
            byte[] data = e.Packet.Data;

            //var packet = TcpPacket.ParsePacket(e.Packet.LinkLayerType, e.Packet.Data);
            //Console.WriteLine(e.Packet.LinkLayerType.ToString());
            Packet pack = Packet.ParsePacket(e.Packet.LinkLayerType, e.Packet.Data);

            if (pack is PacketDotNet.EthernetPacket)
            {
                var eth = pack.Extract(typeof(EthernetPacket)) as EthernetPacket;
                if (len > 100)
                {
                    Console.WriteLine("ETHERNET/INTERNET/HTTP PACKET");
                    //Console.WriteLine(HttpServerUtility.UrlTokenEncode(eth.Bytes));
                    Console.WriteLine("{0}-{1}", eth.DestinationHwAddress, eth.SourceHwAddress);
                    //Console.WriteLine(eth.PayloadPacket.PayloadPacket.PrintHex());
                    Console.WriteLine(System.Text.Encoding.UTF8.GetString(eth.Bytes));
                }
            }

            if (pack is PacketDotNet.TcpPacket)
            {
                var tcp = pack.Extract(typeof(TcpPacket)) as TcpPacket;
                if (len > 100)
                {
                    //Console.WriteLine("[{0}:{1}:{2}:{3}][{4}][{5}]",
                    //time.Hour, time.Minute, time.Second, time.Millisecond,
                    //len, Stringfy.RawPacketToHex(data));
                    Console.WriteLine("TCP PACKET");
                    Console.WriteLine(tcp.PrintHex());
                    //Console.WriteLine(arp.SenderHardwareAddress);
                }
            }

            if (pack is PacketDotNet.InternetPacket)
            {
                var inet = pack.Extract(typeof(InternetPacket)) as InternetPacket;
                if (len > 100)
                {
                    //Console.WriteLine("[{0}:{1}:{2}:{3}][{4}][{5}]",
                    //time.Hour, time.Minute, time.Second, time.Millisecond,
                    //len, Stringfy.RawPacketToHex(data));
                    Console.WriteLine("INTERNET PACKET");
                    Console.WriteLine(inet.PrintHex());
                    //Console.WriteLine(arp.SenderHardwareAddress);
                }
            }

            if (pack is PacketDotNet.IpPacket)
            {
                var ip = pack.Extract(typeof(IpPacket)) as IpPacket;
                if (len > 100)
                {
                    //Console.WriteLine("[{0}:{1}:{2}:{3}][{4}][{5}]",
                    //time.Hour, time.Minute, time.Second, time.Millisecond,
                    //len, Stringfy.RawPacketToHex(data));
                    Console.WriteLine("IP PACKET");
                    Console.WriteLine(ip.PrintHex());
                    //Console.WriteLine(arp.SenderHardwareAddress);
                }
            }
        }
    }
}
This code captures HTTP packets exchanged between my system and remote servers like Google, Stack Overflow, and Facebook.
However, I want to track only the packets my system exchanges with itself, as localhost.
Can anyone help? Please...
It's impossible.
Why?
SharpPcap uses WinPcap, and WinPcap extends the system driver to capture packets. According to WinPcap FAQ question 13, it's not possible to capture on the loopback device, aka localhost. It's a limitation of Windows, not of WinPcap.

Android BLE: writing >20 bytes characteristics missing the last byte array

I have been implementing a module to send bytes in chunks of 20 bytes each to an MCU device over BLE. When writing more than 60 bytes or so, the last chunk (usually less than 20 bytes) is often missed. As a result, the MCU device cannot verify the checksum and write the value. I have modified the callback with Thread.sleep(200), but it only sometimes works when writing 61 bytes. Could you please tell me whether there is any synchronous method to write the bytes in chunks? Below is my code:
@Override
public void onCharacteristicWrite(BluetoothGatt gatt,
        BluetoothGattCharacteristic characteristic, int status) {
    try {
        Thread.sleep(300);
        if (status != BluetoothGatt.GATT_SUCCESS) {
            disconnect();
            return;
        }
        if (status == BluetoothGatt.GATT_SUCCESS) {
            System.out.println("ok");
            broadcastUpdate(ACTION_DATA_READ, mReadCharacteristic, status);
        } else {
            System.out.println("fail");
            broadcastUpdate(ACTION_DATA_WRITE, characteristic, status);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}

public synchronized boolean writeCharacteristicData(BluetoothGattCharacteristic characteristic,
        byte[] byteResult) {
    if (mBluetoothAdapter == null || mBluetoothGatt == null) {
        return false;
    }
    boolean status = false;
    characteristic.setValue(byteResult);
    characteristic.setWriteType(BluetoothGattCharacteristic.WRITE_TYPE_NO_RESPONSE);
    status = mBluetoothGatt.writeCharacteristic(characteristic);
    return status;
}

private void sendCommandData(final byte[] commandByte) {
    if (commandByte.length > 20) {
        final List<byte[]> bytestobeSent = splitInChunks(commandByte);
        for (int i = 0; i < bytestobeSent.size(); i++) {
            for (int k = 0; k < bytestobeSent.get(i).length; k++) {
                System.out.println("LumChar bytes : " + bytestobeSent.get(i)[k]);
            }
            BluetoothGattService LumService = mBluetoothGatt.getService(A_SERVICE);
            if (LumService == null) { return; }
            BluetoothGattCharacteristic LumChar = LumService.getCharacteristic(AW_CHARACTERISTIC);
            if (LumChar == null) { System.out.println("LumChar"); return; }
            //Thread.sleep(500);
            writeCharacteristicData(LumChar, bytestobeSent.get(i));
        }
    } else {
        ....
You need to wait for the onCharacteristicWrite() callback to be invoked before sending the next write. The typical solution is to make a job queue and pop a job off the queue for each callback you get to onCharacteristicWrite(), onCharacteristicRead(), etc.
In other words, you can't do it in a for loop unfortunately, unless you want to set up some kind of lock that waits for the callback before going on to the next iteration. In my experience a job queue is a cleaner general-purpose solution though.
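A minimal, Android-free sketch of that job-queue idea: the Consumer stands in for writeCharacteristicData(), and onWriteComplete() is what you would call from onCharacteristicWrite() on GATT_SUCCESS. Class and method names are illustrative, not part of the Android API:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.function.Consumer;

public class ChunkedWriter {

    private final Deque<byte[]> queue = new ArrayDeque<>();
    private final Consumer<byte[]> transport; // stands in for writeCharacteristicData()

    public ChunkedWriter(Consumer<byte[]> transport) {
        this.transport = transport;
    }

    // Split the payload into <= mtu-byte chunks and start the first write;
    // the remaining chunks are sent one at a time from onWriteComplete().
    public void send(byte[] data, int mtu) {
        for (int off = 0; off < data.length; off += mtu)
            queue.add(Arrays.copyOfRange(data, off, Math.min(off + mtu, data.length)));
        writeNext();
    }

    // Call this from onCharacteristicWrite() when status == GATT_SUCCESS.
    public void onWriteComplete() {
        writeNext();
    }

    private void writeNext() {
        byte[] chunk = queue.poll();
        if (chunk != null) transport.accept(chunk); // no-op once the queue is drained
    }
}
```

This way no chunk is submitted until the previous write has been acknowledged, which is what the stack requires; the Thread.sleep() calls become unnecessary.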

LwIP Netconn API + FreeRTOS TCP Client Buffer Issue

I've been trying to modify the TCP server example for LwIP on an STM32F4DISCOVERY board. I have to write a sender that does not necessarily have to reply to server responses. It can send data at a 100 ms interval, for example.
Firstly, the example of TCP server is like this:
static void tcpecho_thread(void *arg)
{
    struct netconn *conn, *newconn;
    err_t err;
    LWIP_UNUSED_ARG(arg);

    /* Create a new connection identifier. */
    conn = netconn_new(NETCONN_TCP);
    if (conn != NULL) {
        /* Bind connection to well known port number 7. */
        err = netconn_bind(conn, NULL, DEST_PORT);
        if (err == ERR_OK) {
            /* Tell connection to go into listening mode. */
            netconn_listen(conn);
            while (1) {
                /* Grab new connection. */
                newconn = netconn_accept(conn);
                /* Process the new connection. */
                if (newconn) {
                    struct netbuf *buf;
                    void *data;
                    u16_t len;

                    while ((buf = netconn_recv(newconn)) != NULL) {
                        do {
                            netbuf_data(buf, &data, &len);
                            /* Incoming package */
                            .....
                            /* Check for data */
                            if (DATA IS CORRECT)
                            {
                                /* Reply */
                                data = "OK";
                                len = 2;
                                netconn_write(newconn, data, len, NETCONN_COPY);
                            }
                        } while (netbuf_next(buf) >= 0);
                        netbuf_delete(buf);
                    }
                    /* Close connection and discard connection identifier. */
                    netconn_close(newconn);
                    netconn_delete(newconn);
                }
            }
        } else {
            printf(" can not bind TCP netconn");
        }
    } else {
        printf("can not create TCP netconn");
    }
}
I modified this code to obtain a client version, this is what I've got so far:
static void tcpecho_thread(void *arg)
{
    struct netconn *xNetConn = NULL;
    struct ip_addr local_ip;
    struct ip_addr remote_ip;
    int rc1, rc2;
    struct netbuf *Gonderilen_Buf = NULL;
    struct netbuf *gonderilen_buf = NULL;
    void *b_data;
    u16_t b_len;

    IP4_ADDR(&local_ip, IP_ADDR0, IP_ADDR1, IP_ADDR2, IP_ADDR3);
    IP4_ADDR(&remote_ip, DEST_IP_ADDR0, DEST_IP_ADDR1, DEST_IP_ADDR2, DEST_IP_ADDR3);

    xNetConn = netconn_new(NETCONN_TCP);
    rc1 = netconn_bind(xNetConn, &local_ip, DEST_PORT);
    rc2 = netconn_connect(xNetConn, &remote_ip, DEST_PORT);

    b_data = "+24C"; // Data to be sent
    b_len = sizeof(b_data);

    while (1)
    {
        if (rc1 == ERR_OK)
        {
            // If button pressed, send data "+24C" to server
            if (GPIO_ReadInputDataBit(GPIOA, GPIO_Pin_0) == Bit_SET)
            {
                Buf = netbuf_new();
                netbuf_alloc(Buf, 4); // 4 bytes of buffer
                Buf->p->payload = "+24C";
                Buf->p->len = 4;
                netconn_write(xNetConn, Buf->p->payload, b_len, NETCONN_COPY);
                vTaskDelay(100); // To see the result easily in Comm Operator
                netbuf_delete(Buf);
            }
        }
        if (rc1 != ERR_OK || rc2 != ERR_OK)
        {
            netconn_delete(xNetConn);
        }
    }
}
While the write operation works, netconn_write sends whatever is in its buffer; it doesn't care whether b_data is NULL or not. I've tested this by adding the line b_data = NULL;
So the resulting output in Comm Operator is like this:
Rec:(02:47:27)+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C+24C
However, I want it to work like this:
Rec:(02:47:22)+24C
Rec:(02:47:27)+24C
Rec:(02:57:12)+24C
Rec:(02:58:41)+24C
The desired write behaviour only happens when I wait around 8 seconds before pushing the button again.
Since the netconn_write function does not let me write into the buffer directly, I'm not able to clear it. And netconn_send is only allowed for UDP connections.
I need some guidance to understand the problem and come up with a solution for it.
Any help will be greatly appreciated.
It's just a matter of printing the result in the correct way.
You can try to add this part of code before writing in the netbuf data structure:
char buffer[20];
sprintf(buffer,"24+ \n");
Buf->p->payload = "+24C";
I see one or two problems in your code, depending on what you want it exactly to do. First of all, you're not sending b_data at all, but a constant string:
b_data = "+24C"; // Data to be send
and then
Buf->p->payload = "+24C";
Buf->p->len = 4;
netconn_write(xNetConn, Buf->p->payload, b_len, NETCONN_COPY);
b_data is not mentioned anywhere there. What is sent is the payload. Try Buf->p->payload = b_data; if that's what you want to achieve.
Second, if you want the +24C text to be sent only once per button push, you'll need a loop that waits for the button to be released before continuing, or it will send +24C continuously for as long as the button is held down. Something in this direction:
while (GPIO_ReadInputDataBit(GPIOA, GPIO_Pin_0) == Bit_SET) {
    vTaskDelay(1);
}

How to process raw UDP packets so that they can be decoded by a decoder filter in a directshow source filter

Long Story:
There is an H264/MPEG-4 source.
I am able to connect to this source with the RTSP protocol.
I am able to get raw UDP packets with the RTP protocol.
Then I send those raw UDP packets to a decoder [h264/mpeg-4] [DS source filter].
But those "raw" UDP packets cannot be decoded by the decoder [h264/mpeg-4] filter.
Shortly:
How do I process those raw UDP packets so that they can be decoded by the H264/MPEG-4 decoder filter? Can anyone clearly identify the steps I have to take with the H264/MPEG stream?
Extra info:
I am able to do this with FFmpeg, but I cannot really figure out how FFmpeg processes the raw data so that it becomes decodable by a decoder.
Piece of cake!
1. Get the data
As I can see, you already know how to do that (start an RTSP session, SETUP an RTP/AVP/UDP;unicast; transport, and receive the user datagrams)... but if you are in doubt, ask.
No matter the transport (UDP or TCP), the data format is mainly the same:
RTP data: [RTP Header - 12bytes][Video data]
UDP: [RTP Data]
TCP: [$ - 1byte][Transport Channel - 1byte][RTP data length - 2bytes][RTP
data]
So to get the data from UDP, you only have to strip off the first 12 bytes, which are the RTP header. But beware: you still need that header to get the video timing information, and for MPEG4 the packetization information!
For TCP, you read bytes until you get the byte $. Then read the next byte; that is the transport channel the following data belongs to (when the server responds to the SETUP request it says: Transport: RTP/AVP/TCP;unicast;interleaved=0-1, which means that VIDEO DATA will have TRANSPORT_CHANNEL=0 and VIDEO RTCP DATA will have TRANSPORT_CHANNEL=1). You want the VIDEO DATA, so we expect 0... then read one short (2 bytes) that gives the length of the RTP data that follows, read that many bytes, and from there do the same as for UDP.
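As a sketch, stripping the RTP header from a UDP datagram can look like this. It assumes the 12-byte fixed header only, i.e. no CSRC entries and no header extension; a real depacketizer should check the CC and X bits of the first header byte first. Names are illustrative:

```java
import java.util.Arrays;

public class RtpPayload {

    static final int RTP_HEADER_LENGTH = 12; // fixed header, no CSRC entries

    // Strip the 12-byte RTP header from a received datagram, leaving
    // the video payload. Assumes CC == 0 and no extension header.
    public static byte[] stripRtpHeader(byte[] datagram) {
        if (datagram.length < RTP_HEADER_LENGTH)
            throw new IllegalArgumentException("too short for an RTP packet");
        return Arrays.copyOfRange(datagram, RTP_HEADER_LENGTH, datagram.length);
    }
}
```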
2. Depacketize data
H264 and MPEG4 data are usually packetized (in SDP there is a packetization-mode parameter that can have the values 0, 1 and 2; what each of them means, and how to depacketize it, you can see HERE) because there is a limit, called the MTU, on how much data one endpoint can send in a single packet over TCP or UDP. It is usually 1500 bytes or less. So if a video frame is larger than that (and it usually is), it needs to be fragmented (packetized) into MTU-sized fragments. This can be done by the encoder/streamer for both TCP and UDP transports, or you can rely on IP to fragment and reassemble the video frame on the other side... the former is much better if you want smooth, error-resistant video over UDP and TCP.
H264: To check whether the RTP data (which arrived over UDP, or interleaved over TCP) holds a fragment of one larger H264 video frame, you must know what a fragment looks like when it is packetized:
H264 FRAGMENT
First byte: [ 3 NAL UNIT BITS | 5 FRAGMENT TYPE BITS]
Second byte: [ START BIT | END BIT | RESERVED BIT | 5 NAL UNIT BITS]
Other bytes: [... VIDEO FRAGMENT DATA...]
Now, get the first VIDEO DATA in byte array called Data and get the following info:
int fragment_type = Data[0] & 0x1F;
int nal_type = Data[1] & 0x1F;
int start_bit = Data[1] & 0x80;
int end_bit = Data[1] & 0x40;
If fragment_type == 28, the video data following it is a video frame fragment. Next, check whether start_bit is set; if it is, that fragment is the first one in the sequence. Use it to reconstruct the IDR's NAL byte: take the first 3 bits of the first payload byte (3 NAL UNIT BITS) and combine them with the last 5 bits of the second payload byte (5 NAL UNIT BITS), giving a byte of the form [3 NAL UNIT BITS | 5 NAL UNIT BITS]. Write that NAL byte into a clear buffer first, followed by the VIDEO FRAGMENT DATA of that fragment.
If start_bit and end_bit are both 0, just append the VIDEO FRAGMENT DATA (skipping the first two payload bytes that identify the fragment) to the buffer.
If start_bit is 0 and end_bit is 1, this is the last fragment; append its VIDEO FRAGMENT DATA (again skipping the first two bytes) to the buffer, and you have your video frame reconstructed!
Bear in mind that the RTP data holds the RTP header in its first 12 bytes, that if the frame is fragmented you never write the first two payload bytes into the defragmentation buffer, and that you need to reconstruct the NAL byte and write it first. If you mess something up here, the picture will be partial (half of it will be gray or black, or you will see artifacts).
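The bit twiddling above can be collected into a few small helpers. Here the FU indicator and FU header are the first two payload bytes after the RTP header; class and method names are illustrative:

```java
public class FuaDepacketizer {

    // FU-A fragments have fragment type 28 in the low 5 bits of the FU indicator.
    public static boolean isFuA(byte fuIndicator) {
        return (fuIndicator & 0x1F) == 28;
    }

    // Start and end bits live in the top two bits of the FU header.
    public static boolean isStart(byte fuHeader) {
        return (fuHeader & 0x80) != 0;
    }

    public static boolean isEnd(byte fuHeader) {
        return (fuHeader & 0x40) != 0;
    }

    // Reconstruct the original NAL header byte for the start fragment:
    // high 3 bits (F + NRI) from the FU indicator, low 5 bits (the NAL
    // unit type) from the FU header.
    public static byte nalHeader(byte fuIndicator, byte fuHeader) {
        return (byte) ((fuIndicator & 0xE0) | (fuHeader & 0x1F));
    }
}
```

For example, an FU indicator of 0x7C (NRI = 3, type 28) with an FU header of 0x85 (start bit set, type 5) reconstructs to 0x65, the NAL header of an IDR slice.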
MPEG4:
This is an easy one. You need to check the MARKER_BIT in the RTP header. That bit is set (1) if the video data represents a whole video frame, and it is 0 if the video data is one fragment of a frame. So to depacketize, you look at the MARKER_BIT: if it is 1, that's it, just read the video data bytes.
WHOLE FRAME:
[MARKER = 1]
PACKETIZED FRAME:
[MARKER = 0], [MARKER = 0], [MARKER = 0], [MARKER = 1]
The first packet with MARKER_BIT=0 is the first fragment of the video frame; every packet that follows, up to and including the first one with MARKER_BIT=1, is a fragment of the same video frame. So what you need to do is:
While MARKER_BIT=0, place the VIDEO DATA in the depacketization buffer
Place the next VIDEO DATA, where MARKER_BIT=1, into the same buffer
The depacketization buffer now holds one whole MPEG4 frame
3. Process data for decoder (NAL byte stream)
When you have depacketized video frames, you need to make NAL byte stream. It has the following format:
H264: 0x000001[SPS], 0x000001[PPS], 0x000001[VIDEO FRAME], 0x000001...
MPEG4: 0x000001[Visual Object Sequence Start], 0x000001[VIDEO FRAME]
RULES:
Every frame MUST be prepended with the 3-byte start code 0x000001, no matter the codec
Every stream MUST start with CONFIGURATION INFO; for H264 that is the SPS and PPS frames in that order (sprop-parameter-sets in SDP), and for MPEG4 the VOS frame (the config parameter in SDP)
So you need to build a config buffer for H264 or MPEG4, prepend it with the 3 bytes 0x000001, send it first, and then prepend each depacketized video frame with the same 3 bytes and feed that to the decoder.
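A sketch of building that byte stream, assuming the units (SPS, PPS or VOS, then the depacketized frames) are already in hand; the class name is illustrative:

```java
import java.io.ByteArrayOutputStream;

public class NalByteStream {

    private static final byte[] START_CODE = {0x00, 0x00, 0x01};
    private final ByteArrayOutputStream out = new ByteArrayOutputStream();

    // Prepend every unit (SPS, PPS/VOS, or a depacketized frame) with
    // the 3-byte start code, as the rules above require.
    public void append(byte[] nalUnit) {
        out.write(START_CODE, 0, START_CODE.length);
        out.write(nalUnit, 0, nalUnit.length);
    }

    public byte[] toByteArray() {
        return out.toByteArray();
    }
}
```

Appending the configuration units first and then each frame yields the 0x000001-delimited stream the decoder filter expects.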
If you need any clarifying just comment... :)
I have an implementation of this at https://net7mma.codeplex.com/
Here is the relevant code
/// <summary>
/// Implements Packetization and Depacketization of packets defined in <see href="https://tools.ietf.org/html/rfc6184">RFC6184</see>.
/// </summary>
public class RFC6184Frame : Rtp.RtpFrame
{
/// <summary>
/// Emulation Prevention
/// </summary>
static byte[] NalStart = { 0x00, 0x00, 0x01 };
public RFC6184Frame(byte payloadType) : base(payloadType) { }
public RFC6184Frame(Rtp.RtpFrame existing) : base(existing) { }
public RFC6184Frame(RFC6184Frame f) : this((Rtp.RtpFrame)f) { Buffer = f.Buffer; }
public System.IO.MemoryStream Buffer { get; set; }
/// <summary>
/// Creates any <see cref="Rtp.RtpPacket"/>'s required for the given nal
/// </summary>
/// <param name="nal">The nal</param>
/// <param name="mtu">The mtu</param>
public virtual void Packetize(byte[] nal, int mtu = 1500)
{
if (nal == null) return;
int nalLength = nal.Length;
int offset = 0;
if (nalLength >= mtu)
{
//Make a Fragment Indicator with start bit
byte[] FUI = new byte[] { (byte)(1 << 7), 0x00 };
bool marker = false;
while (offset < nalLength)
{
//Set the end bit if no more data remains
if (offset + mtu > nalLength)
{
FUI[0] |= (byte)(1 << 6);
marker = true;
}
else if (offset > 0) //For packets other than the start
{
//No Start, No End
FUI[0] = 0;
}
//Add the packet
Add(new Rtp.RtpPacket(2, false, false, marker, PayloadTypeByte, 0, SynchronizationSourceIdentifier, HighestSequenceNumber + 1, 0, FUI.Concat(nal.Skip(offset).Take(mtu)).ToArray()));
//Move the offset
offset += mtu;
}
} //Should check for first byte to be 1 - 23?
else Add(new Rtp.RtpPacket(2, false, false, true, PayloadTypeByte, 0, SynchronizationSourceIdentifier, HighestSequenceNumber + 1, 0, nal));
}
/// <summary>
/// Creates <see cref="Buffer"/> with a H.264 RBSP from the contained packets
/// </summary>
public virtual void Depacketize() { bool sps, pps, sei, slice, idr; Depacketize(out sps, out pps, out sei, out slice, out idr); }
/// <summary>
/// Parses all contained packets and writes any contained Nal Units in the RBSP to <see cref="Buffer"/>.
/// </summary>
/// <param name="containsSps">Indicates if a Sequence Parameter Set was found</param>
/// <param name="containsPps">Indicates if a Picture Parameter Set was found</param>
/// <param name="containsSei">Indicates if Supplementatal Encoder Information was found</param>
/// <param name="containsSlice">Indicates if a Slice was found</param>
/// <param name="isIdr">Indicates if a IDR Slice was found</param>
public virtual void Depacketize(out bool containsSps, out bool containsPps, out bool containsSei, out bool containsSlice, out bool isIdr)
{
containsSps = containsPps = containsSei = containsSlice = isIdr = false;
DisposeBuffer();
this.Buffer = new MemoryStream();
//Get all packets in the frame
foreach (Rtp.RtpPacket packet in m_Packets.Values.Distinct())
ProcessPacket(packet, out containsSps, out containsPps, out containsSei, out containsSlice, out isIdr);
//Order by DON?
this.Buffer.Position = 0;
}
/// <summary>
/// Depacketizes a single packet.
/// </summary>
/// <param name="packet"></param>
/// <param name="containsSps"></param>
/// <param name="containsPps"></param>
/// <param name="containsSei"></param>
/// <param name="containsSlice"></param>
/// <param name="isIdr"></param>
internal protected virtual void ProcessPacket(Rtp.RtpPacket packet, out bool containsSps, out bool containsPps, out bool containsSei, out bool containsSlice, out bool isIdr)
{
containsSps = containsPps = containsSei = containsSlice = isIdr = false;
//Starting at offset 0
int offset = 0;
//Obtain the data of the packet (without source list or padding)
byte[] packetData = packet.Coefficients.ToArray();
//Cache the length
int count = packetData.Length;
//Must have at least 2 bytes
if (count <= 2) return;
//Determine if the forbidden bit is set and the type of nal from the first byte
byte firstByte = packetData[offset];
//bool forbiddenZeroBit = ((firstByte & 0x80) >> 7) != 0;
byte nalUnitType = (byte)(firstByte & Common.Binary.FiveBitMaxValue);
//o The F bit MUST be cleared if all F bits of the aggregated NAL units are zero; otherwise, it MUST be set.
//if (forbiddenZeroBit && nalUnitType <= 23 && nalUnitType > 29) throw new InvalidOperationException("Forbidden Zero Bit is Set.");
//Determine what to do
switch (nalUnitType)
{
//Reserved - Ignore
case 0:
case 30:
case 31:
{
return;
}
case 24: //STAP - A
case 25: //STAP - B
case 26: //MTAP - 16
case 27: //MTAP - 24
{
//Move to Nal Data
++offset;
//Todo Determine if need to Order by DON first.
//EAT DON for ALL BUT STAP - A
if (nalUnitType != 24) offset += 2;
//Consume the rest of the data from the packet
while (offset < count)
{
//Determine the nal unit size which does not include the nal header
int tmp_nal_size = Common.Binary.Read16(packetData, offset, BitConverter.IsLittleEndian);
offset += 2;
//If the nal had data then write it
if (tmp_nal_size > 0)
{
//For DOND and TSOFFSET
switch (nalUnitType)
{
case 25:// MTAP - 16
{
//SKIP DOND and TSOFFSET
offset += 3;
goto default;
}
case 26:// MTAP - 24
{
//SKIP DOND and TSOFFSET
offset += 4;
goto default;
}
default:
{
//Read the nal unit type from the nal header without advancing the offset
byte nalHeader = (byte)(packetData[offset] & Common.Binary.FiveBitMaxValue);
if (nalHeader > 5)
{
if (nalHeader == 6)
{
Buffer.WriteByte(0);
containsSei = true;
}
else if (nalHeader == 7)
{
Buffer.WriteByte(0);
//NAL unit type 7 is the SPS
containsSps = true;
}
else if (nalHeader == 8)
{
Buffer.WriteByte(0);
//NAL unit type 8 is the PPS
containsPps = true;
}
}
if (nalHeader == 1) containsSlice = true;
if (nalHeader == 5) isIdr = true;
//Done reading
break;
}
}
//Write the start code
Buffer.Write(NalStart, 0, 3);
//Write the nal header and data
Buffer.Write(packetData, offset, tmp_nal_size);
//Move the offset past the nal
offset += tmp_nal_size;
}
}
return;
}
case 28: //FU - A
case 29: //FU - B
{
/*
Informative note: When an FU-A occurs in interleaved mode, it
always follows an FU-B, which sets its DON.
* Informative note: If a transmitter wants to encapsulate a single
NAL unit per packet and transmit packets out of their decoding
order, STAP-B packet type can be used.
*/
//Need more than 2 bytes
if (count > 2)
{
//Read the Header
byte FUHeader = packetData[++offset];
bool Start = ((FUHeader & 0x80) >> 7) > 0;
//bool End = ((FUHeader & 0x40) >> 6) > 0;
//bool Receiver = (FUHeader & 0x20) != 0;
//if (Receiver) throw new InvalidOperationException("Receiver Bit Set");
//Move to data
++offset;
//Todo Determine if need to Order by DON first.
//DON Present in FU - B
if (nalUnitType == 29) offset += 2;
//Determine the fragment size
int fragment_size = count - offset;
//If the size was valid
if (fragment_size > 0)
{
//If the start bit was set
if (Start)
{
//Reconstruct the nal header
//Use the first 3 bits of the first byte and the last 5 bits of the FU Header
byte nalHeader = (byte)((firstByte & 0xE0) | (FUHeader & Common.Binary.FiveBitMaxValue));
//Could have been SPS / PPS / SEI
if (nalHeader > 5)
{
if (nalHeader == 6)
{
Buffer.WriteByte(0);
containsSei = true;
}
else if (nalHeader == 7)
{
Buffer.WriteByte(0);
//NAL unit type 7 is the SPS
containsSps = true;
}
else if (nalHeader == 8)
{
Buffer.WriteByte(0);
//NAL unit type 8 is the PPS
containsPps = true;
}
}
if (nalHeader == 1) containsSlice = true;
if (nalHeader == 5) isIdr = true;
//Write the start code
Buffer.Write(NalStart, 0, 3);
//Write the reconstructed header
Buffer.WriteByte(nalHeader);
}
//Write the data of the fragment.
Buffer.Write(packetData, offset, fragment_size);
}
}
return;
}
default:
{
// 6 SEI, 7 and 8 are SPS and PPS
if (nalUnitType > 5)
{
if (nalUnitType == 6)
{
Buffer.WriteByte(0);
containsSei = true;
}
else if (nalUnitType == 7)
{
Buffer.WriteByte(0);
//NAL unit type 7 is the SPS
containsSps = true;
}
else if (nalUnitType == 8)
{
Buffer.WriteByte(0);
//NAL unit type 8 is the PPS
containsPps = true;
}
}
if (nalUnitType == 1) containsSlice = true;
if (nalUnitType == 5) isIdr = true;
//Write the start code
Buffer.Write(NalStart, 0, 3);
//Write the nal header and data
Buffer.Write(packetData, offset, count - offset);
return;
}
}
}
internal void DisposeBuffer()
{
if (Buffer != null)
{
Buffer.Dispose();
Buffer = null;
}
}
public override void Dispose()
{
if (Disposed) return;
base.Dispose();
DisposeBuffer();
}
//To go to an Image...
//Look for a SliceHeader in the Buffer
//Decode Macroblocks in Slice
//Convert Yuv to Rgb
}
There are also implementations of various other RFCs which help get the media playing in a MediaElement or other software, or just save it to disk.
Writing to a container format is underway.
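The Annex B framing that the snippet performs with `Buffer.Write(NalStart, 0, 3)` can be sketched in isolation. This is a minimal sketch with a hypothetical helper name; the SPS/PPS bytes are truncated and purely illustrative:

```java
import java.io.ByteArrayOutputStream;

public final class AnnexBSketch {
    private static final byte[] START_CODE = {0x00, 0x00, 0x01};

    // Prefix each NAL unit with an Annex B start code so a decoder can
    // locate unit boundaries in the raw byte stream.
    static byte[] frame(byte[]... nalUnits) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] nal : nalUnits) {
            out.write(START_CODE, 0, START_CODE.length);
            out.write(nal, 0, nal.length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] sps = {0x67, 0x42};        // truncated SPS bytes, illustrative only
        byte[] pps = {0x68, (byte) 0xCE}; // truncated PPS bytes, illustrative only
        System.out.println(frame(sps, pps).length); // 3 + 2 + 3 + 2 = 10
    }
}
```

Some implementations (and the code above) use a 3-byte start code; a 4-byte `00 00 00 01` prefix is also common and decoders generally accept both.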
With UDP packets you receive pieces of an H.264 stream, which you are expected to depacketize into H.264 NAL Units; those, in turn, you typically push into the DirectShow pipeline from your filter.
The NAL Units will be formatted as DirectShow media samples, and possibly also as part of the media type (SPS/PPS NAL Units).
Depacketization steps are described in RFC 6184 - RTP Payload Format for H.264 Video. This is payload part of RTP traffic, defined by RFC 3550 - RTP: A Transport Protocol for Real-Time Applications.
Clear, but not quite short though.
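The first depacketization step RFC 6184 describes, branching on the NAL unit type carried in the first payload byte, can be sketched as follows. The class name is hypothetical, and it assumes a fixed 12-byte RTP header with no CSRCs or header extension:

```java
public final class NalTypeSketch {
    static final int RTP_HEADER = 12; // fixed RTP header, assuming no CSRCs or extension

    /** Label the packetization mode from the NAL unit type in the first payload byte. */
    static String classify(byte[] rtpPacket) {
        int nalType = rtpPacket[RTP_HEADER] & 0x1F; // low 5 bits of the NAL header
        if (nalType >= 1 && nalType <= 23) return "single NAL unit";
        switch (nalType) {
            case 24: return "STAP-A"; // aggregation packet, 16-bit NALU sizes follow
            case 25: return "STAP-B"; // aggregation packet with DON
            case 26: return "MTAP16"; // multi-time aggregation
            case 27: return "MTAP24";
            case 28: return "FU-A";   // fragmentation unit
            case 29: return "FU-B";   // fragmentation unit with DON
            default: return "reserved/unknown";
        }
    }

    public static void main(String[] args) {
        byte[] packet = new byte[13];
        packet[12] = 0x7C; // F=0, NRI=3, type=28 -> FU-A
        System.out.println(classify(packet)); // FU-A
    }
}
```

Each branch then has its own parsing rules (size-prefixed units for STAP, a second FU header byte for FU-A/FU-B), which is exactly what the switch statements in the answers above implement.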
I recently streamed H.264 and encountered similar issues. Here is my depacketizer class. I wrote a long blog post to save others time in understanding this process: http://cagneymoreau.com/stream-video-android/
package networking;
import org.apache.commons.logging.Log;
import utility.Debug;
import java.io.Console;
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;
import java.util.*;
/**
* This class is used to re-assemble udp packets filled with rtp packets into network abstraction layer units
*
*/
public class VideoDecoder {
private static final String TAG = "VideoDecoder";
private PipedOutputStream pipedOutputStream; //this is where we pass the nalus we extract
private Map<Integer, NaluBuffer> assemblyLine = new HashMap<>(); // This holds nalus we are building. Ideally only 1 and if it exceeds 3 there might be a problem
private final int thresh = 30;
private int assemblyThresh = thresh;
private final int trashDelay = 3000;
//unpacking
private final static int HEADER_SIZE = 12;
private final static int rtpByteHeader1 = 128; //the first rtp header byte should always equal 128 (version 2, no padding, no extension, no CSRCs)
private final static int typeSPSPPS = 24;
private final static byte typeFUA = 0b01111100;
private final static byte[] startcode = new byte[] { 0x00, 0x00, 0x00, 0x01};
//experimental bools that can mix piped data
private boolean annexB = true; //remove lengths and add a prefix
private boolean mixed = false; //keep lengths and add prefix; don't use with annexB
private boolean prelStyle = false; //include avcc 6 byte data
private boolean directPipe = false; //send in the data with no editing
public VideoDecoder(PipedOutputStream pipedOutputStream)
{
this.pipedOutputStream = pipedOutputStream;
}
// raw udp rtp packets come in here from the udp packet's getData(), filled at the socket
public void addPacket(byte[] incoming)
{
if (directPipe){
transferTOFFmpeg(incoming);
return;
}
if (incoming[0] != (byte) rtpByteHeader1){
System.out.println(TAG + " rtpHeaderError " + Byte.toString(incoming[0]));
}
if (incoming[1] == typeSPSPPS){
System.out.println(TAG + "addPacket type: 24" );
unpackType24(incoming);
}
else if (incoming[1] == typeFUA){
//System.out.println(TAG + "addPacket type: 28" );
unpackType28(incoming);
}
else if (incoming[1] == 1){
System.out.println(TAG + "addPacket type: 1" );
unpackType1(incoming);
}else if (incoming[1] == 5){
System.out.println(TAG + "addPacket type: 5" );
unpackType5(incoming);
}else{
System.out.println(TAG + "addPacket unknown type - ERROR " + String.valueOf(incoming[1]) );
}
}
//SPS & PPS this will get hit before every type 5
//I'm not rtp compliant.
// length sps length pps prel = 6length
// LL SPSPSPSPSP LL PPSPPSPPSPPS 123456
private void unpackType24(byte[] twentyFour)
{
if (annexB){
int sp = ((twentyFour[13] & 0xFF) << 8 | twentyFour[14] & 0xFF); //mask each byte to avoid sign extension
int pp = ((twentyFour[sp + 15] & 0xFF) << 8 | twentyFour[sp + 16] & 0xFF);
byte[] sps = new byte[sp];
byte[] pps = new byte[pp];
System.arraycopy(twentyFour,15, sps,0,sp);
System.arraycopy(twentyFour,sp + 17, pps,0,pps.length);
transferTOFFmpeg(sps);
transferTOFFmpeg(pps);
}else if (prelStyle)
{
//Debug.debugHex("unpack24 " , twentyFour, twentyFour.length);
int spsl = (twentyFour[14] & 0xff) + 2;
int ppsl = (twentyFour[14+ spsl] & 0xff) +2;
int prel = 6;
byte[] buf = new byte[spsl + ppsl + prel]; //rtp header length - type + experimental data
System.arraycopy(twentyFour, 13, buf, 6,spsl + ppsl);
System.arraycopy(twentyFour, spsl + ppsl + 13, buf,0, 6);
transferTOFFmpeg(buf);
}else{
int spsl = (twentyFour[14] & 0xff) + 2;
int ppsl = (twentyFour[14+ spsl] & 0xff) +2;
byte[] buf = new byte[spsl + ppsl ]; //rtp header length - type + experimental data
System.arraycopy(twentyFour, 13, buf, 0,spsl + ppsl);
//System.arraycopy(twentyFour, spsl + ppsl + 13, buf,0, 6);
transferTOFFmpeg(buf);
}
}
//Single NON IDR Nal - This seems likely to never occur
private void unpackType1(byte[] one)
{
byte[] buf = new byte[one.length-12];
System.arraycopy(one, 12, buf, 0,buf.length);
transferTOFFmpeg(buf);
}
//Single IDR Nal - This seems likely to never occur
private void unpackType5(byte[] five)
{
byte[] buf = new byte[five.length-12];
System.arraycopy(five, 12, buf, 0,buf.length);
transferTOFFmpeg(buf);
}
// Unpack any split-up nalu - this will handle 99.999999% of nalus
synchronized private void unpackType28(byte[] twentyEight)
{
//Debug.deBugHexTrailing("unpack 28 ", twentyEight, 20 );
int ts = ((twentyEight[4] & 0xFF) << 24 | (twentyEight[5] & 0xFF) << 16 | (twentyEight[6] & 0xFF) << 8 | twentyEight[7] & 0xFF); //each nalu has a unique timestamp; mask each byte to avoid sign extension
//int seqN = (twentyEight[2] << 8 | twentyEight[3] & 0xFF); //each part of that nalu is numbered in order.
// numbers are from every packet ever. not this nalu. no zero or 1 start
//check if already building this nalu
if (assemblyLine.containsKey(ts)){
assemblyLine.get(ts).addPiece(twentyEight);
}
//add a new nalu
else
{
assemblyLine.put(ts, new NaluBuffer(ts, twentyEight));
}
}
//this will transfer the assembled nal units to the media codec/trans-coder/decoder/whatever?!?
private void transferTOFFmpeg(byte[] nalu)
{
Debug.debugHex("VideoDecoder transferTOFFmpg -> ", nalu, 30);
try{
if (annexB || mixed){
pipedOutputStream.write(startcode);
}
pipedOutputStream.write(nalu,0,nalu.length);
}catch (IOException ioe){
System.out.println(TAG + " transferTOFFmpeg - unable to lay pipe ;)");
}
if (assemblyLine.size() > assemblyThresh){
System.err.println(TAG + "transferToFFmpeg -> assemblyLine grows to a count of " + String.valueOf(assemblyLine.size()));
assemblyThresh += thresh;
}
}
private void clearList()
{
String n = "\n";
List<Integer> toremove = new ArrayList<>();
StringBuilder description = new StringBuilder();
for(Map.Entry<Integer, NaluBuffer> entry : assemblyLine.entrySet()) {
Integer key = entry.getKey();
NaluBuffer value = entry.getValue();
if (value.age < System.currentTimeMillis() - trashDelay){
toremove.add(key);
description
.append(String.valueOf(value.timeStamp)).append(" timestamp").append(n)
.append(String.valueOf(value.payloadType)).append(" type").append(n)
.append(String.valueOf(value.count)).append(" count").append(n)
.append(String.valueOf(value.start)).append(" ").append(String.valueOf(value.finish)).append(n)
.append(n);
}
}
for (Integer i :
toremove) {
assemblyLine.remove(i);
}
if (toremove.size() > 0){
System.out.println(TAG + " clearList current size : " + String.valueOf(assemblyLine.size()) + n + "deleting: " + toremove.size() + n + description);
assemblyThresh = thresh;
}
}
private void deletMe(int key)
{
assemblyLine.remove(key);
if (assemblyLine.size() > 3){
clearList();
}
}
/*
Once a multipart FU-A rtp packet is found it is added to a hashset containing this class
Here we do everything needed to either complete assembly and send or destroy if not completed due to presumable packet loss
** Example Packet From First FU-A with SER = 100 **
description-> |-------RTP--HEADER------| |FU-A--HEADER| |-NAL--HEADER|
byte index-> 0|1|2|3|4|5|6|7|8|9|10|11| 12|13 14|15|16|17|18
| | | | | | | | |S S R C| | |__header | | | | |__type
| | | | |TIMESTM| |__indicator | | | |__length
| | | |__sequence number | | |__length
| | |____sequence number | |___length
| |__payload |__length
|___version padding extension
*/
private class NaluBuffer
{
private final static String TAG = "NaluBuffer";
//private static final int BUFF_SIZE = 200005; // this is the max nalu size + 5 byte header we searched for in our androids nalu search
long age;
//List<String> sizes = new ArrayList<>();
NaluePiece[] buffer = new NaluePiece[167];
int count = 0;
int start;
int finish;
int timeStamp; //from rtp packets.
int completedSize; //this is number of nalu
int payloadType; //nalu type 5 or 1
int byteLength;
int naluByteArrayLength = 0;
//if it doesnt exist
NaluBuffer(int timeStamp, byte[] piece)
{
//System.out.println(TAG + " constructor " + String.valueOf(timeStamp) );
this.timeStamp = timeStamp;
age = System.currentTimeMillis();
addPieceToBuffer(piece);
count++;
}
//adding another piece
synchronized public void addPiece(byte[] piece)
{
//System.out.println(TAG + " addPiece " + String.valueOf(timeStamp));
addPieceToBuffer(piece);
count++;
}
//add to buffer. incoming data is still raw rtp packet
private void addPieceToBuffer(byte[] piece)
{
//System.out.println(TAG + " addPiecetobuffer " + String.valueOf(piece[13]));
int seqN = ((piece[2] & 0xFF) << 8 | piece[3] & 0xFF); //mask to avoid sign extension
//add to buffer
buffer[count] = new NaluePiece(seqN, Arrays.copyOfRange(piece, 14,piece.length)); // 14 because we skip rtp header of 12 and fu-a header of 2
int in = ( piece.length - 14); //we save each byte[] copied size so we can easily construct a completed array later
//sizes.add(String.valueOf(in));
naluByteArrayLength += in;
//check if first or last, completed size type etc
if ((start == 0) && (piece[13] & 0b11000000) == 0b10000000){
//start of nalu
start = ((piece[2] & 0xFF) << 8 | piece[3] & 0xFF);
//type
payloadType = (piece[13] & 0b00011111); //could have used [18] //get type
byteLength = (piece[17]&0xFF | (piece[16]&0xFF)<<8 | (piece[15]&0xFF)<<16 | (piece[14]&0xFF)<<24); //get the h264 encoded length
byteLength += 4; //Now add 4 bytes for the length encoding itself
if ((payloadType == 1 || payloadType == 5) && byteLength < 200000){
}else{
System.err.println(TAG + " addpiecetobuffer type: " + String.valueOf(payloadType) + "length: " + String.valueOf(byteLength) );
}
//System.out.println(TAG + " addpiecetobuffer start " + String.valueOf(start) + " type " + String.valueOf(payloadType));
}else if ((finish == 0) && (piece[13] & 0b11000000) == 0b01000000){
//end of nalu
finish = ((piece[2] & 0xFF) << 8 | piece[3] & 0xFF);
//System.out.println(TAG + " addpiecetobuffer finish " + String.valueOf(finish));
}
if (finish != 0 && start != 0 && completedSize == 0){
//completed size in packet sequence number NOT in byte length
completedSize = finish - start;
//System.out.println(TAG + " addpiecetobuffer completedsize " + String.valueOf(completedSize));
//originally put in bytes but thats not what I was counting ...duh!
// (piece[14] <<24 | piece[15] << 16 | piece[16] << 8 | piece[17] & 0xFF);
}
//check if complete
if (completedSize != 0 && count == completedSize){
assembleDeliver();
}
}
// we have every sequence number accounted for.
// reconstruct the nalu and send it to the decoder
private void assembleDeliver()
{
count++; //make up for the count increment that doesn't happen after the final addPieceToBuffer call
// System.out.println(TAG + " assembleDeliver " + String.valueOf(timeStamp));
//create a new array the exact length needed and sort each nalu by sequence number
NaluePiece[] newbuf = new NaluePiece[count];
System.arraycopy(buffer,0,newbuf,0, count);
Arrays.sort(newbuf);
// TODO: 9/28/2018 we have no gaps in data here checking newbuff !!!!!
//this will be an array we feed/pipe to our videoprocessor
byte[] out;
if (annexB){
out = new byte[naluByteArrayLength-4]; //remove the 4 bytes of length
int tally = 0;
int destPos = 0;
int src = 4;
for (int i = 0; i < count; i++) {
if (i == 1){
src = 0;
}
tally += newbuf[i].piece.length;
System.arraycopy(newbuf[i].piece, src, out, destPos, newbuf[i].piece.length - src);
//Debug.fillCompleteNalData(out, destPos, newbuf[i].piece.length);
destPos += newbuf[i].piece.length - src;
}
/*
StringBuilder sb = new StringBuilder();
sb.append("VideoDecoder assembleDeliver out.length ").append(String.valueOf(out.length))
.append(" destPos ").append(String.valueOf(destPos)).append(" tally ").append(String.valueOf(tally))
.append(" count ").append(String.valueOf(count)).append(" obuf ").append(String.valueOf(completedSize));
for (String s :
sizes) {
sb.append(s).append(" ");
}
System.out.println(sb.toString());
*/
}else{
out = new byte[naluByteArrayLength];
int destPos = 0;
for (int i = 0; i < count; i++) {
System.arraycopy(newbuf[i].piece, 0, out, destPos, newbuf[i].piece.length);
destPos += newbuf[i].piece.length;
}
}
if (naluByteArrayLength != byteLength){
System.err.println(TAG + " assembleDeliver -> ERROR - h264 encoded length: " + String.valueOf(byteLength) + " and byte length found: " + String.valueOf(naluByteArrayLength) + " do not match");
}
// TODO: 9/28/2018 we have gaps in data here
//Debug.checkNaluData(out);
transferTOFFmpeg(out);
deletMe(timeStamp);
}
}
//This class stores the payload and ordering info
private class NaluePiece implements Comparable<NaluePiece>
{
int sequenceNumber; //here is the number we can access to order them
byte[] piece; //here we store the raw payload data to be aggregated
public NaluePiece(int sequenceNumber, byte[] piece)
{
this.sequenceNumber = sequenceNumber;
this.piece = piece;
//Debug.checkNaluPieceData(piece);
}
@Override
public int compareTo(NaluePiece o) {
return Integer.compare(this.sequenceNumber, o.sequenceNumber);
}
}
}
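The FU-A start/end detection and NAL header reconstruction that both answers perform can be sketched in isolation. This is a minimal sketch with a hypothetical class name, assuming baseline FU-A (no DON, as in FU-B):

```java
public final class FuaHeaderSketch {
    // On the fragment with the Start bit set, rebuild the original NAL header:
    // F and NRI come from the FU indicator, the type comes from the FU header.
    static byte rebuildNalHeader(byte fuIndicator, byte fuHeader) {
        return (byte) ((fuIndicator & 0xE0) | (fuHeader & 0x1F));
    }

    static boolean isStart(byte fuHeader) { return (fuHeader & 0x80) != 0; }
    static boolean isEnd(byte fuHeader)   { return (fuHeader & 0x40) != 0; }

    public static void main(String[] args) {
        byte fuIndicator = 0x7C;     // F=0, NRI=3, type=28 (FU-A)
        byte fuHeader = (byte) 0x85; // Start=1, End=0, original type=5 (IDR slice)
        System.out.printf("start=%b header=0x%02X%n",
                isStart(fuHeader), rebuildNalHeader(fuIndicator, fuHeader));
    }
}
```

This is the same computation as `(firstByte & 0xE0) | (FUHeader & FiveBitMaxValue)` in the C# answer and the `piece[13] & 0b11000000` checks in `addPieceToBuffer` above.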
