Video broadcast using NDI SDK 4.5 on iOS 13 not working: receiver on the LAN does not receive any video packets

I have been trying to use NDI SDK 4.5 in an Objective-C iOS 13 app to broadcast the camera capture from an iPhone.
My sample code is in public Github repo: https://github.com/bharatbiswal/CameraExampleObjectiveC
Here is how I send the CMSampleBufferRef sampleBuffer:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
NDIlib_video_frame_v2_t video_frame;
video_frame.xres = VIDEO_CAPTURE_WIDTH;
video_frame.yres = VIDEO_CAPTURE_HEIGHT;
video_frame.FourCC = NDIlib_FourCC_type_UYVY; // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
video_frame.line_stride_in_bytes = VIDEO_CAPTURE_WIDTH * VIDEO_CAPTURE_PIXEL_SIZE;
video_frame.p_data = CVPixelBufferGetBaseAddress(pixelBuffer);
NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame);
I have been using "NewTek NDI Video Monitor" to receive the video over the network. The iPhone shows up as a source, but the video does not play.
Has anyone used the NDI SDK on iOS to build sender or receiver functionality? Please help.

You should use kCVPixelFormatType_32BGRA in your video settings, and NDIlib_FourCC_type_BGRA as the FourCC in NDIlib_video_frame_v2_t.

Are you sure about your VIDEO_CAPTURE_PIXEL_SIZE?
When I worked with NDI on macOS I had the same black-screen problem, and it was caused by a wrong line stride.
Maybe CVPixelBufferGetBytesPerRow can help: https://developer.apple.com/documentation/corevideo/1456964-cvpixelbuffergetbytesperrow?language=objc
Also, it seems the pixel formats from Core Video and NDI don't match.
On the Core Video side you are using bi-planar Y'CbCr 8-bit 4:2:0, while on the NDI side NDIlib_FourCC_type_UYVY is Y'CbCr 4:2:2.
I cannot find any bi-planar Y'CbCr 8-bit 4:2:0 pixel format on the NDI side.
You may have more luck using the following combination:
core video: https://developer.apple.com/documentation/corevideo/1563591-pixel_format_identifiers/kcvpixelformattype_420ypcbcr8planarfullrange?language=objc
NDI: NDIlib_FourCC_type_YV12
Hope this helps!

In my experience, you have two mistakes. First, to use CVPixelBufferGetBaseAddress on a CVPixelBuffer, you must call CVPixelBufferLockBaseAddress first; otherwise it returns a null pointer.
https://developer.apple.com/documentation/corevideo/1457128-cvpixelbufferlockbaseaddress?language=objc
Secondly, NDI does not support bi-planar YUV 4:2:0 (the default format for iOS cameras). More precisely, NDI accepts only a single data pointer, so you have to merge the two planes into one contiguous buffer and pass it in NV12 format. See the NDI documentation for details.
So your code should look like the following. Note that if you send asynchronously (NDIlib_send_send_video_async_v2) instead of NDIlib_send_send_video_v2, you must keep a strong reference to the transferred memory until the NDI library has finished with it.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int width = (int)CVPixelBufferGetWidth(pixelBuffer);
int height = (int)CVPixelBufferGetHeight(pixelBuffer);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);
NDIlib_FourCC_video_type_e ndiVideoFormat;
uint8_t* pixelData;
int stride;
if (pixelFormat == kCVPixelFormatType_32BGRA) {
ndiVideoFormat = NDIlib_FourCC_type_BGRA;
pixelData = (uint8_t*)CVPixelBufferGetBaseAddress(pixelBuffer); // Or copy for asynchronous transmit.
stride = (int)CVPixelBufferGetBytesPerRow(pixelBuffer); // may include padding; don't assume width * 4
} else if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
ndiVideoFormat = NDIlib_FourCC_type_NV12;
uint8_t* yPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
int yPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
int ySize = yPlaneBytesPerRow * height;
uint8_t* uvPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
int uvPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
int uvSize = uvPlaneBytesPerRow * (height / 2); // the CbCr plane has half the rows in 4:2:0
stride = yPlaneBytesPerRow;
pixelData = (uint8_t*)malloc(ySize + uvSize); // sized from the actual strides, which may include padding
memcpy(pixelData, yPlane, ySize);
memcpy(pixelData + ySize, uvPlane, uvSize);
} else {
return;
}
NDIlib_video_frame_v2_t video_frame = {0}; // zero-initialize so unset fields are not garbage
video_frame.xres = width;
video_frame.yres = height;
video_frame.FourCC = ndiVideoFormat;
video_frame.line_stride_in_bytes = stride;
video_frame.p_data = pixelData;
NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame); // synchronous sending.
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
// For the synchronous sending case: free the data, or use pre-allocated memory.
if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
free(pixelData);
}

Related

JavaFx media bytes

I want to create a sound wave in my Java program from an MP3 file. I researched and found out that for WAV files I need to use an AudioInputStream and calculate a byte array... For MP3 files I am using JavaFX Media and MediaPlayer. Are the bytes from the InputStream the same as those from media.getSource().getBytes()? An AudioInputStream can't read MP3...
Or how am I supposed to get the values of an MP3 file for a sound wave?
Bytes from AudioInputStream:
AudioInputStream audioInputStream;
try {
audioInputStream = AudioSystem.getAudioInputStream(next);
int frameLength = (int) audioInputStream.getFrameLength();
int frameSize = (int) audioInputStream.getFormat().getFrameSize();
byte[] bytes = new byte[frameLength * frameSize];
g2.setColor(Color.MAGENTA);
for(int p = 0; p < bytes.length; p++){
g2.fillRect(20 + (p * 3), 50, 2, bytes[p]);
}
} catch (UnsupportedAudioFileException | IOException e) {
e.printStackTrace();
}
And from JavaFX:
Media media;
MediaPlayer player;
media = new Media("blablafile");
player = new MediaPlayer(media);
byte[] bytes = media.getSource().getBytes();
The JavaFX Media API does not provide much low-level support as of Java 10. It seems to be designed with only the features necessary to play media, not to manipulate it significantly.
That being said, you might want to look at AudioSpectrumListener. I can't promise it will give you what you want (I'm not familiar with computer-audio concepts), but it may allow you to create your sound wave, or at least a crude representation.
You use an AudioSpectrumListener with a MediaPlayer using the corresponding property.
If your calculations don't have to be in real time then you can do them ahead of time using:
byte[] bytes = URI.create(media.getSource()).toURL().openStream().readAllBytes();
Note, however, that if the media is remote you will end up downloading the bytes twice: once to get the bytes for your sound wave, and again when actually playing the media with a MediaPlayer.
Also, you'll want to do the above on a background thread and not the JavaFX Application thread to avoid the possibility of freezing the UI.
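The background-thread advice can be sketched without JavaFX. In this illustration a local temp file stands in for media.getSource() (the real source URL is whatever your Media was built from), and the blocking read happens off the UI thread:

```java
import java.io.InputStream;
import java.net.URI;
import java.nio.file.Files;
import java.nio.file.Path;

public class ByteLoader {
    public static void main(String[] args) throws Exception {
        // Stand-in for media.getSource(): a local file URI, purely for illustration.
        Path tmp = Files.createTempFile("fake-media", ".mp3");
        Files.write(tmp, new byte[] {1, 2, 3});
        String source = tmp.toUri().toString();

        // Do the blocking read off the JavaFX Application thread.
        Thread loader = new Thread(() -> {
            try (InputStream in = URI.create(source).toURL().openStream()) {
                byte[] bytes = in.readAllBytes();
                System.out.println("read " + bytes.length + " bytes");
                // In a real app, hand the result back to the UI via Platform.runLater(...).
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        loader.start();
        loader.join();
    }
}
```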

How to change volume of an audio AVPacket

I have a desktop Qt-based application that fetches a sound stream from the network and plays it using QAudioOutput. I want to provide a volume control to the user so that he can reduce the volume. My code looks like this:
float volume_control = get_user_pref(); // user-provided volume level in [0.0, 1.0]
for (;;) {
AVPacket *retrieved_pkt = get_decoded_packet_stream(); // from network stream
AVPacket *work_pkt = change_volume(retrieved_pkt, volume_control); // this is what I need
// remaining code to play the work_pkt ...
}
How do I implement change_volume(), or is there an off-the-shelf function that I can use?
Edit: Adding codec-related info as requested in the comments
QAudioFormat format;
format.setFrequency(44100);
format.setChannels(2);
format.setSampleSize(16);
format.setCodec("audio/pcm");
format.setByteOrder(QAudioFormat::LittleEndian);
format.setSampleType(QAudioFormat::SignedInt);
The following code works just fine.
// audio_buffer is a byte array of size data_size
// volume_level is a float between 0 (silent) and 1 (original volume)
int16_t * pcm_data = (int16_t*)(audio_buffer);
int32_t pcmval;
for (int ii = 0; ii < (data_size / 2); ii++) { // 16-bit samples, hence data_size / 2
pcmval = pcm_data[ii] * volume_level ;
pcm_data[ii] = pcmval;
}
Edit: I think there is significant scope for optimization here, since my solution is compute-intensive. I guess avcodec_decode_audio() could be used to speed it up.

How to read the weight from a Weight USB Scale

I have a USB weighing scale from stamps.com (Model 510: http://www.stamps.com/postage-online/digital-postage-scales/).
I was able to find the drivers online to make it work stand-alone, but my next question is: how do I read the weight of the object on the scale from my classic ASP page / VBScript?
Does anyone have any suggestions where I should begin my search?
I'm not sure if this is applicable to your specific model, but there's an article at http://nicholas.piasecki.name/blog/2008/11/reading-a-stamps-com-usb-scale-from-c-sharp/ where the author has written C# code to read from the scale, because it conforms to the basic USB HID (human interface device) standard. The author made use of Mike O'Brien's HID library: https://github.com/mikeobrien/HidLibrary
They start off by getting the raw bytes:
HidDeviceData inData;
HidDevice[] hidDeviceList;
HidDevice scale;
hidDeviceList = HidDevices.Enumerate(0x1446, 0x6A73);
if (hidDeviceList.Length > 0)
{
int waitTries;
scale = hidDeviceList[0];
waitTries = 0;
scale.Open();
if (scale.IsConnected)
{
inData = scale.Read(250);
for (int i = 0; i < inData.Data.Length; ++i)
{
Console.WriteLine("Byte {0}: {1:X}", i, inData.Data[i]);
}
}
scale.Close();
scale.Dispose();
}
They then go on to reverse-engineer the payload and construct a function to get the weight in ounces:
private void GetStampsComModel2500iScaleWeight(out decimal? ounces, out bool? isStable)
{
HidDeviceData inData;
HidDevice[] hidDeviceList;
HidDevice scale;
isStable = null;
ounces = null;
hidDeviceList = HidDevices.Enumerate(0x1446, 0x6A73);
if (hidDeviceList.Length > 0)
{
int waitTries;
scale = hidDeviceList[0];
waitTries = 0;
scale.Open();
// For some reason, the scale isn't always immediately available
// after calling Open(). Let's wait for a few milliseconds before
// giving up.
while (!scale.IsConnected && waitTries < 10)
{
Thread.Sleep(50);
waitTries++;
}
if (scale.IsConnected)
{
inData = scale.Read(250);
ounces = (Convert.ToDecimal(inData.Data[4]) +
Convert.ToDecimal(inData.Data[5]) * 256) / 10;
isStable = inData.Data[1] == 0x4;
}
scale.Close();
scale.Dispose();
}
}
In order to read the weight from your classic ASP page / VBScript (on the server, right?), the easiest solution looks to be turning the working C# class into a COM component. There are tutorials you can follow to create the C# COM component and register it on the server; then you would call it from VBScript along these lines (GetWeight is a hypothetical method your component would expose):
Dim app
Set app = Server.CreateObject("MyScaleComponent")
weight = app.GetWeight() ' hypothetical method name

Read from file in playn

Here's a stupid question.
How do you read files in a PlayN game? I tried using File and Scanner like I usually do in a standard Java program:
void readFromFile(){
int x;
int y;
int pixel;
int[][] board;
try{
Scanner scan = new Scanner(new File(in));
x = scan.nextInt();
y = scan.nextInt();
pixel = scan.nextInt();
Point start = new Point(scan.nextInt(), scan.nextInt());
Point dir = new Point(scan.nextInt(), scan.nextInt());
Point end = new Point(scan.nextInt(), scan.nextInt());
int antRoads = scan.nextInt();
board = new int[x][y];
for (int i = 0; i < y; i++){
for (int j = 0; j < x; j++){
board[i][j] = scan.nextInt();
}
}
lev = new Level(board, start, dir, end, antRoads, pixel, x, y);
} catch(FileNotFoundException e){
System.out.println(e);
}
}
I tested File.canRead(), canWrite() and canExecute(), and they all returned false.
Am I supposed to use assetManager().getText() or something? If so, can someone tell me how it works? (Or what ResourceCallback is and how it works?)
My goal is to have a folder named "Maps" filled with maps in regular text format, just like the standard images folder.
Regards,
Torgeir
You cannot do normal file I/O in a PlayN game, because games are compiled to JavaScript and run in the browser (when using the HTML5 backend), and the browser supports no file I/O (at least not the general-purpose file I/O you would need here).
Browsers do not even support the idea of a byte stream or any sort of binary I/O (this may eventually arrive, but it will be ages before it's supported by all browsers).
So you have to use AssetManager.getText to read data files. You can encode them as JSON and use PlayN.json() to decode them, or you can use your own custom string-based format.
If you don't plan to deploy using the HTML5 backend, you can create a "LevelReader" interface and implement that interface in your Android or iOS backend and make use of the native file I/O capabilities on those platforms.
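A minimal sketch of that LevelReader idea (the interface and class names here are hypothetical, not part of the PlayN API): the game parses the same whitespace-separated format through an interface, and each backend decides how the level text is obtained, whether via AssetManager.getText or native file I/O.

```java
import java.util.Scanner;

// Hypothetical interface: each backend supplies level text its own way.
interface LevelReader {
    String readLevelText(String name);
}

public class LevelLoader {
    private final LevelReader reader;

    public LevelLoader(LevelReader reader) {
        this.reader = reader;
    }

    // Parse the whitespace-separated board format from the question:
    // width, height, then width*height cell values.
    public int[][] loadBoard(String name) {
        Scanner scan = new Scanner(reader.readLevelText(name));
        int x = scan.nextInt();
        int y = scan.nextInt();
        int[][] board = new int[y][x];
        for (int i = 0; i < y; i++)
            for (int j = 0; j < x; j++)
                board[i][j] = scan.nextInt();
        return board;
    }

    public static void main(String[] args) {
        // A trivial in-memory backend, standing in for a platform implementation.
        LevelReader fake = name -> "2 2 1 2 3 4";
        int[][] board = new LevelLoader(fake).loadBoard("maps/level1.txt");
        System.out.println(board[0][0] + " " + board[1][1]);
    }
}
```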

Checking if a QImage has an alpha channel

I want to know if a QImage I loaded contains an alpha channel. I already know that QImage::hasAlphaChannel() tells me whether the image format I'm using supports an alpha channel, but is there a way to know if it's actually used in the loaded image?
Here is my snippet for checking whether alpha is really used. It's useful when the image is in ARGB32 format.
bool useAlpha = false;
const uchar* pixelData = image.bits();
int bytes = image.byteCount();
for (const QRgb* pixel = reinterpret_cast<const QRgb*>(pixelData); bytes > 0; pixel++, bytes -= sizeof(QRgb)) {
if (qAlpha(*pixel) != UCHAR_MAX) {
useAlpha = true;
break;
}
}
Remember also that there is the format() method.
If the format you load the QImage as has an alpha channel, your QImage has an alpha channel.
If you want to check whether any pixel is actually set to something other than fully opaque, you could try generating an alpha mask with QImage::createAlphaMask() and inspecting its pixel values.
