How to send IMFSample to EVR Media Sink - ms-media-foundation

I want to use the EVR standalone, but I failed to send an IMFSample to it. The code is listed below:
//create the video renderer
IMFActivate* pActive = NULL;
hr = MFCreateVideoRendererActivate(m_hWnd, &pActive);
CHECK_HR(hr);
hr = pActive->ActivateObject(IID_IMFMediaSink, (void**)&m_pVideoSink);
CHECK_HR(hr);
hr = m_pVideoSink->GetStreamSinkByIndex(0, &m_pVideoStreamSink);
CHECK_HR(hr);
//on sample ready from a custom MFT
hr = m_pVideoStreamSink->ProcessSample(pSample);
Then I got an E_NOTIMPL error. After several hours of struggling, I implemented IMFVideoSampleAllocator:
//get the IMFVideoSampleAllocator service
hr = MFGetService(m_pVideoStreamSink, MR_VIDEO_ACCELERATION_SERVICE, IID_PPV_ARGS(&m_pAllocator));
CHECK_HR(hr);
//init IMFVideoSampleAllocator, pType is the negotiated type
hr = m_pAllocator->InitializeSampleAllocator(20, pType);
//on sample ready, pSample is the IMFSample from the MFT
IMFSample* pVideoSample = NULL;
IMFMediaBuffer* pBuffer = NULL;
LONGLONG hnsTimeStamp = 0;
//copy sample data from pSample to pVideoSample
CHECK_HR(hr = m_pAllocator->AllocateSample(&pVideoSample));
CHECK_HR(hr = pSample->GetSampleTime(&hnsTimeStamp));
CHECK_HR(hr = pVideoSample->SetSampleTime(hnsTimeStamp));
CHECK_HR(hr = pSample->GetBufferByIndex(0, &pBuffer));
CHECK_HR(hr = pVideoSample->AddBuffer(pBuffer));
hr = m_pVideoStreamSink->ProcessSample(pVideoSample);
Now everything runs without errors, but I get only a black screen with no video frames drawn on it!
Besides, I had added the SAR to my code and it works pretty well.
Any help? Thanks!

Maybe a little late to answer your question, but anyway...
I was in a similar situation and I solved it by using a Source Reader configured with MF_SOURCE_READER_D3D_MANAGER. I took the IDirect3DDeviceManager9 from the stream sink the same way you took the allocator:
hr = MFGetService(m_pVideoStreamSink, MR_VIDEO_ACCELERATION_SERVICE, IID_PPV_ARGS(&pD3DManager));
and set it as an IUnknown value for the MF_SOURCE_READER_D3D_MANAGER attribute mentioned above.
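For reference, a minimal sketch of that wiring (error handling omitted). It assumes m_pVideoStreamSink is the EVR stream sink obtained above; szURL is a hypothetical source path and the attribute store is created just for the reader:
// Sketch only; needs mfapi.h, mfidl.h, mfreadwrite.h, evr.h, d3d9.h.
IDirect3DDeviceManager9* pD3DManager = NULL;
IMFAttributes* pReaderAttributes = NULL;
IMFSourceReader* pReader = NULL;

// Take the device manager from the EVR stream sink...
HRESULT hr = MFGetService(m_pVideoStreamSink, MR_VIDEO_ACCELERATION_SERVICE, IID_PPV_ARGS(&pD3DManager));

// ...and hand it to the Source Reader through its attribute store.
hr = MFCreateAttributes(&pReaderAttributes, 1);
hr = pReaderAttributes->SetUnknown(MF_SOURCE_READER_D3D_MANAGER, pD3DManager);
hr = MFCreateSourceReaderFromURL(szURL, pReaderAttributes, &pReader);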
If you cannot use IMFSourceReader then maybe this link will be helpful:
https://code.google.com/p/webrtc4all/source/browse/trunk/gotham/MFT_WebRTC4All/test/test_evr.cc?r=15

When the pVideoSample is allocated, it already has a buffer for your use; you don't need to add any other buffers.
In your case, my guess is that the originally allocated buffer was used to render the output, which in this case is empty, and hence there's no image.
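A minimal sketch of that idea, reusing the question's CHECK_HR macro: instead of calling AddBuffer, copy the MFT output into the buffer the allocator already attached to pVideoSample (for Direct3D surface buffers you may need MFCopyImage / IMF2DBuffer instead of a plain copy):
IMFMediaBuffer* pDstBuffer = NULL;
LONGLONG hnsTime = 0;

CHECK_HR(hr = m_pAllocator->AllocateSample(&pVideoSample));
CHECK_HR(hr = pVideoSample->GetBufferByIndex(0, &pDstBuffer));  // the buffer the allocator already created
CHECK_HR(hr = pSample->CopyToBuffer(pDstBuffer));               // copy the decoded frame into that buffer
CHECK_HR(hr = pSample->GetSampleTime(&hnsTime));
CHECK_HR(hr = pVideoSample->SetSampleTime(hnsTime));
CHECK_HR(hr = m_pVideoStreamSink->ProcessSample(pVideoSample));
pDstBuffer->Release();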


Video broadcast using NDI SDK 4.5 in iOS 13 not working. Receiver in LAN does not receive any video packets

I have been trying to use NDI SDK 4.5 in an Objective-C iOS 13 app to broadcast camera capture from an iPhone device.
My sample code is in a public GitHub repo: https://github.com/bharatbiswal/CameraExampleObjectiveC
Following is how I send CMSampleBufferRef sampleBuffer:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
NDIlib_video_frame_v2_t video_frame;
video_frame.xres = VIDEO_CAPTURE_WIDTH;
video_frame.yres = VIDEO_CAPTURE_HEIGHT;
video_frame.FourCC = NDIlib_FourCC_type_UYVY; // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
video_frame.line_stride_in_bytes = VIDEO_CAPTURE_WIDTH * VIDEO_CAPTURE_PIXEL_SIZE;
video_frame.p_data = CVPixelBufferGetBaseAddress(pixelBuffer);
NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame);
I have been using "NewTek NDI Video Monitor" to receive the video from the network. However, even though it shows up as a source, the video does not play.
Has anyone used NDI SDK in iOS to build broadcast sender or receiver functionalities? Please help.
You should use kCVPixelFormatType_32BGRA in video settings. And NDIlib_FourCC_type_BGRA as FourCC in NDIlib_video_frame_v2_t.
Are you sure about your VIDEO_CAPTURE_PIXEL_SIZE?
When I worked with NDI on macOS I had the same black-screen problem, and it was due to a wrong line stride.
Maybe this can help: https://developer.apple.com/documentation/corevideo/1456964-cvpixelbuffergetbytesperrow?language=objc
Also, it seems the pixel formats from Core Video and NDI don't match.
On the Core Video side you are using bi-planar Y'CbCr 8-bit 4:2:0, and on the NDI side you are using NDIlib_FourCC_type_UYVY, which is Y'CbCr 4:2:2.
I cannot find any bi-planar Y'CbCr 8-bit 4:2:0 pixel format on the NDI side.
You may have more luck using the following combination:
core video: https://developer.apple.com/documentation/corevideo/1563591-pixel_format_identifiers/kcvpixelformattype_420ypcbcr8planarfullrange?language=objc
NDI: NDIlib_FourCC_type_YV12
Hope this helps!
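To illustrate the BGRA route from the first suggestion together with the stride advice above, here is a hedged C-style sketch (usable from Objective-C++). It assumes the capture output is configured for kCVPixelFormatType_32BGRA and that pixelBuffer and my_ndi_send come from the question's capture callback:
// Sketch only; needs CoreVideo/CoreVideo.h and Processing.NDI.Lib.h.
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

NDIlib_video_frame_v2_t frame = {};  // zero-initialize all frame fields
frame.xres = (int)CVPixelBufferGetWidth(pixelBuffer);
frame.yres = (int)CVPixelBufferGetHeight(pixelBuffer);
frame.FourCC = NDIlib_FourCC_type_BGRA;
frame.line_stride_in_bytes = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);  // may be larger than width * 4
frame.p_data = (uint8_t*)CVPixelBufferGetBaseAddress(pixelBuffer);

NDIlib_send_send_video_v2(my_ndi_send, &frame);  // synchronous send; the buffer stays locked meanwhile

CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);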
In my experience, you have two mistakes. First, to use CVPixelBufferGetBaseAddress on a CVPixelBuffer, CVPixelBufferLockBaseAddress must be called first; otherwise it returns a null pointer.
https://developer.apple.com/documentation/corevideo/1457128-cvpixelbufferlockbaseaddress?language=objc
Secondly, NDI does not support bi-planar YUV420 (the default format for iOS cameras). More precisely, NDI only accepts a single data pointer, so you have to merge the two planes into one contiguous memory block and pass it as NV12. See the NDI documentation for details.
So your code should look like the following. Note that if you send asynchronously instead of using NDIlib_send_send_video_v2, a strong reference to the transferred memory must be kept until the NDI library has finished the transfer.
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
int width = (int)CVPixelBufferGetWidth(pixelBuffer);
int height = (int)CVPixelBufferGetHeight(pixelBuffer);
OSType pixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer);

NDIlib_FourCC_video_type_e ndiVideoFormat;
uint8_t* pixelData;
int stride;
if (pixelFormat == kCVPixelFormatType_32BGRA) {
    ndiVideoFormat = NDIlib_FourCC_type_BGRA;
    pixelData = (uint8_t*)CVPixelBufferGetBaseAddress(pixelBuffer); // Or copy for asynchronous transmit.
    stride = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);         // Use the real stride; it may be padded.
} else if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    ndiVideoFormat = NDIlib_FourCC_type_NV12;
    uint8_t* yPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    int yPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    int ySize = yPlaneBytesPerRow * height;
    uint8_t* uvPlane = (uint8_t*)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
    int uvPlaneBytesPerRow = (int)CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);
    int uvSize = uvPlaneBytesPerRow * height / 2;   // The UV plane has half the rows of the Y plane.
    stride = yPlaneBytesPerRow;
    pixelData = (uint8_t*)malloc(ySize + uvSize);   // Merge the two planes into one NV12 block.
    memcpy(pixelData, yPlane, ySize);
    memcpy(pixelData + ySize, uvPlane, uvSize);
} else {
    return;
}

NDIlib_video_frame_v2_t video_frame;
video_frame.xres = width;
video_frame.yres = height;
video_frame.FourCC = ndiVideoFormat;
video_frame.line_stride_in_bytes = stride;
video_frame.p_data = pixelData;

NDIlib_send_send_video_v2(self.my_ndi_send, &video_frame); // synchronous sending.
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// For the synchronous sending case: free the copied data (or use pre-allocated memory).
if (pixelFormat == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
    free(pixelData);
}

JavaFx media bytes

I want to create a sound wave in my Java program from an MP3 file. I researched and found out that for WAV files I need to use an AudioInputStream and compute a byte array. For MP3 files I am using JavaFX Media and MediaPlayer. Are the bytes from the AudioInputStream the same as the ones from the JavaFX media.getSource().getBytes()? An AudioInputStream can't read MP3...
Or how am I supposed to get the values of an MP3 file for a sound wave?
Bytes from AudioInputStream:
AudioInputStream audioInputStream;
try {
    audioInputStream = AudioSystem.getAudioInputStream(next);
    int frameLength = (int) audioInputStream.getFrameLength();
    int frameSize = (int) audioInputStream.getFormat().getFrameSize();
    byte[] bytes = new byte[frameLength * frameSize];
    audioInputStream.read(bytes); // read the raw PCM data into the array
    g2.setColor(Color.MAGENTA);
    for (int p = 0; p < bytes.length; p++) {
        g2.fillRect(20 + (p * 3), 50, 2, bytes[p]);
    }
} catch (UnsupportedAudioFileException | IOException e) {
    e.printStackTrace();
}
And from JavaFX:
Media media;
MediaPlayer player;
media = new Media("blablafile");
player = new MediaPlayer(media);
byte[] bytes = media.getSource().getBytes();
The JavaFX Media API does not provide much low-level support as of Java 10. It seems to be designed with only the necessary features to play media, not manipulate it significantly.
That being said, you might want to look at AudioSpectrumListener. I can't promise it will give you what you want (I'm not familiar with computer-audio concepts) but it may allow you to create your sound-wave; at least a crude representation.
You use an AudioSpectrumListener with a MediaPlayer using the corresponding property.
If your calculations don't have to be in real time then you can do them ahead of time using:
byte[] bytes = URI.create(media.getSource()).toURL().openStream().readAllBytes();
Note, however, that if the media is remote you will end up downloading the bytes twice: once to get the bytes for your sound wave and again when actually playing the media with a MediaPlayer.
Also, you'll want to do the above on a background thread and not the JavaFX Application thread to avoid the possibility of freezing the UI.

Add two media sources in topology and display mix video - Windows Media Foundation

I am exploring Windows Media Foundation.
I want to mix and display two video streams in one window.
I am following a few of the samples provided by MS.
I am trying to add multiple media sources to a topology; I want to add two media files to the topology.
As per the link below, I am following the code to add a media source to the topology:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms701605(v=vs.85).aspx
Below is the code that adds the source node to the topology:
HRESULT hr = pPD->GetStreamDescriptorByIndex(iStream, &fSelected, &pSD);
if (FAILED(hr))
{
    goto done;
}

if (fSelected)
{
    // Create the media sink activation object.
    hr = CreateMediaSinkActivate(pSD, hVideoWnd, &pSinkActivate);
    if (FAILED(hr))
    {
        goto done;
    }

    // Add a source node for this stream.
    hr = AddSourceNode(pTopology, pSource, pPD, pSD, &pSourceNode);
    if (FAILED(hr))
    {
        goto done;
    }

    // Create the output node for the renderer.
    hr = AddOutputNode(pTopology, pSinkActivate, 0, &pOutputNode);
    if (FAILED(hr))
    {
        goto done;
    }

    // Connect the source node to the output node.
    hr = pSourceNode->ConnectOutput(0, pOutputNode, 0);
}
But I am not able to add multiple media sources to my topology.
Single-file playback works properly, but I am not able to mix and display two files.
I recommend splitting your task into five steps:
1. Write code for playing ONE video file. On MSDN there is example code: How to Play Media Files with Media Foundation.
2. Study the code of the WORKING player to find the point where the MediaSource is created from the video file path (URL).
3. Create TWO MediaSources from the two video file paths (URLs).
4. Using the function MFCreateAggregateSource, create ONE MediaSource from the TWO MediaSources and return that MediaSource from the player's HRESULT CreateMediaSource(PCWSTR sURL, IMFMediaSource **ppSource) method (see the sketch below).
5. Call hr = AddOutputNode(pTopology, pSinkActivate, 0, &pOutputNode); twice: hr = AddOutputNode(pTopology, pSinkActivate, 0, &pOutputNode); for the first video stream and hr = AddOutputNode(pTopology, pSinkActivate, 1, &pOutputNode); for the second video stream.
Regards.
P.S. If you use two videos that also have audio streams, you will have FOUR streams in the aggregate MediaSource, so you may need to FIND the stream IDs of the video streams.
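A minimal sketch of step 4, assuming pSource1 and pSource2 were already created from the two file URLs (for example with the MSDN CreateMediaSource helper); error handling omitted:
IMFCollection* pCollection = NULL;
IMFMediaSource* pAggSource = NULL;

HRESULT hr = MFCreateCollection(&pCollection);
hr = pCollection->AddElement(pSource1);  // MediaSource of the first video file
hr = pCollection->AddElement(pSource2);  // MediaSource of the second video file

// Combine both sources into ONE aggregate MediaSource; its presentation
// descriptor exposes the streams of both files.
hr = MFCreateAggregateSource(pCollection, &pAggSource);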
P.P.S. It is not easy to give recommendations from viewing only the demo code, but in CreateMediaSinkActivate you will find the call hr = MFCreateVideoRendererActivate(hVideoWindow, &pActivate);. In your code, you must create this activate object beforehand:
// For each stream, create the topology nodes and add them to the topology.
for (DWORD i = 0; i < cSourceStreams; i++)
{
    hr = AddBranchToPartialTopology(pTopology, pSource, pPD, i, hVideoWnd);
    if (FAILED(hr))
    {
        goto done;
    }
}
then pass this created activate object as an argument to AddBranchToPartialTopology,
for example:
hr = MFCreateVideoRendererActivate(hVideoWindow, &pVideoRendererActivate);

// For each stream, create the topology nodes and add them to the topology.
for (DWORD i = 0; i < cSourceStreams; i++)
{
    hr = AddBranchToPartialTopology(pTopology, pSource, pPD, i, pVideoRendererActivate);
    if (FAILED(hr))
    {
        goto done;
    }
}
In AddBranchToPartialTopology you must write something like this:
HRESULT AddBranchToPartialTopology(
    IMFTopology *pTopology,                  // Topology.
    IMFMediaSource *pSource,                 // Media source.
    IMFPresentationDescriptor *pPD,          // Presentation descriptor.
    DWORD iStream,                           // Stream index.
    IMFActivate* aVideoRendererActivate)     // Video renderer for video playback.
{
    IMFStreamDescriptor *pSD = NULL;
    IMFActivate *pSinkActivate = NULL;
    IMFTopologyNode *pSourceNode = NULL;
    IMFTopologyNode *pOutputNode = NULL;
    BOOL fSelected = FALSE;

    HRESULT hr = pPD->GetStreamDescriptorByIndex(iStream, &fSelected, &pSD);
    if (FAILED(hr))
    {
        goto done;
    }

    DWORD iStreamID = 0;
    if (fSelected)
    {
        // Create the media sink activation object.
        hr = CreateMediaSinkActivate(pSD, iStreamID, aVideoRendererActivate, &pSinkActivate);
In 'CreateMediaSinkActivate' you must write something like this:
DWORD globalVideoIndex = 0;

HRESULT CreateMediaSinkActivate(
    IMFStreamDescriptor *pSourceSD,       // Pointer to the stream descriptor.
    DWORD& iStreamID,                     // Stream index.
    IMFActivate *pVideoRendererActivate,  // Handle to the video renderer activate.
    IMFActivate **ppActivate
    )
{
    IMFMediaTypeHandler *pHandler = NULL;
    IMFActivate *pActivate = NULL;

    // Get the media type handler for the stream.
    HRESULT hr = pSourceSD->GetMediaTypeHandler(&pHandler);
    if (FAILED(hr))
    {
        goto done;
    }

    // Get the major media type.
    GUID guidMajorType;
    hr = pHandler->GetMajorType(&guidMajorType);
    if (FAILED(hr))
    {
        goto done;
    }

    // Create an IMFActivate object for the renderer, based on the media type.
    if (MFMediaType_Audio == guidMajorType)
    {
        // Create the audio renderer.
        hr = MFCreateAudioRendererActivate(&pActivate);
    }
    else if (MFMediaType_Video == guidMajorType)
    {
        // Share the video renderer.
        hr = pVideoRendererActivate->QueryInterface(IID_PPV_ARGS(&pActivate));
        iStreamID = globalVideoIndex++;
    }
    else
    {
        // Unknown stream type.
        hr = E_FAIL;
        // Optionally, you could deselect this stream instead of failing.
    }
    if (FAILED(hr))
    {
        goto done;
    }

    // Return the IMFActivate pointer to the caller.
    *ppActivate = pActivate;
    (*ppActivate)->AddRef();

done:
    SafeRelease(&pHandler);
    SafeRelease(&pActivate);
    return hr;
}
In AddBranchToPartialTopology you must write:
// Create the output node for the renderer.
hr = AddOutputNode(pTopology, pSinkActivate, iStreamID, &pOutputNode);
if (FAILED(hr))
{
    goto done;
}
For audio streams iStreamID will be zero, but for video streams it increments from the global variable globalVideoIndex.
The idea is that the code creates the video renderer activate BEFORE creating the topology, which is fine. This ONE video renderer activate is then shared by reference pointer between ALL video streams in the MediaSource via the check if (MFMediaType_Video == guidMajorType). Each VIDEO stream gets a unique ID, starting from 0, by incrementing the global variable globalVideoIndex++; this ID is passed to hr = AddOutputNode(pTopology, pSinkActivate, iStreamID, &pOutputNode);. As a result, all video streams are drawn by ONE video renderer: the video stream with iStreamID 0 becomes the reference (background) stream, while the other video streams become additional substreams.

How to receive data properly from Mbus in C#?

I have a really terrible problem that is making me almost sick. For 2-3 days I've been dealing with this protocol issue, and I find myself here to get some help from you guys. I hope I'll solve it. Thanks in advance. I had code in VB that uses the old MSComm library, so I decided to redo all of that in C#. I'm OK with opening and closing the port, sending data, etc.
In VB I have the following piece of code, which requests data from the M-Bus driver via RS485. Once you send this, the driver responds with its data. It works with no problem.
Dim SendData(19) As Byte
Dim sending As String
SendData(0) = &HFA
SendData(1) = Mid(DriverNo, 1, 2)
SendData(2) = Mid(DriverNo, 3, 2)
SendData(3) = Mid(DriverNo, 5, 2)
SendData(4) = Mid(DriverNo, 7, 2)
SendData(5) = 210
SendData(6) = CheckSum_Temass(5)
SendData(7) = &HFB
sending = ""
For i = 0 To 7
sending = sending + Chr(SendData(i))
Next
SP.Output = sending
So, the code above works fine in VB and VB.NET. However, when I convert it to C# as follows, I can't get a response from the M-Bus driver. While sending data via RS485 I can see that the yellow LED fires; normally, while receiving data, the red LED also fires. The code in C#:
string sending= "";
byte[] SendData = new byte[8];
SentData[0] = 0xfa;
SendData[1] = Convert.ToByte((Strings.Mid(DriverNo, 1, 2)));
SendData[2] = Convert.ToByte((Strings.Mid(DriverNo, 3, 2)));
SendData[3] = Convert.ToByte((Strings.Mid(DriverNo, 5, 2)));
SendData[4] = Convert.ToByte((Strings.Mid(DriverNo, 7, 2)));
SendData[5] = 210
SendData[6] = CheckSum_Temass(5);
SendData[7] = 0xfb;
for (int i = 0; i <= 7; i++)
{
sending= sending+ ((char)SendData[i]);
}
sp.Write(sending);
I can't see any problem with this, but the VB code works and the C# code does not.
In C#, the following is part of my open-port function:
sp.PortName = portName;
sp.BaudRate = baudRate;
sp.DataBits = databits;
sp.Parity = parity;
sp.StopBits = StopBits.One;//stopBits;
sp.PinChanged += SerialPinChangedEventHandler1;
sp.ErrorReceived += SerialErrorReceivedEventHandler1;
sp.DataReceived += new SerialDataReceivedEventHandler(DataReceived);
sp.ReadTimeout = 1000;
sp.WriteTimeout = 1000;
Everything works fine. As I said, I can see the flow of data to the M-Bus via RS485; the TX LED fires every time I send data. However, as I said, the RX LED does not fire.
I solved the problem, which was related to parity. By default it is None, but in my system it was supposed to be Even. So I can receive data right now, but the problem is now the speed of the data.
In VB I call the valveopen function four times to open the valve, so I code it like:
valveopen();
valveopen();
valveopen();
valveopen();
However, in C# God knows how many times it will operate :). At the moment I can only open the valve arbitrarily. Everything else is the same and there is no problem. I think the data transfer speeds of MSComm and SerialPort are different.
I solved the problem by flushing the read and send buffers.

Qt using threadpools, unable to receive all the data at once in readyRead()

I'm a newbie in Qt and C++. I'm trying to create a QTcpServer that uses QThreadPool so it can handle multiple clients. Multiple clients are able to connect without any issues. I'm trying to send an image from an Android phone with a footer "IMGPNG" indicating the end of the image data. Now the issue is that when the readyRead signal is emitted, I try to read all the available data and then perform some string operations later and reconstruct the image. I'm not sure how to receive the complete image for each client and then process it accordingly.
void VireClients::readyRead() //read ready
{
    int nsize = socket->bytesAvailable(); //trying to check the available bytes
    qDebug() << "Bytes Available" << nsize;

    while (socket->bytesAvailable() < nsize) {
        QByteArray data = socket->readAll(); //how to receive all the data and then process it
    }

    /*!These lines call the threadpool instance and reimplement run*/
    imageAnalysis = new VireImageAnalysis(); //creating a new instance of the QRunnable
    imageAnalysis->setAutoDelete(true);
    connect(imageAnalysis, SIGNAL(ImageAnalysisResult(int)), this, SLOT(TaskResult(int)), Qt::QueuedConnection);
    QThreadPool::globalInstance()->start(imageAnalysis);
}
Now I'm not sure how to get the data completely or save the received data in an image format. I want to know how to completely receive the image data. Please help.
A call to readAll() will not always read the complete image as it obviously cannot know the size of the image. It will only read all currently available bytes which might be less than your whole file, or more if the sender is really fast and you cannot catch up reading. The same way readyRead() only informs you that there are bytes available but not that a whole file has been received. It could be a single byte or hundreds of bytes.
Either you know the size of your image in the first place because it is always fixed, or the sender has to tell the receiver the number of bytes it is going to send.
Then you can either ignore all readyRead() signals until bytesAvailable() matches your image size and call readAll() to read the whole image at once, or you read whenever bytes are available and fill up your buffer until the number of bytes read matches the number the sender announced.
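Since the question already appends an "IMGPNG" footer, here is a minimal sketch of that buffering idea using the footer instead of a size prefix. It assumes a per-connection QByteArray member (m_buffer is a hypothetical name), and passing the complete image to the runnable's constructor is also an assumption:
void VireClients::readyRead()
{
    m_buffer.append(socket->readAll());       // accumulate whatever has arrived so far

    int footer = m_buffer.indexOf("IMGPNG");  // look for the end-of-image marker
    if (footer == -1)
        return;                               // image not complete yet, wait for the next readyRead()

    QByteArray image = m_buffer.left(footer); // complete image bytes, footer excluded
    m_buffer.remove(0, footer + 6);           // keep any bytes that belong to the next image

    imageAnalysis = new VireImageAnalysis(image);  // hand the full image to the worker
    imageAnalysis->setAutoDelete(true);
    QThreadPool::globalInstance()->start(imageAnalysis);
}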
I solved the image-saving issue by collecting the data in a temporary string and finally using OpenCV's imwrite to save the image, which solved it:
while (iBytesAvailable > 0)
{
    if (socket->isValid())
    {
        char* pzBuff = new char[iBytesAvailable];
        int iReadBytes = socket->read(pzBuff, iBytesAvailable);
        if (iReadBytes > 0)
        {
            result1 += iReadBytes;
            str += std::string(reinterpret_cast<char const *>(pzBuff), iReadBytes);
            if (str.size() > 0) {
                search = str.find("IMGPNG");
                if (search == result1 - 6) {
                    finalID = QString::fromStdString(str);
                    Singleton_Global *strPtr = Singleton_Global::instance();
                    strPtr->setResult(finalID);
                    /*!Process the received image here*/
                    SaveImage = new VSaveImage();
                    SaveImage->setAutoDelete(false);
                    connect(SaveImage, SIGNAL(SaveImageResult(QString)), this, SLOT(TaskResult(QString)), Qt::QueuedConnection);
                    threadPool->start(SaveImage);
                }
            }
        }
        delete[] pzBuff;
    }
    iBytesAvailable = socket->bytesAvailable(); // refresh the remaining byte count
}
Finally, I did the image saving in the run method of SaveImage. @DavidSchwartz you were a great help, thanks. Thanks all for your help.
