DirectShow: format type change in a filter

How can I change rcTarget in a filter?
See the MSDN article "Source and Target Rectangles in Video Renderers".
I would like an example in Free Pascal with DSPack.
My project works with the 720x576 video format. If I could change rcTarget on a filter, e.g. the VIDEO pin of LAVSplitter, my problem would be solved.
Let me explain my problem:
My project is developed with DSPack and Lazarus.
I have to play a playlist of media files and add scrolling text.
The output goes to a DeckLink card (analog or similar), connected to a TV monitor via S-Video.
At the moment, the video window on the desktop and the TV monitor connected to the DeckLink only show the videos that are natively 720x576.
I want every video format to play. The MSDN article "Source and Target Rectangles in Video Renderers" describes how to change rcTarget in the VIDEOINFOHEADER, but I do not know how to write that in Pascal with DSPack. Or is there another way to resize the video?
Do I have to build a new filter, or can I change the rcTarget property of an existing filter such as LAVSplitter?
My graph:
| source file | -> 1920x1080 | LAV Splitter | -> 720x576 | LAV Decoder | -> | Tee Filter | -> video window and DeckLink renderer
On the internet there are many examples of resizing webcam capture from an external device, but I cannot find example code for resizing output to an external device. That is why I am asking for help.
I have an example for MPEG-2, but it is not working. Where am I going wrong?
// var
//   mt     : AM_MEDIA_TYPE;
//   seqHdr : array [0..0] of Byte;  // is this right?
//   pWIH   : MPEG2VIDEOINFO;
ZeroMemory(@mt, SizeOf(AM_MEDIA_TYPE));
mt.MajorType := MEDIATYPE_Video;
mt.SubType := MEDIASUBTYPE_RGB32;
mt.FormatType := FORMAT_MPEG2_VIDEO;
mt.cbFormat := SizeOf(MPEG2VIDEOINFO) + SizeOf(seqHdr);
mt.pbFormat := CoTaskMemAlloc(mt.cbFormat);
if mt.pbFormat = nil then Exit; // ERROR
ZeroMemory(mt.pbFormat, mt.cbFormat);
{ RCSRC.Left := 0;
  RCSRC.Top := 0;
  RCSRC.Right := 0;
  RCSRC.Bottom := 0; }
pWIH.hdr.rcSource.Left := 0;
pWIH.hdr.rcSource.Top := 0;
pWIH.hdr.rcSource.Right := 0;
pWIH.hdr.rcSource.Bottom := 0;
// pWIH.hdr.rcSource := RCSRC;
// pWIH.hdr.rcTarget := Rect(0, 0, 720, 576);
pWIH.hdr.rcTarget.Left := 0;
pWIH.hdr.rcTarget.Top := 0;
pWIH.hdr.rcTarget.Right := 576;
pWIH.hdr.rcTarget.Bottom := 720;
pWIH.hdr.AvgTimePerFrame := 278335;
pWIH.hdr.dwPictAspectRatioX := 4;
pWIH.hdr.dwPictAspectRatioY := 3;
pWIH.hdr.bmiHeader.biSize := 40;
pWIH.hdr.bmiHeader.biWidth := 720;
pWIH.hdr.bmiHeader.biHeight := 576;
pWIH.cbSequenceHeader := SizeOf(seqHdr);
CopyMemory(@pWIH.dwSequenceHeader, @seqHdr, SizeOf(seqHdr));
//-------------------------------------
SourceFilter.FindPin('Output', PinOutSource);
(VideoWindow1 as IBaseFilter).FindPin('Input', Pin_Input);
PinOutSource.Connect(Pin_Input, @mt);
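For reference, a minimal, untested sketch of how the allocated pbFormat block would normally be filled through a pointer rather than through a separate local record (the pointer type name PMPEG2VideoInfo is an assumption about the DirectShow header translation in use; a 720x576 target means Right = 720 and Bottom = 576):
var
  pInfo : PMPEG2VideoInfo;
...
// after CoTaskMemAlloc/ZeroMemory of mt.pbFormat:
pInfo := PMPEG2VideoInfo(mt.pbFormat);        // point at the allocated format block
pInfo^.hdr.rcSource := Rect(0, 0, 0, 0);
pInfo^.hdr.rcTarget := Rect(0, 0, 720, 576);  // left, top, right, bottom
pInfo^.hdr.AvgTimePerFrame := 278335;
pInfo^.hdr.dwPictAspectRatioX := 4;
pInfo^.hdr.dwPictAspectRatioY := 3;
pInfo^.hdr.bmiHeader.biSize := SizeOf(TBitmapInfoHeader);
pInfo^.hdr.bmiHeader.biWidth := 720;
pInfo^.hdr.bmiHeader.biHeight := 576;
pInfo^.cbSequenceHeader := SizeOf(seqHdr);
CopyMemory(@pInfo^.dwSequenceHeader, @seqHdr, SizeOf(seqHdr));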

Thank you, thank you, thank you.
I want to use option 3, "Use this Resizer DMO filter".
I created the Resizer filter as in your example. OK.
Now I want to use (FResizerDMO as IMediaObject).SetOutputType to resize my video.
I am having difficulty; can you help me?
The site alax.info says:
1. CoCreateInstance the DSP as a DMO and add it to a DMO Wrapper Filter
2. Use IWMResizerProps::SetFullCropRegion to initialize the DSP (do I not need a crop region?)
3. Connect the input pin
4. Set the output type via IMediaObject::SetOutputType
5. IGraphBuilder::ConnectDirect the output pin
Is the above correct?
I wrote this:
var
  pVIH : VIDEOINFOHEADER;
  mt   : DMO_MEDIA_TYPE;

ZeroMemory(@mt, SizeOf(DMO_MEDIA_TYPE));
mt.majortype := MEDIATYPE_Video;
mt.subtype := MEDIASUBTYPE_RGB32;
mt.formattype := FORMAT_VideoInfo;
// ** I cannot translate this **
// VIDEOINFOHEADER * pVIH = (VIDEOINFOHEADER *)pmt->pbFormat;
// pVIH := TVideoInfoHeader(mt.pbFormat^); ????
pVIH.bmiHeader.biWidth := 720;
pVIH.bmiHeader.biHeight := 576;
pVIH.bmiHeader.biXPelsPerMeter := 9;
pVIH.bmiHeader.biYPelsPerMeter := 16;
pVIH.bmiHeader.biSizeImage := 720 * 480 * 3;
hr := (FResizerDMO as IMediaObject).SetOutputType(0, @mt, DMO_SET_TYPEF_CLEAR);
if hr <> S_OK then ShowMessage('error'); // I receive an error
Is this the right approach? If it is correct, can you help me figure out why it does not work?
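A note on step 4: below is a minimal, untested Pascal sketch of filling the format block before calling SetOutputType. The type names used here (PVideoInfoHeader, TBitmapInfoHeader, an AM_MEDIA_TYPE-compatible DMO_MEDIA_TYPE) are assumptions about the DSPack/DirectShow header translation in use. The key point is that pbFormat must be allocated first and then cast, which is the Pascal equivalent of the C++ line above that could not be translated.
var
  mt   : DMO_MEDIA_TYPE;
  pVIH : PVideoInfoHeader;  // a pointer type, not a record variable
  hr   : HRESULT;
begin
  ZeroMemory(@mt, SizeOf(DMO_MEDIA_TYPE));
  mt.majortype := MEDIATYPE_Video;
  mt.subtype := MEDIASUBTYPE_RGB32;
  mt.formattype := FORMAT_VideoInfo;
  // allocate the VIDEOINFOHEADER that pbFormat points to
  mt.cbFormat := SizeOf(TVideoInfoHeader);
  mt.pbFormat := CoTaskMemAlloc(mt.cbFormat);
  if mt.pbFormat = nil then Exit;
  ZeroMemory(mt.pbFormat, mt.cbFormat);
  // Pascal equivalent of: VIDEOINFOHEADER *pVIH = (VIDEOINFOHEADER *)pmt->pbFormat;
  pVIH := PVideoInfoHeader(mt.pbFormat);
  pVIH^.bmiHeader.biSize := SizeOf(TBitmapInfoHeader);
  pVIH^.bmiHeader.biWidth := 720;
  pVIH^.bmiHeader.biHeight := 576;
  pVIH^.bmiHeader.biPlanes := 1;
  pVIH^.bmiHeader.biBitCount := 32;          // RGB32
  pVIH^.bmiHeader.biCompression := BI_RGB;
  pVIH^.bmiHeader.biSizeImage := 720 * 576 * 4;
  // pass 0 as the flags; DMO_SET_TYPEF_CLEAR is for clearing a previously set type
  hr := (FResizerDMO as IMediaObject).SetOutputType(0, @mt, 0);
  CoTaskMemFree(mt.pbFormat);                // the DMO keeps its own copy of the type
  if hr <> S_OK then ShowMessage('SetOutputType failed');
end;
Per the alax.info list, SetFullCropRegion should already have been called and the input pin connected before this step.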

Related

NewTek NDI (SDK v5) with Qt6.3: How to display NDI video frames on the GUI?

I have integrated the NDI SDK from NewTek in the current version 5 into my Qt6.3 widget project.
I copied and included the required DLLs and header files from the NDI SDK installation directory into my project.
To test my build environment I tried to compile a simple test program based on the example from "..\NDI 5 SDK\Examples\C++\NDIlib_Recv".
That was also successful, so I am able to receive data from my NDI source.
There is a valid frame in video_frame, of type NDIlib_video_frame_v2_t. Within the structure I can also query correct data about the frame, such as its size (.xres and .yres).
The pointer p_data points to the actual data.
So far so good.
Of course, I now want to display this frame on the Qt6 GUI. In other words, the only thing missing now is the conversion into an appropriate format so that I can display the frame with QImage, QPixmap, QLabel, etc.
But how?
So far I've tried calls like this:
curFrame = QImage(video_frame.p_data, video_frame.xres, video_frame.yres, QImage::Format::Format_RGB888);
curFrame.save("out.jpg");
I'm not sure if the format is correct either.
Here's a closer look at the frame structure within the Qt debug session:
[screenshot: the NDI video frame in the Qt debug session, after receiving]
Within "video_frame" you can see the specification video_type_UYVY.
This may really be the format as it appears at the source!?
Fine, but how do I get this converted now?
Many thanks and best regards
You mean something like this? :)
https://github.com/NightVsKnight/QtNdiMonitorCapture
Specifically:
https://github.com/NightVsKnight/QtNdiMonitorCapture/blob/main/lib/ndireceiverworker.cpp
Assuming you connect using NDIlib_recv_color_format_best:
NDIlib_recv_create_v3_t recv_desc;
recv_desc.p_ndi_recv_name = "QtNdiMonitorCapture";
recv_desc.source_to_connect_to = ...;
recv_desc.color_format = NDIlib_recv_color_format_best;
recv_desc.bandwidth = NDIlib_recv_bandwidth_highest;
recv_desc.allow_video_fields = true;
pNdiRecv = NDIlib_recv_create_v3(&recv_desc);
Then when you receive a NDIlib_video_frame_v2_t:
void NdiReceiverWorker::processVideo(
        NDIlib_video_frame_v2_t *pNdiVideoFrame,
        QList<QVideoSink*> *videoSinks)
{
    auto ndiWidth = pNdiVideoFrame->xres;
    auto ndiHeight = pNdiVideoFrame->yres;
    auto ndiLineStrideInBytes = pNdiVideoFrame->line_stride_in_bytes;
    auto ndiPixelFormat = pNdiVideoFrame->FourCC;
    auto pixelFormat = NdiWrapper::ndiPixelFormatToPixelFormat(ndiPixelFormat);
    if (pixelFormat == QVideoFrameFormat::PixelFormat::Format_Invalid)
    {
        qDebug().nospace() << "Unsupported pNdiVideoFrame->FourCC " << NdiWrapper::ndiFourCCToString(ndiPixelFormat) << "; return;";
        return;
    }

    QSize videoFrameSize(ndiWidth, ndiHeight);
    QVideoFrameFormat videoFrameFormat(videoFrameSize, pixelFormat);
    QVideoFrame videoFrame(videoFrameFormat);
    if (!videoFrame.map(QVideoFrame::WriteOnly))
    {
        qWarning() << "videoFrame.map(QVideoFrame::WriteOnly) failed; return;";
        return;
    }

    auto pDstY = videoFrame.bits(0);
    auto pSrcY = pNdiVideoFrame->p_data;
    auto pDstUV = videoFrame.bits(1);
    auto pSrcUV = pSrcY + (ndiLineStrideInBytes * ndiHeight);
    for (int line = 0; line < ndiHeight; ++line)
    {
        memcpy(pDstY, pSrcY, ndiLineStrideInBytes);
        pDstY += ndiLineStrideInBytes;
        pSrcY += ndiLineStrideInBytes;
        if (pDstUV)
        {
            // For now QVideoFrameFormat/QVideoFrame does not support P216. :(
            // I have started the conversation to have it added, but that may take awhile. :(
            // Until then, copying only every other UV line is a cheap way to downsample P216's 4:2:2 to P016's 4:2:0 chroma sampling.
            // There are still a few visible artifacts on the screen, but it is passable.
            if (line % 2)
            {
                memcpy(pDstUV, pSrcUV, ndiLineStrideInBytes);
                pDstUV += ndiLineStrideInBytes;
            }
            pSrcUV += ndiLineStrideInBytes;
        }
    }
    videoFrame.unmap();

    foreach (QVideoSink *videoSink, *videoSinks)
    {
        videoSink->setVideoFrame(videoFrame);
    }
}
QVideoFrameFormat::PixelFormat NdiWrapper::ndiPixelFormatToPixelFormat(enum NDIlib_FourCC_video_type_e ndiFourCC)
{
    switch (ndiFourCC)
    {
    case NDIlib_FourCC_video_type_UYVY:
        return QVideoFrameFormat::PixelFormat::Format_UYVY;
    case NDIlib_FourCC_video_type_UYVA:
        return QVideoFrameFormat::PixelFormat::Format_UYVY;
    // Result when requesting NDIlib_recv_color_format_best
    case NDIlib_FourCC_video_type_P216:
        return QVideoFrameFormat::PixelFormat::Format_P016;
    //case NDIlib_FourCC_video_type_PA16:
    //    return QVideoFrameFormat::PixelFormat::?;
    case NDIlib_FourCC_video_type_YV12:
        return QVideoFrameFormat::PixelFormat::Format_YV12;
    //case NDIlib_FourCC_video_type_I420:
    //    return QVideoFrameFormat::PixelFormat::?;
    case NDIlib_FourCC_video_type_NV12:
        return QVideoFrameFormat::PixelFormat::Format_NV12;
    case NDIlib_FourCC_video_type_BGRA:
        return QVideoFrameFormat::PixelFormat::Format_BGRA8888;
    case NDIlib_FourCC_video_type_BGRX:
        return QVideoFrameFormat::PixelFormat::Format_BGRX8888;
    case NDIlib_FourCC_video_type_RGBA:
        return QVideoFrameFormat::PixelFormat::Format_RGBA8888;
    case NDIlib_FourCC_video_type_RGBX:
        return QVideoFrameFormat::PixelFormat::Format_RGBX8888;
    default:
        return QVideoFrameFormat::PixelFormat::Format_Invalid;
    }
}

Setting eraser type to bitmap for PencilKit (iOS)

Using PencilKit for iOS, how do I set the eraser tool to .bitmap for PKToolPicker?
I can't find any setting for PKToolPicker. Trying to use PKCanvasView to observe and set the tool's eraserType as .bitmap also does not work.
override func toolPickerSelectedToolDidChange(_ toolPicker: PKToolPicker) {
    var tool = toolPicker.selectedTool as? PKEraserTool
    if tool != nil {
        tool?.eraserType = .bitmap
    }
}
PKEraserTool is a struct, so when you change its eraserType, you're actually modifying a copy of the tool that's being used in the canvas.
What you need to do is simply set the PKCanvasView's tool property and it will work.
override func toolPickerSelectedToolDidChange(_ toolPicker: PKToolPicker) {
    if var tool = toolPicker.selectedTool as? PKEraserTool {
        tool.eraserType = .bitmap
        // this line below will do the trick
        canvasView.tool = tool
    }
}
Let me know if it worked! 😊
Applies to iOS 13 and iOS 14
To set the toolpicker's selected tool as a bitmap eraser (where toolPicker is the PKToolPicker):
toolPicker?.selectedTool = PKEraserTool(.bitmap)
To set the canvas view's tool to a bitmap eraser (where canvasView is the PKCanvasView):
canvasView.tool = PKEraserTool(.bitmap)
This code, based on your example, will keep the tool picker's eraser tool as bitmap (pixel eraser) even if vector (object eraser) was chosen. (Tested on iOS 14.)
func toolPickerSelectedToolDidChange(_ toolPicker: PKToolPicker) {
    if toolPicker.selectedTool is PKEraserTool {
        toolPicker.selectedTool = PKEraserTool(.bitmap)
    }
}

How to keep the shadow in a borderless window

I'm trying to drop a shadow on a borderless window using Qt on Windows.
I succeeded in dropping the shadow when launching the application, referring to the following articles:
Borderless Window Using Aero Snap, Shadow, Minimize Animation, and Shake
Borderless Window with Drop Shadow
But I ran into the problem that the shadow disappears if the application is deactivated and then reactivated (in other words, if I click another application and then click my application again).
Perhaps my implementation is not good enough.
I'd be glad if you have some ideas about this issue.
I'm implementing this in Qt with Go bindings.
Here is the code snippet:
package qframelesswindow

import (
    "unsafe"

    "github.com/therecipe/qt/core"
    "github.com/therecipe/qt/widgets"

    win "github.com/akiyosi/w32"
)

func (f *QFramelessWindow) SetNativeEvent(app *widgets.QApplication) {
    filterObj := core.NewQAbstractNativeEventFilter()
    filterObj.ConnectNativeEventFilter(func(eventType *core.QByteArray, message unsafe.Pointer, result int) bool {
        msg := (*win.MSG)(message)
        lparam := msg.LParam
        hwnd := msg.Hwnd

        var uflag uint
        uflag = win.SWP_NOZORDER | win.SWP_NOOWNERZORDER | win.SWP_NOMOVE | win.SWP_NOSIZE | win.SWP_FRAMECHANGED
        var nullptr win.HWND
        shadow := &win.MARGINS{0, 0, 0, 1}

        switch msg.Message {
        case win.WM_CREATE:
            style := win.WS_POPUP | win.WS_THICKFRAME | win.WS_MINIMIZEBOX | win.WS_MAXIMIZEBOX | win.WS_CAPTION
            win.SetWindowLong(hwnd, win.GWL_STYLE, uint32(style))
            win.DwmExtendFrameIntoClientArea(hwnd, shadow)
            win.SetWindowPos(hwnd, nullptr, 0, 0, 0, 0, uflag)
            return true
        case win.WM_NCCALCSIZE:
            if msg.WParam == 1 {
                // this kills the window frame and title bar we added with WS_THICKFRAME and WS_CAPTION
                result = 0
                return true
            }
            return false
        case win.WM_GETMINMAXINFO:
            mm := (*win.MINMAXINFO)((unsafe.Pointer)(lparam))
            mm.PtMinTrackSize.X = int32(f.minimumWidth)
            mm.PtMinTrackSize.Y = int32(f.minimumHeight)
            return true
        default:
        }
        return false
    })
    app.InstallNativeEventFilter(filterObj)
}
All source code is in my repository;
akiyosi/goqtframelesswindow
From the WM_NCCALCSIZE documentation:
If wParam is TRUE, the application should return zero or a combination
of the following values.
And also:
When wParam is TRUE, simply returning 0 without processing the
NCCALCSIZE_PARAMS rectangles will cause the client area to resize to
the size of the window, including the window frame. This will remove
the window frame and caption items from your window, leaving only the
client area displayed.
Starting with Windows Vista, simply returning 0 does not affect extended frames, only the standard frame will be removed.
EDIT:
Set the return value with DWL_MSGRESULT instead of assigning result = 0.

How to display progress of the TFDScript execution using TProgressBar?

I have a script in a file "MyScript.sql".
On the form I have a TProgressBar.
I want to run the script with TFDScript and move the progress bar according to the script's progress.
My code is:
Var
  Lista: TStringList; // SQL DDL list for creating and populating the table
  I: Integer;
Begin
  With FDConn Do // FDConn is my FireDAC connection
  Begin
    LoginPrompt := False;
    With Params Do
    Begin
      Clear;
      DriverID := 'SQLite';
      Database := 'MyDatabase.sdb';
      LoginPrompt := False;
    End;
    Lista := TStringList.Create;
    Lista.Clear;
    Try
      FDScript.ValidateAll; // FDScript is a TFDScript and prgBar is a TProgressBar
      prgBar.Max := FDScript.TotalJobSize - 1;
      prgBar.Update;
      Lista.Clear;
      Lista.LoadFromFile('MyScript.sql');
      // Now how can I read the script line by line and move the progress bar with
      prgBar.StepIt;
      prgBar.Update;
You can handle the OnProgress event and read there e.g. the TotalJobSize property to determine the number of bytes to process and TotalJobDone to get the number of bytes already processed. For example:
procedure TForm1.FDScript1Progress(Sender: TObject);
begin
  ProgressBar1.Max := TFDScript(Sender).TotalJobSize;
  ProgressBar1.Position := TFDScript(Sender).TotalJobDone;
end;
If your progress bar control's value is set up as a percentage, you'd better read the TotalPct10Done property.
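For instance, a minimal sketch of the percentage-based variant, assuming TotalPct10Done reports progress in tenths of a percent (0..1000):
procedure TForm1.FDScript1Progress(Sender: TObject);
begin
  // TotalPct10Done is the percentage done multiplied by 10, so scale the bar to 0..1000
  ProgressBar1.Max := 1000;
  ProgressBar1.Position := TFDScript(Sender).TotalPct10Done;
end;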

SHGetFileInfo produces icon with black background

I have a problem with SHGetFileInfo. I am using FPC 2.6.2 with Lazarus 1.0.14; here is the code:
procedure x;
var
  FI: SHFILEINFO;
  icon: TIcon;
begin
  SHGetFileInfo('app.exe', 0, FI, SizeOf(FI), SHGFI_SYSICONINDEX or SHGFI_ICON or SHGFI_LARGEICON);
  icon := TIcon.Create;
  icon.Handle := FI.hIcon;
  icon.SaveToFile('extracted.ico');
end;
The problem is that it produces an icon file with a black background instead of a transparent one. Here is how it looks:
http://i.imgur.com/5BF3xbT.jpg
When I compile the same code in Delphi, it works perfectly; the icon has a transparent background.
I would appreciate it if anyone could help me solve this problem :-)
I had the same problem some time ago. The LCL does not seem to have full alpha support in TIcon, so you have to use another, similar component. I tried TKIcon and it works. You can find it here: http://www.tkweb.eu/en/delphicomp/kicon.html
Here is a sample procedure to extract an icon. It is very simple.
procedure ExtractIconAndSave(xpath: string);
var
  FileInfo: SHFILEINFO;
  Icon: KIcon.TIcon; // not to be confused with Graphics.TIcon
begin
  // Get the icon handle
  SHGetFileInfo(PChar(xpath), 0, FileInfo, SizeOf(FileInfo), SHGFI_SYSICONINDEX or SHGFI_ICON or SHGFI_LARGEICON);
  // Check whether SHGetFileInfo got the icon handle
  if FileInfo.hIcon <> 0 then
  begin
    // Use KIcon's TIcon - it supports 32bpp alpha
    Icon := KIcon.TIcon.Create;
    try
      // Load the icon handle into KIcon's TIcon and save it to a file
      Icon.LoadFromHandle(FileInfo.hIcon);
      Icon.SaveToFile('extracted.ico');
    finally
      DestroyIcon(FileInfo.hIcon);
      FreeAndNil(Icon);
    end;
  end;
end;
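For completeness, a minimal usage sketch (a hypothetical host program that assumes the procedure above is in scope and the KIcon unit is installed). The SHGetFileInfo documentation also notes that COM must be initialized before calling it, which can matter in a bare test program:
program ExtractIconDemo;

uses
  Windows, ShellAPI, ActiveX, SysUtils, KIcon;

begin
  // initialize COM as recommended by the SHGetFileInfo documentation
  CoInitialize(nil);
  try
    ExtractIconAndSave('app.exe'); // writes extracted.ico to the current directory
  finally
    CoUninitialize;
  end;
end.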
