This seems too simple; I must be overlooking something.
How do I find the native video size or aspect ratio from a video file being displayed by a QMediaPlayer?
The video resolution, PixelAspectRatio, etc., should be in the metadata, but I wait for the metadata-update signals, and wait for seconds after the video play()s, yet isMetaDataAvailable() always returns false, and availableMetaData() and metaData(QMediaMetaData::Resolution).toSize() always return empty.
There seems to be nowhere else to get the video resolution information, or am I missing something?
I can open the video, play the video at full screen, etc.
You can use a QVideoWidget instance as the video output for the QMediaPlayer and retrieve the native size of the video from QVideoWidget::sizeHint().
QSize MyVideoPlayer::getVideoNativeSize(const QString& videoFilePath)
{
    m_mediaPlayer = new QMediaPlayer(0, QMediaPlayer::VideoSurface);
    m_mediaPlayer->setVideoOutput(m_videoWidget);
    m_mediaPlayer->setMedia(QUrl::fromLocalFile(videoFilePath));
    connect(m_mediaPlayer, SIGNAL(mediaStatusChanged(QMediaPlayer::MediaStatus)),
            this, SLOT(OnMediaStatusChanged(QMediaPlayer::MediaStatus)));

    // Pump the event loop until the media is buffered; only then does
    // the video widget know the native size of the video.
    m_isStoppingVideo = false;
    QEventLoop loop;
    m_mediaPlayer->play();
    while (!m_isStoppingVideo)
    {
        loop.processEvents();
    }

    disconnect(m_mediaPlayer, SIGNAL(mediaStatusChanged(QMediaPlayer::MediaStatus)),
               this, SLOT(OnMediaStatusChanged(QMediaPlayer::MediaStatus)));
    m_mediaPlayer->stop();
    return m_videoWidget->sizeHint();
}

void MyVideoPlayer::OnMediaStatusChanged(QMediaPlayer::MediaStatus mediaStatus)
{
    if (mediaStatus == QMediaPlayer::BufferedMedia)
    {
        m_isStoppingVideo = true;
    }
}
For finding the resolution without metadata, you can take a look at this question from the Qt Forums for a possible solution:
http://forum.qt.io/topic/31278/solved-get-resolution-of-a-video-file-40-qmediaplayer-41/2
I solved my problem by waiting until the user plays the video; as soon as they do, I read the QGraphicsVideoItem property nativeSize.
I also solved this problem with the QGraphicsVideoItem nativeSize property. The tricky thing is that nativeSize becomes valid only some time after you start playing the video. The trick is to connect to the QGraphicsVideoItem::nativeSizeChanged(const QSizeF &size) signal, which is emitted once the real native size has been obtained.
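A minimal sketch of that approach (Qt 5 syntax; the setup of the player and the video item is my assumption, not code from the original answer):

// Sketch: assumes a QMediaPlayer named player and a QGraphicsVideoItem
// already added to a QGraphicsScene; identifiers are illustrative.
QGraphicsVideoItem *videoItem = new QGraphicsVideoItem;
player->setVideoOutput(videoItem);

QObject::connect(videoItem, &QGraphicsVideoItem::nativeSizeChanged,
                 [](const QSizeF &size) {
    // size stays invalid until the pipeline has determined
    // the real native size of the video
    if (size.isValid())
        qDebug() << "native video size:" << size;
});

player->setMedia(QUrl::fromLocalFile("video.mp4"));
player->play();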
My problem is that the text does not display on the canvas:
SKPaint _paint = new SKPaint();
using (Stream stream = _assembly.GetManifestResourceStream(fontPath))
{
    _paint.Typeface = SKTypeface.FromStream(stream);
}
canvas.DrawText("12345", X, Y, _paint);
If I don't use the using block (or call Dispose), it works like a charm. But if I do, then before execution reaches canvas.DrawText, _paint is not null, but all the values in FontMetrics, FontSpacing, and TextSize are 0, and no text appears on the canvas.
I'm so confused, could you give me some advice, please ?
Thank you.
The reason this is happening is that you are disposing of the stream and then trying to use it. SKTypeface takes ownership of the stream, and you must not close it yourself. As soon as the typeface is no longer in use, it may decide to close the stream.
The correct way to use this is to open the stream, pass it to SKTypeface and then just worry about the typeface:
SKPaint _paint = new SKPaint();
Stream stream = _assembly.GetManifestResourceStream(fontPath);
_paint.Typeface = SKTypeface.FromStream(stream);
canvas.DrawText("12345", X, Y, _paint);
https://developer.xamarin.com/api/member/SkiaSharp.SKTypeface.FromStream/p/System.IO.Stream/System.Int32/
Returns a new typeface given a stream. Ownership of the stream is transferred, so the caller must not reference it again.
I am showing an image in a Qt label. Below is my code:
void MyClass::onPushButtonClicked(QString myurl)
{
    this->setCursor(Qt::WaitCursor);
    ui.qtImageLabel->clear();
    qDebug() << QTime::currentTime() << "MyClass: onPushButtonClicked";

    QNetworkAccessManager *qnam_push_button_clicked_show_image;
    QNetworkReply *reply;
    QNetworkRequest request;
    request.setHeader(QNetworkRequest::ContentTypeHeader, "application/x-www-form-urlencoded");
    QUrl url(myurl);
    request.setUrl(url);

    qnam_push_button_clicked_show_image = new QNetworkAccessManager(this);
    if (qnam_push_button_clicked_show_image)
    {
        QObject::connect(qnam_push_button_clicked_show_image, SIGNAL(finished(QNetworkReply*)),
                         this, SLOT(onPushButtonClickedRequestCompleted(QNetworkReply*)));
        reply = qnam_push_button_clicked_show_image->post(request, url.encodedQuery());
        QEventLoop loop;
        QObject::connect(reply, SIGNAL(finished()), &loop, SLOT(quit()));
        loop.exec();
    }
}
void MyClass::onPushButtonClickedRequestCompleted(QNetworkReply *reply)
{
    qDebug() << QTime::currentTime() << "MyClass: onPushButtonClickedRequestCompleted request completed";
    if (reply->error() != QNetworkReply::NoError)
    {
        qDebug() << "Error in" << reply->url() << ":" << reply->errorString();
        this->setCursor(Qt::ArrowCursor);
        return;
    }

    QByteArray data = reply->readAll();
    QPixmap pixmap;
    pixmap.loadFromData(data);

    int width;
    int height;
    // application size can be changed
    QRect rec = QApplication::desktop()->screenGeometry();
    height = rec.height();
    width = rec.width();
    qDebug() << QTime::currentTime() << width << "," << height;

    QSize *size = new QSize(width, height);
    if (size)
    {
        QPixmap scaledPixmap = pixmap.scaled(*size);
        ui.qtImageLabel->setPixmap(scaledPixmap);
    }
    if (size)
    {
        delete size;
        size = NULL;
    }
    data.clear();
    this->setCursor(Qt::ArrowCursor);
    reply->deleteLater();
    return;
}
On clicking the push button, it sends a request to the server and shows a different image received from the server. It works fine until it has been clicked about 500 times. Beyond that, first this error is shown:
QPixmap::scaled: Pixmap is a null pixmap
and it doesn't show the image. Then, if someone sends another request for an image, it shows the following error:
Qt has caught an exception thrown from an event handler. Throwing exceptions from an event handler is not supported in Qt. You must reimplement QApplication::notify() and catch all exceptions there.
I can't see what the error in the above code is. Can someone please tell me how to solve this?
The obvious leak is qnam_push_button_clicked_show_image = new QNetworkAccessManager(this);, which doesn't have a balanced delete anywhere. A QNAM should typically be created once and then reused for the lifetime of the application rather than created for a single request. So by turning qnam_push_button_clicked_show_image into a class member (same as ui) you'll both fix your leak and improve the efficiency of the code.
That said, I don't think that's what causes your QPixmap error. If you're running this code on X11, then QPixmap is backed by an X Pixmap resource, which is limited by various factors (software and hardware). Even though there's no obvious leak in your code, it could be that repeatedly allocating large pixmaps slowly fragments the memory pool managed by X, up to the point where it can't allocate a block large enough for the scaled pixmap, which then triggers the error. Or it could be a driver bug somewhere in the graphics stack. Have you checked whether changing the scaled size increases or decreases the number of clicks before it starts breaking? If so, switching to QImage might help relieve the pressure on X.
Aside from that, the code could use some cleanup, especially that superfluous QEventLoop usage. I'm guessing it's a way to prevent the button from being clicked several times until the new image has been loaded, but I'd much rather implement this with button.setEnabled(false) while the image is downloading, because nested event loops combined with network events are a recipe for countless reentrancy issues and hard-to-debug crashes.
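A rough sketch of what I mean (names like m_nam and onImageDownloaded are mine, not from your code, and I've simplified the request to a GET):

// Sketch only: the QNetworkAccessManager lives as a class member, created
// once; the button is disabled while a download is in flight instead of
// nesting an event loop.
class MyClass : public QWidget
{
    Q_OBJECT
    // ...
private:
    QNetworkAccessManager m_nam; // reused for every request
};

void MyClass::onPushButtonClicked(QString myurl)
{
    ui.pushButton->setEnabled(false); // no nested event loop needed
    QNetworkRequest request(QUrl(myurl));
    QNetworkReply *reply = m_nam.get(request);
    connect(reply, SIGNAL(finished()), this, SLOT(onImageDownloaded()));
}

void MyClass::onImageDownloaded()
{
    QNetworkReply *reply = qobject_cast<QNetworkReply*>(sender());
    // ... load the pixmap from reply->readAll() as before ...
    reply->deleteLater();
    ui.pushButton->setEnabled(true); // ready for the next request
}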
I'm also confused about why size is allocated on the heap, especially when it's deleted right after, and those if (size) checks are really confusing: they can be read as if (size->isValid()), while what they really mean is if (size != nullptr), which is pretty much guaranteed, as the chance of getting an OOM on that line is infinitesimally low (if you did eventually run out of memory, my guess is it would likely happen in the readAll() or loadFromData() calls above).
ps: good luck pressing that button another 500 times to check if fixing the leak helped ;)
I'm currently working on a web site that will show a kind of image gallery on some detail pages. It must show a navigation bar at the bottom with small thumbnail images, and for each element it must show some basic information and the big image.
The big image must be resized too, because there is a maximum size allowed for it.
The point is to use just one source image per multimedia component and to be able to resize the images at publish time, so that from the source image both a thumbnail and a big image are sent to the client browser. It's possible to show small and big images using just styles or HTML, but this is quite inefficient because the source image (some of them really heavy) is always sent to the customer.
My first thought was a custom code fragment, something written in C#, but I find it complicated to resize only some images to a certain size and then resize them again to another size. I also can't find a way to replace the SRC in the final HTML with the appropriate paths.
Another idea was to create an old-style PublishBinary method, but that seems really complex, because the current Tridion architecture doesn't look like it is meant to do this at all...
And the most important point: even if we can do the resizing successfully (somehow), it is currently a Tridion 2011 issue to publish the same image twice. Both the big and the small version would actually come from the same multimedia component, so it shouldn't be possible to publish both of them; even playing with the names, the first one would always be gone, because the path would be updated with the second one :-S.
Any ideas?
I have built an image re-sizing TBB in the past which reads the output of a Dreamweaver or XSLT template. The idea is to produce a tag like the following with the first template.
<img src="tcm:1-123" maxWidth="250" maxHeight="400"
cropPosition="middle" variantId="250x400"
action="PostProcess" enlargeIfTooSmall="true"
/>
The "Re-Sizing" TBB then post processes the Output item in the package, looking for nodes with the PostProcess action.
It then creates a variant of the Multimedia Component using the System.Drawing library according to the maxHieght and maxWidth dimention attributes, and publishes it using the AddBinary() method #frank mentioned and using the variantId attribute for a filename prefix, and variant id (and replaces the SRC attribute with the URL of the new binary).
To make this 100% flexible, if either of the maxHeight or maxWidth attributes are set to 0, the TBB re-sizes based on just the "non-zero" dimension, or if both are set it crops the image based on the cropPosition attribute. This enables us to make sqare thumbnails for both landscape and portrait images without distorting them. The enlargeIfTooSmall attribute is use to prevent small images from being stretched to much.
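To make those dimension rules concrete, here is a hedged sketch of the decision logic as I read it (the real TBB is .NET; this is a C++ illustration with invented names, not the actual code):

#include <algorithm>

// Illustration only. Returns the scale factor to apply to a srcW x srcH
// image; a maxW or maxH of 0 means "no constraint on that dimension".
double computeScale(int srcW, int srcH, int maxW, int maxH, bool enlargeIfTooSmall)
{
    double scaleW = maxW > 0 ? static_cast<double>(maxW) / srcW : 0.0;
    double scaleH = maxH > 0 ? static_cast<double>(maxH) / srcH : 0.0;

    // With both constraints set, scale so the image covers the whole
    // maxW x maxH box; the overflow is then cropped at cropPosition.
    // With only one constraint set, the other scale is 0 and drops out.
    double scale = std::max(scaleW, scaleH);

    // Don't blow small images up unless enlargeIfTooSmall allows it.
    if (scale > 1.0 && !enlargeIfTooSmall)
        scale = 1.0;

    return scale;
}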
You can see samples of the final galleries here: http://medicine.yale.edu/web/help/examples/specific/photo/index.aspx
and other image re-sizing examples here:
http://medicine.yale.edu/web/help/examples/general/images.aspx
All of the images are just uploaded to the CMS once, and then re-sized and cropped on the fly at publish time.
Tridion can perfectly well publish multiple variants of a single MMC. When you call AddBinary, you can specify that the binary is a variant of the MMC, with each variant identified by a simple string that you specify.
public Binary AddBinary(
    Stream content,
    string filename,
    StructureGroup location,
    string variantId,
    Component relatedComponent,
    string mimeType
)
As you can see, you can also specify the filename for the binary. If you do, it is your responsibility to ensure that variants have unique filenames and that filenames between different MMCs remain unique. So typically it is easiest to simply prefix or suffix the filename with some indication of the variantId: <MmcImageFileName>_thumbnail.jpg.
For a recent demo project, I took a completely different approach. The binaries are all published to a broker database. They are extracted from the broker with an HttpModule, which writes the binary data to the file system.
I made it possible to encode the desired width or height in the URL of the image (of course, for binaries that are not images this part of the logic will not work). The module then resizes the image on the fly (truly on the fly, not during publishing!) and writes the resized version to the disk.
For example: if I request /Images/photo.jpg, I will get the original image. If I request /Images/photo_h100.jpg, I get a version of 100 pixels high. The url /Images/photo_w150.jpg leads to a width of 150 pixels.
No variants needed, no republishing because of different size requirements either: resizing is completely done on demand! The performance penalty on the server is negligible: each size is generated only once, until the image is republished.
I used .NET, but of course it can work in Java as well.
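To illustrate the URL convention (the real module is an ASP.NET HttpModule; this C++ sketch only shows the name-parsing side, with invented identifiers):

#include <optional>
#include <regex>
#include <string>

// Illustration only: "photo.jpg" -> original image,
// "photo_h100.jpg" -> 100 pixels high, "photo_w150.jpg" -> 150 pixels wide.
struct ResizeRequest {
    std::string baseName; // name of the original binary to resize
    char dimension;       // 'w' (width) or 'h' (height)
    int pixels;           // requested size in pixels
};

std::optional<ResizeRequest> parseResizeUrl(const std::string &fileName)
{
    static const std::regex re(R"((.+)_([wh])(\d+)(\.[A-Za-z0-9]+)$)");
    std::smatch m;
    if (!std::regex_match(fileName, m, re))
        return std::nullopt; // no size suffix: serve the original image
    return ResizeRequest{ m[1].str() + m[4].str(), m[2].str()[0],
                          std::stoi(m[3].str()) };
}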
Following Frank's and Quirijn's answers, you may be interested in resizing the image in a Cartridge Claims processor using the Ambient Data Framework. This solution is technology agnostic and can be re-used in both Java and .NET. You just need to put the resized image bytes in a claim and then use it from Java or .NET.
Java Claims Processor:
public void onRequestStart(ClaimStore claims) throws AmbientDataException {
    int publicationId = getPublicationId();
    int binaryId = getBinaryId();
    BinaryContentDAO bcDAO = (BinaryContentDAO) StorageManagerFactory.getDAO(publicationId, StorageTypeMapping.BINARY_CONTENT);
    BinaryContent binaryContent = bcDAO.findByPrimaryKey(publicationId, binaryId, null);
    byte[] binaryBuff = binaryContent.getContent();
    float pixelRatio = getPixelRatio(); // declared locally; not used further in this snippet
    int resizeWidth = getResizeWidth();
    BufferedImage original = ImageIO.read(new ByteArrayInputStream(binaryBuff));
    if (original.getWidth() < MAX_IMAGE_WIDTH) {
        float ratio = (resizeWidth * 1.0f) / (float) MAX_IMAGE_WIDTH;
        float width = original.getWidth() * ratio;
        float height = original.getHeight() * ratio;
        BufferedImage resized = new BufferedImage(Math.round(width), Math.round(height), original.getType());
        Graphics2D g = resized.createGraphics();
        g.setComposite(AlphaComposite.Src);
        g.drawImage(original, 0, 0, resized.getWidth(), resized.getHeight(), null);
        g.dispose();
        ByteArrayOutputStream output = new ByteArrayOutputStream();
        BinaryMeta meta = new BinaryMetaFactory().getMeta(String.format("tcm:%s-%s", publicationId, binaryId));
        String suffix = meta.getPath().substring(meta.getPath().lastIndexOf('.') + 1);
        ImageIO.write(resized, suffix, output);
        binaryBuff = output.toByteArray();
    }
    claims.put(new URI("taf:extensions:claim:resizedimage"), binaryBuff);
}
.Net HTTP Handler:
public void ProcessRequest(HttpContext context) {
    if (context != null) {
        HttpResponse httpResponse = HttpContext.Current.Response;
        ClaimStore claims = AmbientDataContext.CurrentClaimStore;
        if (claims != null) {
            Codemesh.JuggerNET.byteArray javaArray = claims.Get<Codemesh.JuggerNET.byteArray>("taf:extensions:claim:resizedimage");
            byte[] resizedImage = javaArray.ToNative(javaArray);
            if (resizedImage != null && resizedImage.Length > 0) {
                httpResponse.Expires = -1;
                httpResponse.Flush();
                httpResponse.BinaryWrite(resizedImage);
            }
        }
    }
}
Java Filter:
@Override
public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {
    ClaimStore claims = AmbientDataContext.getCurrentClaimStore();
    if (claims != null) {
        Object resizedImage = claims.get(new URI("taf:extensions:claim:resizedimage"));
        if (resizedImage != null) {
            byte[] binaryBuff = (byte[]) resizedImage;
            response.getOutputStream().write(binaryBuff);
        }
    }
}
Hi: In my application I have some images saved in the database, so I created an ImgDownLoad.aspx page to retrieve the images and return them. Since the images in the database may be very large (some of them are more than 20 MB), I generate thumbnails. This is the code:
protected void Page_Load(object sender, EventArgs e)
{
    string id = Request.QueryString["id"];
    string imgType = Request.QueryString["itype"];
    if (imgType == "small")
    {
        // request the thumbnail
        string small_location = getSmallLocationById(id);
        if (!File.Exists(small_location))
        {
            byte[] img_stream = getStreamFromDb(id);
            // here I often get the out-of-memory error, but I am not sure when it will happen
            Image img = Image.FromStream(new MemoryStream(img_stream));
            generateSmallImage(img, small_location);
        }
        Response.TransmitFile(small_location);
    }
    else if (imgType == "large")
    {
        byte[] img_stream = getStreamFromDb(id);
        new MemoryStream(img_stream).WriteTo(Response.OutputStream);
    }
}
Anything wrong?
Also, since I do not know the image format, I cannot set Response.ContentType = "image/xxx". What confuses me most is the out-of-memory error, so I changed the code:
byte[] img_stream = getStreamFromDb(id);
try
{
    // here I often get the out-of-memory error, but I am not sure when it will happen
    Image img = Image.FromStream(new MemoryStream(img_stream));
    generateSmallImage(img, small_location);
}
catch (Exception e)
{
    // the small image cannot be generated; just return the whole image
    new MemoryStream(img_stream).WriteTo(Response.OutputStream);
    return;
}
In this case I avoid the out-of-memory problem, but some large images sometimes cannot be downloaded.
So I wonder whether there is any way to handle such large image streams.
Take a large image for example:
resolution: 12590x4000
size: 26 MB
In fact, I opened a large image (almost 24 MB) with MS Paint and then saved it again, and I found that its size was much smaller than before. So is it possible to resize the image on the server side? Or are there other good ways to handle my problem?
Firstly, you're not disposing of the Image and Stream instances that you create. Given subsequent calls, over time, this is bound to cause issues, particularly with images around the 20 MB mark!
Also, why create the thumbnails on every call? Create once and cache, or flush to disk: either way, serve 'one you made earlier' rather than doing this processing over and over.
I would also recommend that you try to minimise the size (in bytes) of the images. Some might argue that images over 1 MB shouldn't be in the database at all, but rather stored on disk with only the file name in the database. I guess that's open for debate; browse around if interested.
To your comment, I'd urge you not to allow other scopes to take control of resources 'owned' by another; dispose of items in the scope that creates them (obviously some things need to stick around, but what is responsible for them should be clear). Here's a little rework of some of your code:
if (imgType == "small")
{
string small_loaction = getSmallLocationById(id);
if(!File.exists(small_location)
{
byte[] imageBytes = getStreamFromDb(id);
using (var imageStream = new MemoryStream(imageBytes))
{
using (var image = Image.FromStream(imageStream))
{
generateSmallImage(image, small_location)
}
}
}
Response.TransferFile(small_location);
}
else if (imgType=="large")
{
byte[] imageBytes = getStreamFromDb(id);
Response.OutputStream.Write(imageBytes, 0, imageBytes.Length);
}
I tried using Phonon to play the video but could not succeed. Lately I came to know through the Qt forums that even the latest version of Qt does not support Phonon. That's when I started using GStreamer. Any suggestions as to how to connect the GStreamer window with a Qt widget? My aim is to play a video using GStreamer on the Qt widget, so how do I link the GStreamer window and the Qt widget?
I have succeeded in getting the ID of the widget through winId(). Further, with the help of Gregory Pakosz, I have added the two lines of code below to my application:
QApplication::syncX();
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), widget->winId());
However, I am not able to link the Qt widget with the GStreamer video window. This is what my sample code looks like:
#include <QApplication>
#include <QWidget>
#include <gst/gst.h>
#include <gst/interfaces/xoverlay.h>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QWidget w;
    w.show();
    printf("winid=%d\n", (int)w.winId());
    gst_init(NULL, NULL);

    /* create a new bin to hold the elements */
    GstElement *bin = gst_pipeline_new("pipeline");

    /* create a disk reader */
    GstElement *filesrc = gst_element_factory_make("filesrc", "disk_source");
    g_assert(filesrc);
    g_object_set(G_OBJECT(filesrc), "location", "PATH_TO_THE_EXECUTABLE", NULL);

    GstElement *demux = gst_element_factory_make("mpegtsdemux", "demuxer");
    if (!demux) {
        g_print("could not find plugin \"mpegtsdemux\"");
        return -1;
    }
    GstElement *vdecoder = gst_element_factory_make("mpeg2dec", "decode");
    if (!vdecoder) {
        g_print("could not find plugin \"mpeg2dec\"");
        return -1;
    }
    GstElement *videosink = gst_element_factory_make("xvimagesink", "play_video");
    g_assert(videosink);

    /* add objects to the main pipeline */
    gst_bin_add_many(GST_BIN(bin), filesrc, demux, vdecoder, videosink, NULL);
    /* link the elements */
    gst_element_link_many(filesrc, demux, vdecoder, videosink, NULL);

    gst_element_set_state(videosink, GST_STATE_READY);
    QApplication::syncX();
    gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(videosink), w.winId());

    /* start playing */
    gst_element_set_state(bin, GST_STATE_PLAYING);

    return app.exec();
}
Could you explain in more detail the usage of gst_x_overlay_set_xwindow_id() in my context? Could I get any hint as to how I can integrate GStreamer with Qt? Please help me solve this problem.
I just did this same thing using Python. What I had to do was connect to 'sync-message::element' on the bus and listen for a message called 'prepare-xwindow-id' (disregard the name, as it works on all platforms, not just X11), which is sent after the video sink is set up. It sends you the sink inside that message, and that is where you pass it the window ID.
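In C, the equivalent looks roughly like this (a sketch against the GStreamer 0.10 API to match the question's code; window_id is assumed to hold the value of widget->winId()):

/* Sketch: install a sync handler on the pipeline's bus and hand the
 * window id to the video sink as soon as it asks for it. */
static GstBusSyncReply bus_sync_handler(GstBus *bus, GstMessage *message, gpointer user_data)
{
    /* only react to the element message posted by the video sink */
    if (GST_MESSAGE_TYPE(message) != GST_MESSAGE_ELEMENT)
        return GST_BUS_PASS;
    if (!gst_structure_has_name(message->structure, "prepare-xwindow-id"))
        return GST_BUS_PASS;

    gulong window_id = *(gulong *)user_data; /* e.g. from widget->winId() */
    gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(GST_MESSAGE_SRC(message)), window_id);

    gst_message_unref(message);
    return GST_BUS_DROP;
}

/* installation, right after creating the pipeline: */
GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(bin));
gst_bus_set_sync_handler(bus, bus_sync_handler, &window_id);
gst_object_unref(bus);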
The sample code given above will link the GStreamer video window to the Qt widget provided the elements are linked correctly:
filesrc should be linked to the demuxer;
the decoder should be linked to the video sink;
finally, the demuxer should be linked to the decoder at runtime.
// link filesrc to the demuxer
gst_element_link(filesrc, demux);
// link vdecoder to the video sink
gst_element_link_many(vdecoder, videosink, NULL);
/*
The demuxer will be linked to the decoder dynamically.
Its source pad(s) will be created at run time,
by the demuxer, when it detects the amount and nature of the streams.
Connect a callback function which will be executed
when the "pad-added" signal is emitted.
*/
g_signal_connect(demux, "pad-added", G_CALLBACK(on_pad_added), vdecoder);
// callback definition
static void on_pad_added(GstElement* element, GstPad* pad, gpointer data)
{
    GstPad* sinkpad;
    GstElement* decoder = (GstElement*)data;
    GstCaps* caps;
    GstStructure* str;
    const gchar* tex;

    caps = gst_pad_get_caps(pad);
    str = gst_caps_get_structure(caps, 0);
    tex = gst_structure_get_name(str);
    if (g_strrstr(tex, "video"))
    {
        // only link the demuxer's video pad to the decoder
        sinkpad = gst_element_get_static_pad(decoder, "sink");
        gst_pad_link(pad, sinkpad);
        gst_object_unref(sinkpad);
    }
    gst_caps_unref(caps);
}
http://cgit.freedesktop.org/gstreamer/gst-plugins-base/tree/tests/examples/overlay
has a minimal Qt example.
In your code, you should probably set the window ID before you do the state change to ready (I'm not 100% sure this is the problem though).
For playback, you should ideally use the playbin2 element, something like this (completely untested):
GstElement *playbin, *videosink;
gchar *uri;
playbin = gst_element_factory_make ("playbin2", "myplaybin");
videosink = gst_element_factory_make ("xvimagesink", NULL);
g_object_set (playbin, "video-sink", videosink, NULL);
uri = g_filename_to_uri ("/path/to/file", NULL, NULL);
g_object_set (playbin, "uri", uri, NULL);
g_free (uri);
/* NOTE: at this point your main window needs to be realized,
* ie visible on the screen, and you might need to make sure
* that your widget w indeed has a 'native window' (just some
* things to check for if it doesn't work; there should be Qt
* API for this kind of thing if needed) */
QApplication::syncX();
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(videosink), w.winId());
gst_element_set_state (playbin, GST_STATE_PLAYING);
... then check for messages like errors/state changes/tags/EOS on the pipeline/playbin bus.
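For instance, a minimal bus watch might look like this (again an untested sketch, GStreamer 0.10 API, assuming a running GLib/Qt main loop):

/* Sketch: a simple bus watch on the playbin to catch errors and end-of-stream. */
static gboolean on_bus_message(GstBus *bus, GstMessage *msg, gpointer user_data)
{
    switch (GST_MESSAGE_TYPE(msg)) {
    case GST_MESSAGE_ERROR: {
        GError *err = NULL;
        gchar *dbg = NULL;
        gst_message_parse_error(msg, &err, &dbg);
        g_printerr("Error: %s\n", err->message);
        g_error_free(err);
        g_free(dbg);
        break;
    }
    case GST_MESSAGE_EOS:
        g_print("End of stream\n");
        break;
    default:
        break;
    }
    return TRUE; /* keep watching */
}

/* installation: */
GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(playbin));
gst_bus_add_watch(bus, on_bus_message, NULL);
gst_object_unref(bus);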
A project wrapping GStreamer into usable C++/Qt classes, including example code:
http://code.google.com/p/qbtgstreamer/
I don't know about a direct approach, as I am not familiar with gstreamer itself.