I need to create an NPAPI browser plugin to show a picture.
So far I have created the plugin and successfully drawn a picture in it with GDK: I catch the event in NPP_HandleEvent() and get a GdkDrawable with gdk_pixmap_lookup(), as in the following code:
int16_t NPP_HandleEvent(NPP instance, void* event) {
    XEvent* nativeEvent = (XEvent*)event;   // on X11 the browser passes an XEvent
    XGraphicsExposeEvent* expose = &nativeEvent->xgraphicsexpose;
    instanceData->window.window = (void*)(expose->drawable);
    GdkNativeWindow nativeWinId = (XID)(instanceData->window.window);
    GdkDrawable* gdkWindow = GDK_DRAWABLE(gdk_pixmap_lookup(nativeWinId));
    NPSetWindowCallbackStruct* ws_info = (NPSetWindowCallbackStruct*)(instanceData->window.ws_info);
    GdkVisual* gdkVisual = gdkx_visual_get(ws_info->visual ? XVisualIDFromVisual(ws_info->visual) : 0);
    GdkColormap* gdkColormap = gdk_x11_colormap_foreign_new(gdkVisual, ws_info->colormap);
    gdk_drawable_set_colormap(gdkWindow, gdkColormap);
    // Alternative: use the GdkWindow to draw the picture.
    drawWindow(instanceData, gdkWindow);
    return 1;   // event handled
}
So now I'm wondering how to draw a picture in this plugin using Qt methods. Is there any way to implement this using only Qt and X11-related calls?
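For reference, here is a minimal, untested sketch of one possible Qt 4 approach, assuming the X11-specific QPixmap::fromX11Pixmap() API is available and that the drawable from the XGraphicsExposeEvent is an X pixmap, as in the windowless case above (the helper name and image path are just placeholders):

#include <QPixmap>
#include <QPainter>
#include <QImage>
#include <QRect>
#include <QX11Info>
#include <X11/Xlib.h>

// Hypothetical helper: wrap the browser's X drawable in a QPixmap and
// paint onto it with QPainter (Qt 4 on X11 only).
static void drawWithQt(Drawable drawable, int width, int height)
{
    // ExplicitlyShared: painting modifies the underlying X pixmap directly.
    QPixmap surface = QPixmap::fromX11Pixmap(drawable, QPixmap::ExplicitlyShared);

    QPainter painter(&surface);
    QImage picture("/path/to/picture.png");   // placeholder image path
    painter.drawImage(QRect(0, 0, width, height), picture);
    painter.end();

    // Flush so the paint commands reach the X server before the browser copies the pixmap.
    XSync(QX11Info::display(), False);
}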
I need to add a bunch of pins on a MapsUI map control that uses OpenStreetMap tiles.
The problem is I can't find anything mentioning pins in their documentation, so my question is:
Are there pins available under a different name, or do I have to draw the pins myself (using Geometry and Points as in some examples I found)? If so, how do I keep them the same size when zooming the map?
Maybe someone can point out where I should look in their documentation, in case I'm blind and missed it.
Thanks!
Your Mapsui map view (Mapsui.UI.Forms.MapView) has a Pins property. You can add pins there.
You can access the MapView from the code-behind (*.xaml.cs) of your page.
To do that, first name your MapView in the XAML page:
<ContentPage
xmlns:mapsui="clr-namespace:Mapsui.UI.Forms;assembly=Mapsui.UI.Forms"
xmlns="http://xamarin.com/schemas/2014/forms"
xmlns:x="http://schemas.microsoft.com/winfx/2009/xaml"
x:Class="YourProject.Pages.YourMap">
<ContentPage.Content>
<mapsui:MapView
x:Name="selectMapControl"/>
</ContentPage.Content>
</ContentPage>
Then access it from the C# code-behind of that page:
using Mapsui.UI.Forms;
protected override void OnAppearing()
{
    base.OnAppearing();

    // The Pin constructor takes the MapView it will be shown on.
    selectMapControl.Pins.Add(new Pin(selectMapControl)
    {
        Position = new Position(0, 0),
        Type = PinType.Pin,
        Label = "Zero point",
        Address = "Zero point",
    });
}
I hope this simple example helps.
The idea is that you use your own bitmaps to draw as 'pins'.
Mapsui has features. A feature has a geometry, which can be a point, a linestring, or a polygon (among others). A feature is drawn with some kind of style. What you need to do is create features with a point geometry and use a SymbolStyle with a bitmap as the symbol. You need to register your bitmap and use the bitmapId in the symbol. See the sample here:
https://github.com/Mapsui/Mapsui/blob/master/Samples/Mapsui.Samples.Common/Maps/PointsSample.cs
You can use Xamarin.Forms.GoogleMaps, as it gives you the ability to add multiple pins of your choice.
Here is some sample code:
var position = new Position(Latitude, Longitude);
var pin = new Pin
{
Type = PinType.Place,
Position = position,
Icon = BitmapDescriptorFactory.FromBundle("pin.png"),
Label = "custom pin",
Address = "custom detail info",
};
MyMap.Pins.Add(pin);
The following works on version 3.02; I've not checked it on any other version of Mapsui.
First make sure your pin bitmap is an embedded resource. You can then load the bitmap stream like this:
// requires: using System.Reflection;
var assembly = typeof(YourClass).GetTypeInfo().Assembly;
var image = assembly.GetManifestResourceStream("YourSolution.YourProject.AFolder.image.png");
If image is null, the image was not found: it's likely not an embedded resource, or you got the address/name wrong.
var ID = BitmapRegistry.Instance.Register(image);
Register() also stores the bitmap for Mapsui to use later. I think if it's your first registered image, the ID will be 0.
Then you can create a memory layer as follows:
MemoryLayer PointLayer = new MemoryLayer
{
    Name = "Points",
    IsMapInfoLayer = true,
    DataSource = new MemoryProvider(GetListOfPoints()),
    Style = new SymbolStyle { BitmapId = ID, SymbolScale = 0.50, SymbolOffset = new Offset(0, bitmapHeight * 0.5) }
};
The DataSource can be generated as follows (I'm just adding one feature, but you can add as many as you like):
private IEnumerable<IFeature> GetListOfPoints()
{
    var list = new List<IFeature>();
    var feature = new Feature();
    feature.Geometry = new Point(-226787, 7155483);   // map (world) coordinates, not lat/lon
    feature["name"] = "MyPoint";
    list.Add(feature);
    return list;   // List<IFeature> is already an IEnumerable<IFeature>
}
Then add the new MemoryLayer to your Map as follows:
MapControl.Map?.Layers.Add(PointLayer);
I want to use QAudioRecorder to record audio, save it as a file, and display the file path to the user. I tried the example from Qt, but there's no feedback on the buffer value when I test it on Android. It works on my desktop, though. Below is part of my code:
AudioRecord::AudioRecord(QWidget *parent)
{
    audioRecorder = new QAudioRecorder(this);
    probe = new QAudioProbe(this);   // give the probe a parent so it is cleaned up
    connect(probe, SIGNAL(audioBufferProbed(QAudioBuffer)),
            this, SLOT(processBuffer(QAudioBuffer)));
    probe->setSource(audioRecorder);
}
void AudioRecord::processBuffer(const QAudioBuffer& buffer)
{
qDebug()<<"Testing Successful";
}
The processBuffer function does not seem to be called. What should I do to get the buffer values? Is there any other way around this?
Thanks!
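One diagnostic worth trying, sketched below and untested on Android (the output file name is just an example): QAudioProbe::setSource() returns false when the backend does not support monitoring, and the probe only sees data while the recorder is actually running, so both are worth checking.

#include <QDebug>
#include <QUrl>

// setSource() returns false if this backend cannot be monitored;
// in that case audioBufferProbed() will never fire.
if (!probe->setSource(audioRecorder))
    qWarning() << "QAudioProbe: probing not supported by this backend";

// Buffers are only delivered once recording has started.
audioRecorder->setOutputLocation(QUrl::fromLocalFile("test.wav"));
audioRecorder->record();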
In my cocos3d application, I have a UIViewController as the root view controller, and from there I launch cocos3d scenes when the user clicks an option.
Here is my code to launch a scene from that view controller. The issue is: if I open the UIViewController and then move to the scene by clicking an option more than twice, I get the error "OpenGL error 0x0506 in -[EAGLView swapBuffers]
[GL ERROR] Unknown GL error (1286), before drawing HomeOwners3DScene Unnamed:1691. To investigate further, set the preprocessor macro GL_ERROR_TRACING_ENABLED=1 in your project build settings."
(This issue doesn't occur the first time I view the scene; it displays properly. But if I go back and open the scene twice or more, it appears as a blank scene, with the error below shown in Xcode.)
The code below moves from my view controller to further scenes:
-(void) launchMainScene :(UIViewController *) uiViewCrtller
{
[uiViewCrtller.view removeFromSuperview];
// This must be the first thing we do and must be done before establishing view controller.
if( ! [CCDirector setDirectorType: kCCDirectorTypeDisplayLink] )
[CCDirector setDirectorType: kCCDirectorTypeDefault];
// Default texture format for PNG/BMP/TIFF/JPEG/GIF images.
// It can be RGBA8888, RGBA4444, RGB5_A1, RGB565. You can change anytime.
CCTexture2D.defaultAlphaPixelFormat = kCCTexture2DPixelFormat_RGBA8888;
// Create the view controller for the 3D view.
viewController = [CC3DeviceCameraOverlayUIViewController new];
//viewController.supportedInterfaceOrientations = UIInterfaceOrientationLandscapeRight | UIInterfaceOrientationMaskLandscapeLeft;
// Create the CCDirector, set the frame rate, and attach the view.
CCDirector *director = CCDirector.sharedDirector;
//director.runLoopCommon = YES; // Improves display link integration with UIKit
[director setDeviceOrientation:kCCDeviceOrientationPortrait];
director.animationInterval = (1.0f / kAnimationFrameRate);
director.displayFPS = YES;
director.openGLView = viewController.view;
// Enables High Res mode on Retina Displays and maintains low res on all other devices
// This must be done after the GL view is assigned to the director!
[director enableRetinaDisplay: YES];
[director setDepthTest:NO];
// Create the window, make the controller (and its view) the root of the window, and present the window
[window addSubview: viewController.view];
CCDirector.sharedDirector.displayFPS = NO;
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];
// Set to YES for Augmented Reality 3D overlay on device camera.
// This must be done after the window is made visible!
viewController.isOverlayingDeviceCamera = NO;
// Create the customized CC3Layer that supports 3D rendering and schedule it for automatic updates.
CC3Layer* cc3Layer = [HomeOwners3DLayer node];
[cc3Layer scheduleUpdate];
// Create the customized 3D scene and attach it to the layer.
// Could also just create this inside the customer layer.
cc3Layer.cc3Scene = [HomeOwners3DScene scene];
// Assign to a generic variable so we can uncomment options below to play with the capabilities
CC3ControllableLayer* mainLayer = cc3Layer;
CCScene *scene = [CCScene node];
[scene addChild: mainLayer];
[[CCDirector sharedDirector] runWithScene: scene];
}
The following error is thrown in the Xcode console when I launch the scene, go back to the view controller, and launch the scene again multiple times; the scene then shows a blank screen:
OpenGL error 0x0506 in -[EAGLView swapBuffers]
[GL ERROR] Unknown GL error (1286), before drawing HomeOwners3DScene Unnamed:1691. To investigate further, set the preprocessor macro GL_ERROR_TRACING_ENABLED=1 in your project build settings.
Please help to solve my problem.
Thank you.
I need to play a sound when a button is clicked. I have this:
Phonon::MediaObject *clickObject = new Phonon::MediaObject(this);
clickObject->setCurrentSource(Phonon::MediaSource("Click/sound.wav"));
Phonon::AudioOutput *clickOutput = new Phonon::AudioOutput(Phonon::MusicCategory, this);
Phonon::createPath(clickObject, clickOutput);
and
void MainWindow::on_pushButton_clicked()
{
clickObject->play();
}
but no sound is played.
Where am I going wrong?
Thanks.
EDIT: It works now; it was the wrong path.
Probably the file path "Click/sound.wav" doesn't point where you think it does.
Try this before calling setCurrentSource():
bool exists = QFile::exists("Click/sound.wav");
If the Click directory is supposed to be in the same directory as your exe, create the path like this:
QString filePath = QCoreApplication::applicationDirPath() + "/Click/sound.wav";
clickObject->setCurrentSource(Phonon::MediaSource(filePath));
I would also suggest using the Qt resource system. Then you would point to the sound file like this:
clickObject->setCurrentSource(Phonon::MediaSource(":/Click/sound.wav"));
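For that to work, the file has to be listed in a Qt resource file compiled into the binary; a minimal example, assuming the .qrc file is added to the .pro file with RESOURCES += resources.qrc:

<!-- resources.qrc -->
<RCC>
    <qresource prefix="/">
        <file>Click/sound.wav</file>
    </qresource>
</RCC>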
You should at least connect the stateChanged(Phonon::State, Phonon::State) signal from your MediaObject to a custom slot to detect errors: if the state changes to Phonon::ErrorState, the reason for the error may be accessible through Phonon::MediaObject::errorString().
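A minimal sketch of that hookup (the slot name is just an example):

// In the class declaration:
// private slots:
//     void onStateChanged(Phonon::State newState, Phonon::State oldState);

connect(clickObject, SIGNAL(stateChanged(Phonon::State, Phonon::State)),
        this, SLOT(onStateChanged(Phonon::State, Phonon::State)));

void MainWindow::onStateChanged(Phonon::State newState, Phonon::State /*oldState*/)
{
    if (newState == Phonon::ErrorState)
        qDebug() << "Phonon error:" << clickObject->errorString();
}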
I tried using Phonon to play the video but could not succeed. I later learned through the Qt forums that even the latest version of Qt does not support Phonon. That's when I started using GStreamer. Any suggestions on how to connect the GStreamer window with a Qt widget? My aim is to play a video using GStreamer on a Qt widget. So how do I link the GStreamer video window and the Qt widget?
I have successfully obtained the ID of the widget through winId().
Further, with the help of Gregory Pakosz, I added the two lines below to my application:
QApplication::syncX();
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), widget->winId());
However, I am not able to link the Qt widget with the GStreamer video window.
This is what my sample code looks like:
int main(int argc, char *argv[])
{
    /* needs <QApplication>, <QWidget>, <gst/gst.h>, <gst/interfaces/xoverlay.h> */
    QApplication app(argc, argv);
    GstElement *bin, *filesrc, *demux, *vdecoder, *videosink;

    QWidget w;
    w.show();   /* the window must be realized before its ID is handed to the sink */
    printf("winid=%lu\n", (unsigned long)w.winId());

    gst_init(NULL, NULL);

    /* create a new bin to hold the elements */
    bin = gst_pipeline_new("pipeline");

    /* create a disk reader */
    filesrc = gst_element_factory_make("filesrc", "disk_source");
    g_assert(filesrc);
    g_object_set(G_OBJECT(filesrc), "location", "PATH_TO_THE_MEDIA_FILE", NULL);

    demux = gst_element_factory_make("mpegtsdemux", "demuxer");
    if (!demux) {
        g_print("could not find plugin \"mpegtsdemux\"");
        return -1;
    }

    vdecoder = gst_element_factory_make("mpeg2dec", "decode");
    if (!vdecoder) {
        g_print("could not find plugin \"mpeg2dec\"");
        return -1;
    }

    videosink = gst_element_factory_make("xvimagesink", "play_video");
    g_assert(videosink);

    /* add objects to the main pipeline */
    gst_bin_add_many(GST_BIN(bin), filesrc, demux, vdecoder, videosink, NULL);

    /* link the elements */
    gst_element_link_many(filesrc, demux, vdecoder, videosink, NULL);

    gst_element_set_state(videosink, GST_STATE_READY);
    QApplication::syncX();
    gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(videosink), w.winId());

    /* start playing */
    gst_element_set_state(bin, GST_STATE_PLAYING);

    return app.exec();
}
Could you explain in more detail how gst_x_overlay_set_xwindow_id() should be used in my context?
Could I get any hint on how to integrate GStreamer with Qt?
Please help me solve this problem.
I just did this same thing using Python. What I had to do was connect to 'sync-message::element' on the bus and listen for a message named 'prepare-xwindow-id' (disregard the name; it works on all platforms, not just X11), which is sent after the video sink is set up. The sink is the source of that message, and that is where you pass the window ID.
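In C, the same approach looks roughly like this (an untested sketch against the GStreamer 0.10 API; bin is the pipeline and widget is the QWidget from the question):

#include <gst/gst.h>
#include <gst/interfaces/xoverlay.h>

static GstBusSyncReply bus_sync_handler(GstBus *bus, GstMessage *message, gpointer user_data)
{
    /* only intercept the element message the video sink posts right
     * before it creates its video window */
    if (GST_MESSAGE_TYPE(message) != GST_MESSAGE_ELEMENT ||
        !gst_structure_has_name(gst_message_get_structure(message), "prepare-xwindow-id"))
        return GST_BUS_PASS;

    /* the message source is the video sink: hand it the widget's window ID */
    gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(GST_MESSAGE_SRC(message)),
                                 (gulong)user_data);
    gst_message_unref(message);
    return GST_BUS_DROP;
}

/* after building the pipeline, before setting it to PLAYING: */
GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(bin));
gst_bus_set_sync_handler(bus, bus_sync_handler, (gpointer)widget->winId());
gst_object_unref(bus);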
The sample code given above will link the GStreamer video window to the QWidget, provided the elements are linked correctly:
filesrc should be linked to the demuxer;
the decoder should be linked to the video sink;
finally, the demuxer should be linked to the decoder at runtime.
// link filesrc to demuxer
gst_element_link(filesrc, demux);

// link vdecoder to videosink
gst_element_link_many(vdecoder, videosink, NULL);

/*
The demuxer will be linked to the decoder dynamically.
Its source pad(s) will be created at run time,
when the demuxer detects the number and nature of the streams.
Connect a callback function that will be executed
when the "pad-added" signal is emitted.
*/
g_signal_connect(demux, "pad-added", G_CALLBACK(on_pad_added), vdecoder);
// callback definition
static void on_pad_added(GstElement* element, GstPad* pad, gpointer data)
{
    GstPad* sinkpad;
    GstElement* decoder = (GstElement*)data;
    GstCaps* caps;
    GstStructure* str;
    const gchar* name;

    caps = gst_pad_get_caps(pad);
    str = gst_caps_get_structure(caps, 0);
    name = gst_structure_get_name(str);

    // only link the video stream to the decoder
    if (g_strrstr(name, "video"))
    {
        sinkpad = gst_element_get_static_pad(decoder, "sink");
        gst_pad_link(pad, sinkpad);
        gst_object_unref(sinkpad);
    }

    gst_caps_unref(caps);   // gst_pad_get_caps() returns a new reference
}
http://cgit.freedesktop.org/gstreamer/gst-plugins-base/tree/tests/examples/overlay
has a minimal Qt example.
In your code, you should probably set the window ID before you do the state change to READY (I'm not 100% sure this is the problem, though).
For playback, you should ideally use the playbin2 element, something like this (completely untested):
GstElement *playbin, *videosink;
gchar *uri;
playbin = gst_element_factory_make ("playbin2", "myplaybin");
videosink = gst_element_factory_make ("xvimagesink", NULL);
g_object_set (playbin, "video-sink", videosink, NULL);
uri = g_filename_to_uri ("/path/to/file", NULL, NULL);
g_object_set (playbin, "uri", uri, NULL);
g_free (uri);
/* NOTE: at this point your main window needs to be realized,
* ie visible on the screen, and you might need to make sure
* that your widget w indeed has a 'native window' (just some
* things to check for if it doesn't work; there should be Qt
* API for this kind of thing if needed) */
QApplication::syncX();
gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(videosink), w.winId());
gst_element_set_state (playbin, GST_STATE_PLAYING);
... then check for messages like errors/state changes/tags/EOS on the pipeline/playbin bus.
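That last step could look something like this (again an untested sketch against the GStreamer 0.10 API; it needs a running GLib main loop, which Qt's default event loop provides on Linux):

static gboolean on_bus_message(GstBus *bus, GstMessage *msg, gpointer user_data)
{
    switch (GST_MESSAGE_TYPE(msg)) {
    case GST_MESSAGE_ERROR: {
        GError *err = NULL;
        gchar *debug = NULL;
        gst_message_parse_error(msg, &err, &debug);
        g_printerr("GStreamer error: %s (%s)\n",
                   err->message, debug ? debug : "no details");
        g_error_free(err);
        g_free(debug);
        break;
    }
    case GST_MESSAGE_EOS:
        g_print("end of stream\n");
        break;
    default:
        break;
    }
    return TRUE;   /* keep the watch installed */
}

/* after creating the playbin: */
GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(playbin));
gst_bus_add_watch(bus, on_bus_message, NULL);
gst_object_unref(bus);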
A project wrapping GStreamer into usable C++/Qt classes, including example code:
http://code.google.com/p/qbtgstreamer/
I don't know about a direct approach, as I am not familiar with GStreamer itself.