I'm trying to scale an image loaded from a file to an arbitrary (sensible) scale.
The problem is that cairo somehow auto-blurs it. How can I fix/remove that? The aim is to see the individual pixels.
Thanks for any reply.
Edit: Some code, triggered on the "draw" event; the widget is a GtkDrawingArea:
static gboolean
cb_event_draw (GtkWidget *obj, cairo_t *cr, gpointer data)
{
    guint width, height;
    width = gtk_widget_get_allocated_width (obj);
    height = gtk_widget_get_allocated_height (obj);
    _priv = ...; // some struct
    // cairo_save (cr);
    cairo_set_antialias (cr, CAIRO_ANTIALIAS_NONE);
    cairo_scale (cr, _priv->zoom, _priv->zoom);
    cairo_set_source_surface (cr, _priv->image, 0., 0.);
    cairo_set_antialias (cr, CAIRO_ANTIALIAS_NONE);
    cairo_pattern_set_filter (cr, CAIRO_FILTER_FAST); // makes no difference whether this is here or not
    // Edit: it actually does matter; it works with this instead:
    // cairo_pattern_set_filter (cairo_get_source (cr), CAIRO_FILTER_FAST);
    cairo_paint (cr);
    // print some markers at defined locations
    return FALSE;
}
I suspect that you need:
cairo_pattern_set_filter(cairo_get_source(cr), CAIRO_FILTER_FAST);
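For what it's worth, a minimal sketch of how the handler then looks (keeping the elided _priv from the question as-is): the filter has to be set on the pattern returned by cairo_get_source(), which only exists after cairo_set_source_surface(). CAIRO_FILTER_NEAREST is the explicit nearest-neighbour filter if you want hard pixel edges:

static gboolean
cb_event_draw (GtkWidget *obj, cairo_t *cr, gpointer data)
{
    _priv = ...; // some struct, as in the question
    cairo_scale (cr, _priv->zoom, _priv->zoom);
    cairo_set_source_surface (cr, _priv->image, 0., 0.);
    /* the source pattern exists now, so the filter actually applies */
    cairo_pattern_set_filter (cairo_get_source (cr), CAIRO_FILTER_NEAREST);
    cairo_paint (cr);
    return FALSE;
}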
I want to display an image received as a short[] of pixels from a server.
The server (C++) writes the image as an unsigned short[] of pixels (12-bit depth).
My Java application gets the image via a CORBA call to this server.
Since Java has no unsigned short, the pixels are stored as a (signed) short[].
This is the code I'm using to obtain a BufferedImage from the array:
private WritableImage loadImage(short[] pixels, int width, int height) {
    int[] intPixels = new int[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        intPixels[i] = (int) pixels[i];
    }
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
    WritableRaster raster = (WritableRaster) image.getData();
    raster.setPixels(0, 0, width, height, intPixels);
    return SwingFXUtils.toFXImage(image, null);
}
And later:
WritableImage orgImage = convertShortArrayToImage2(image.data, image.size_x, image.size_y);
// load it into the widget
Platform.runLater(() -> {
    imgViewer.setImage(orgImage);
});
I've checked that width = 1280 and height = 1024 and that the pixels array is 1280x1024, which matches the raster's width and height.
However I'm getting an array-out-of-bounds error on the line:
raster.setPixels(0, 0, width, height, intPixels);
I have tried ALL the ImageTypes, and all of them produce the same error except for:
TYPE_USHORT_GRAY: which I thought would be the one, but it shows an all-black image
TYPE_BYTE_GRAY: which shows the image in negative (!) and with a lot of grain (?)
TYPE_BYTE_INDEXED: which looks like the above but colorized in a funny way
I have also tried masking off the upper bits when converting from short to int, without any difference:
intPixels[i] = (int) pixels[i] & 0xffff;
So... I'm quite frustrated after days of looking for a solution on the internet. Any help is very welcome.
Edit: The following is an example of the images received, converted to JPG on the server side. Not sure if it is useful, since I think it was made with pixel rescaling (sqrt):
Well, finally I solved it.
Probably not the best solution, but it works and could help someone in the future...
Since the image is 12-bit grayscale, I used a BufferedImage of type TYPE_BYTE_GRAY, but I had to downsample to 8 bits, scaling the array of pixels from 0-4095 to 0-255. (A TYPE_BYTE_GRAY raster has a single band, so setPixels() expects exactly width*height samples; the earlier TYPE_INT_RGB attempt overflowed because an RGB raster expects three samples per pixel.)
I had an issue establishing the upper and lower limits of the scale. I tested with the average of the n highest/lowest values, which worked reasonably well, until someone sent me a link to a Java program translating the zscale algorithm (used in the DS9 tool, for example) for getting the limits of the range of grayscale values to be displayed:
find it here
From that point I modified the previous code and it worked like a charm:
// https://github.com/Caltech-IPAC/firefly/blob/dev/src/firefly/java/edu/caltech/ipac/visualize/plot/Zscale.java
Zscale.ZscaleRetval retval = Zscale.cdl_zscale(pixels, width, height,
        bitsVal, contrastVal, opt_sizeVal, len_stdlineVal, blankValueVal);
double Z1 = retval.getZ1();
double Z2 = retval.getZ2();
try {
    int[] ints = new int[pixels.length];
    for (int i = 0; i < pixels.length; i++) {
        if (pixels[i] < Z1) {
            pixels[i] = (short) Z1;
        } else if (pixels[i] > Z2) {
            pixels[i] = (short) Z2;
        }
        ints[i] = (int) ((pixels[i] - Z1) * 255 / (Z2 - Z1));
    }
    BufferedImage bImg = new BufferedImage(width, height, BufferedImage.TYPE_BYTE_GRAY);
    bImg.getRaster().setPixels(0, 0, width, height, ints);
    return SwingFXUtils.toFXImage(bImg, null);
} catch (Exception ex) {
    System.out.println(ex.getMessage());
}
return null;
I'm trying to understand how to use a QSGSimpleTextureNode, but the Qt documentation is very vague. I want to render text on the scene graph, so basically what I want is to draw a texture with all the glyphs and then set that texture on a QSGSimpleTextureNode. My idea was to create the texture using standard OpenGL code and set the texture data to the data I have just created. I can't find an example showing how to achieve this.
I would use QSGGeometryNode instead of QSGSimpleTextureNode. If I am not wrong, it is not possible to set the texture coordinates of a QSGSimpleTextureNode. You could write a custom QQuickItem for the SpriteText and override updatePaintNode():
QSGNode* SpriteText::updatePaintNode(QSGNode *old, UpdatePaintNodeData *data)
{
    QSGGeometryNode* node = static_cast<QSGGeometryNode*>(old);
    if (!node) {
        node = new QSGGeometryNode();
    }
    QSGGeometry *geometry = NULL;
    if (!old) {
        geometry = new QSGGeometry(QSGGeometry::defaultAttributes_TexturedPoint2D(),
                                   vertexCount);
        node->setFlag(QSGNode::OwnsGeometry);
        node->setMaterial(material); // <-- texture with your glyphs (one way to build it is sketched below)
        node->setFlag(QSGNode::OwnsMaterial);
        geometry->setDrawingMode(GL_TRIANGLES);
        node->setGeometry(geometry);
    } else {
        geometry = node->geometry();
        geometry->allocate(vertexCount);
    }
    if (textChanged) {
        // For every glyph in the text:
        //   calc the x + y position of the glyph in the texture (between 0-1)
        //   create vertices with the calculated texture coordinates and the calculated x coordinate
        geometry->vertexDataAsTexturedPoint2D()[index].set(...);
        ...
        node->markDirty(QSGNode::DirtyGeometry);
    }
    // you could start a timer here which calls the update() method
    return node;
}
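For the material itself, one option (a sketch of my own, assuming you have already rendered your glyph atlas into a QImage, here called atlasImage, which is hypothetical) is a plain QSGTextureMaterial wrapping a scene-graph texture:

// inside the (!old) branch above, instead of the bare 'material' variable;
// 'atlasImage' is a hypothetical QImage holding the rendered glyphs
QSGTexture *texture = window()->createTextureFromImage(atlasImage);
QSGTextureMaterial *material = new QSGTextureMaterial;
material->setTexture(texture);
node->setMaterial(material);
node->setFlag(QSGNode::OwnsMaterial); // the node deletes the material with itself

Note that OwnsMaterial only covers the material: a QSGTextureMaterial does not take ownership of its QSGTexture, so you have to delete the texture yourself (for example in the item's destructor).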
I have a scene with an inverted y-axis. Everything is drawn correctly except QImages.
I use drawImage() as:
QRectF aWorldRect = ...
QRectF anImageRect = QRectF(0, 0, theQImage.width(), theQImage.height());
thePainter->drawImage(aWorldRect, theQImage, anImageRect);
I get undefined graphics outside of (above) where the image should be displayed. This is expected, because the y-axis is inverted. So I hoped that something like this might fix the issue:
QRectF anImageRect = QRectF(0, 0, imgWidth, -imgHeight);
but it behaves just the same. If I do aWorldRect = aWorldRect.normalized() before calling drawImage(), I get the image in the correct rectangle but mirrored, so I added aQImage = aQImage.mirrored(). Now the image is correctly displayed in the correct rectangle.
I consider this a workaround which I don't want to keep. So, can someone tell me what should be done to get the image displayed the right way?
Update
Here is a minimal sample of my problem, ready to compile:
Update 2014-04-09 10:05 EET
Updated the sample code a little to make it really work using the workaround.
#include <QtGui> // with Qt 5, <QtWidgets> is needed as well

const int WIDTH = 640;
const int HEIGHT = 480;

class View : public QGraphicsView
{
protected:
    void drawBackground(QPainter *p, const QRectF &rect)
    {
        QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg"); // or any other
        /* The next three lines make everything display correctly but
           should be considered a workaround */
        /* I ignore the rect that is passed to the function on purpose */
        QRectF imageRect = QRectF(QPointF(0, 0), QPointF(img.width(), img.height()));
        QRectF theSceneRect = sceneRect().normalized();
        p->drawImage(theSceneRect, img.mirrored(), imageRect);
    }
};

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    View w;
    /* I don't want to change the code below */
    w.setScene(new QGraphicsScene(QRectF(QPointF(0, HEIGHT), QPointF(WIDTH, 0))));
    w.scale(1, -1);
    w.scene()->addLine(0, HEIGHT, WIDTH, 0);
    w.showMaximized();
    return a.exec();
}
The approach of reversing the Y coordinate is right, but the implementation was faulty.
QRectF's documentation shows that it takes (x, y, width, height), so giving a negative height makes little sense. Instead, try the other constructor, which takes topLeft and bottomRight:
QRectF anImageRect(QPointF(0.0f, -imgHeight), QPointF(imgWidth, 0.0f));
EDIT:
It seems that only drawings like lines, arcs, etc. come out as intended under the scale(1, -1) transform you set on the view; drawImage() continues to render upside down because of that scale. The simple fix is to reset the painter's transform to identity while drawing the image and restore it afterwards. Here's the updated code:
void drawBackground(QPainter *p, const QRectF &rect)
{
    QImage img = QImage("/usr/share/backgrounds/images/stone_bird.jpg");
    // back up the current transform (which has the earlier scale of (1, -1))
    const QTransform oldTransform = p->transform();
    // set the transform back to identity to make the Y axis go from top to bottom
    p->setTransform(QTransform());
    // draw
    QRectF theSceneRect = sceneRect().normalized();
    p->drawImage(theSceneRect, img);
    // revert to the earlier transform
    p->setTransform(oldTransform);
}
Updated on 2014-04-14 14:35 EET
I could finally solve the problem reliably by replacing the two lines
QRectF theSceneRect = sceneRect().normalized();
p->drawImage(theSceneRect, img.mirrored(), imageRect);
from my question with:
QRectF theSceneRect = sceneRect(); // Not normalized. It is no longer a workaround :)
qreal x = theSceneRect.x();
qreal y = theSceneRect.y();
qreal w = theSceneRect.width();
qreal h = theSceneRect.height();
qreal sx = imageRect.x();
qreal sy = imageRect.y();
qreal sw = imageRect.width();
qreal sh = imageRect.height();
p->save(); // paired with the restore() below
p->translate(x, y);
p->scale(w / sw, h / sh);
p->setBackgroundMode(Qt::TransparentMode);
p->setRenderHint(QPainter::Antialiasing, p->renderHints() &
                 QPainter::SmoothPixmapTransform);
QBrush brush(img);
p->setBrush(brush);
p->setPen(Qt::NoPen);
p->setBrushOrigin(QPointF(-sx, -sy));
p->drawRect(QRectF(0, 0, sw, sh));
p->restore();
This is inspired by the implementation of QPainter::drawImage(), which is not reliable in such cases because of the many if statements handling rectangles with negative width or height values.
It would have been better to put the solution in a separate function, but I kept it this way to stay closer to the code in my question.
We know that QPainter is used for drawing on an image in Qt. Recently I used the drawLine() function to draw whatever a user is scribbling. I did this by passing the lastPoint and the currentPoint from mouseMoveEvent() to a custom function which calls drawLine(). I passed the arguments to that custom function as shown below:
void myPaint::mouseMoveEvent(QMouseEvent *event) {
    qDebug() << event->pos();
    if ((event->buttons() & Qt::LeftButton) && scribbling) {
        pixelList.append(event->pos());
        drawLineTo(event->pos());
        lastPoint = event->pos();
    }
}
Now, with the help of qDebug(), I noticed that some pixels are skipped between mouse events, yet the drawing itself is precise. I looked at the Qt painting source, where I saw that drawLine() calls drawLines(), which uses QPainterPath to get the shape drawn on the image.
My question is: is there any way to track these "missed" pixels, or any approach to find all the pixels that have actually been drawn?
Thanks!
void myPaint::drawLineTo(const QPoint &endPoint) {
    QPainter painter(image); // image is initialized in the constructor of myPaint
    painter.setRenderHint(QPainter::Antialiasing);
    painter.setPen(QPen(Qt::blue, myPenWidth, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin));
    painter.drawLine(lastPoint, endPoint);
    modified = true;
    lastPoint = endPoint; // at the mousePressEvent, event->pos() is stored as lastPoint
    update();
}
For a start, don't draw in a mouse event handler. Handling a mouse event should be done as quickly as possible. Also, it is not a good idea to look at the Qt source; it can be confusing. Rather, assume that what Qt gives you works, and first try to answer "What am I doing wrong?". As I said, drawing in a mouse event is definitely wrong.
Your description is really subjective; an image of your output would be better. Are you trying to emulate a pen (like in Windows Paint)? In that case, does the mouse button have to be down? Is that the purpose of your variable scribbling?
There is more. According to the documentation, QMouseEvent::buttons() reports the buttons held down while the mouse moves, so the check
if ((event->buttons() & Qt::LeftButton)
is redundant once you track the pressed state yourself in mousePressEvent()/mouseReleaseEvent().
Let's assume you want to draw the path of your mouse while the left button is pressed. Then you would use something like:
void myPaint::mousePressEvent(QMouseEvent *event) {
    scribbling = true;
    pixelList.clear();
}

void myPaint::mouseReleaseEvent(QMouseEvent *event) {
    scribbling = false;
}

void myPaint::mouseMoveEvent(QMouseEvent *event) {
    if (scribbling) {
        pixelList.append(event->pos());
    }
}

void myPaint::paintEvent(QPaintEvent *event) {
    QPainter painter(this);
    // some painting here
    if (scribbling) {
        painter.setRenderHint(QPainter::Antialiasing);
        painter.setPen(QPen(Qt::blue, myPenWidth, Qt::SolidLine, Qt::RoundCap, Qt::RoundJoin));
        // here draw your path (see the example below):
        // for example as lines between consecutive points,
        // or just the points themselves if they are close to each other
    }
    // other painting here
}
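For example, if pixelList is a QList<QPoint> (its type never appears in the question, so this is an assumption), the scribbled path could be drawn in the marked spot as a polyline:

// inside the 'if (scribbling)' branch
painter.drawPolyline(QPolygon(pixelList.toVector()));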
If after all of this you still don't have a good rendering, try using float precision (slower), i.e. QMouseEvent::posF() instead of QMouseEvent::pos().
EDIT:
"I want to know whether there is any way to calculate all the sub-pixels between any two pixels that we send as arguments to drawLine"
Yes, there is. I don't know why you would need such a thing, but it is really simple. A line can be characterized by the equation
y = ax + b
Both endpoints of the line, p0 = (x0, y0) and p1 = (x1, y1), satisfy this equation, so you can easily find a and b. Now all you need to do is step from x0 to x1 by whatever increment you want (say 1 pixel) and compute the corresponding y value, saving the point (x, y) each time. (If the segment is closer to vertical than horizontal, step along y instead, since a gets arbitrarily large.)
Then go over all of the points saved in pixelList and repeat this process for each pair of consecutive points.
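A minimal sketch of that interpolation (my own illustration, not code from the question; it steps along the dominant axis so steep and vertical segments work too):

#include <QPoint>
#include <QVector>
#include <QtGlobal>

// Return every integer point on the segment p0..p1, stepping along
// the axis with the larger extent so no pixel column/row is skipped.
QVector<QPoint> pointsOnLine(const QPoint &p0, const QPoint &p1)
{
    QVector<QPoint> points;
    const int dx = p1.x() - p0.x();
    const int dy = p1.y() - p0.y();
    const int steps = qMax(qAbs(dx), qAbs(dy));
    if (steps == 0)
        return points << p0; // degenerate segment: a single point
    for (int i = 0; i <= steps; ++i) {
        const qreal t = qreal(i) / steps; // parameter along the segment
        points << QPoint(qRound(p0.x() + t * dx), qRound(p0.y() + t * dy));
    }
    return points;
}

Calling this for every pair of consecutive points in pixelList gives an approximation of all the pixels the pen passed over (antialiasing aside, which also touches neighbouring pixels with partial coverage).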
So I'm trying to set up a mask in cairo, but can't get it to make any difference. Below I have a simple program based on the one here: http://snipplr.com/view/22584/cairo-hello-world-examble/.
I'm setting a completely transparent mask, so nothing should get drawn, but it doesn't seem to have any effect: the text still gets drawn. My code is below. What am I missing?
Thanks!
#include <cairo.h>

int main(int argc, char* argv[])
{
    cairo_surface_t* surface;
    cairo_t* cr;
    surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, 200, 40);
    cr = cairo_create (surface);

    //****
    // Here I create a pattern with an alpha of zero and set it to be cairo's mask.
    // According to http://www.cairographics.org/manual/cairo-context.html#cairo-mask
    // "Opaque areas of pattern are painted with the source, transparent areas are not painted."
    // Shouldn't this make it so nothing gets drawn?
    //****
    cairo_pattern_t* nothing = cairo_pattern_create_rgba (0, 0, 0, 0);
    cairo_mask (cr, nothing);

    cairo_text_extents_t te;
    cairo_set_source_rgb (cr, 0.0, 0.0, 0.0);
    cairo_select_font_face (cr, "Georgia",
                            CAIRO_FONT_SLANT_NORMAL, CAIRO_FONT_WEIGHT_BOLD);
    cairo_set_font_size (cr, 20.0);
    cairo_text_extents (cr, "hello cairo!", &te);
    cairo_move_to (cr, 20, 20);
    cairo_show_text (cr, "hello cairo!");
    cairo_fill (cr);

    // An image gets drawn that says "hello cairo!" in big letters
    cairo_surface_write_to_png (surface, "hello_cairo.png");
    return 0;
}
OK, I figured it out. I was expecting cairo_mask() to behave like cairo_clip() (cairo_clip() establishes a clip path that clips every element drawn afterwards).
cairo_mask() is explained very simply: "Paint current source using the alpha channel of the mask pattern." And that's exactly what it does: it fills the whole surface with the current source, right at that moment, blended by whatever the mask's alpha is at each pixel. It is a one-shot paint, not a persistent state, so masking with a fully transparent pattern before anything is drawn simply paints nothing, and the text drawn afterwards is unaffected.
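To illustrate the difference (my own sketch, not from the snippet above): cairo_mask() paints once through the mask's alpha, while cairo_clip() restricts everything drawn afterwards:

/* one-shot: paints the current source through the pattern's alpha, right now */
cairo_set_source_rgb (cr, 0.0, 0.0, 0.0);
cairo_pattern_t *half = cairo_pattern_create_rgba (0, 0, 0, 0.5);
cairo_mask (cr, half); /* covers the surface with 50%-alpha black, once */
cairo_pattern_destroy (half);

/* persistent: nothing drawn after this escapes the rectangle */
cairo_rectangle (cr, 0, 0, 100, 40);
cairo_clip (cr);
cairo_move_to (cr, 20, 20);
cairo_show_text (cr, "clipped"); /* only the part inside the rect appears */

So to suppress the text in the program above, the clip (or a cairo_mask() call used to paint the text itself) has to be in effect when the text is drawn, not before.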