I am instantiating a QImage from an image file like below and subsequently rendering it on a QWidget.
QImage ( const QString & fileName, const char * format = 0 )
For most images, everything works fine, but a few images load rotated by 90 degrees.
It seems this happens only with pictures I took earlier on my phone in portrait mode; those taken in landscape are fine.
You might need to use a library like libexif to determine the photo orientation and then rotate the QImage accordingly.
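For example, a minimal sketch of that approach, assuming libexif is available; the loadWithExifRotation helper name is illustrative, and only the plain rotation values of the EXIF Orientation tag (3, 6, 8) are handled:
#include <QImage>
#include <QString>
#include <QTransform>

#include <libexif/exif-data.h>
#include <libexif/exif-utils.h>

// Illustrative helper: read the EXIF Orientation tag with libexif and
// rotate the loaded QImage so it displays upright.
QImage loadWithExifRotation(const QString &path)
{
    QImage img(path);

    ExifData *ed = exif_data_new_from_file(path.toLocal8Bit().constData());
    if (!ed)
        return img;  // no EXIF data: return the image as loaded

    int angle = 0;
    ExifEntry *entry = exif_content_get_entry(ed->ifd[EXIF_IFD_0],
                                              EXIF_TAG_ORIENTATION);
    if (entry) {
        ExifShort orientation = exif_get_short(entry->data,
                                               exif_data_get_byte_order(ed));
        switch (orientation) {
        case 3: angle = 180; break;  // upside down
        case 6: angle = 90;  break;  // needs 90 degrees clockwise rotation
        case 8: angle = 270; break;  // needs 90 degrees counter-clockwise rotation
        }
    }
    exif_data_unref(ed);

    return angle ? img.transformed(QTransform().rotate(angle)) : img;
}
If you are on Qt 5.5 or newer, the answers below avoid the extra dependency entirely.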
Since Qt 5.5, set Image.autoTransform: true, because the default is false! See the Qt Image QML Type reference.
With the C++ class QImageReader:
QImageReader imgReader( imagePath );
imgReader.setAutoTransform( true );
QImage img = imgReader.read();
https://discussions.apple.com/thread/2541504?start=0&tstart=0
It sounds like a pretty common issue: instead of the pixels actually being reordered, a flag or tag is added to the image that says how it should be rotated. For the image you are trying to render, you could look at the format you are using, check whether such an orientation flag is present, and have Qt do the rotation (see the sketch after the quoted explanation below).
Sounds like cppguy knows of a library that can let you check these flags.
EDIT: Found a better description of it:
johninsj - Re: iPhone 4 Photo's & Video Rotating Sideways In Email
Nov 2, 2010 1:45 PM (in response to VibrantRedGT)
Apple sets the JPEG meta tag for orientation when you shoot a photo, so if you hold the iPhone upside down, or sideways, etc., the image (which is shot upside down or sideways, since the camera is upside down/sideways) knows it needs to flip/rotate the image when you look at it.
Not all software honors the rotation settings. Gimp (which runs on everything, and is free) does.
You can rotate images and save them, or learn to shoot photos with the iPhone in the correct orientation for non-rotated images. That would be with the home button to the right as you look at the screen.
Hope that helps.
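Coming back to Qt: if you are on Qt 5.5 or newer, you do not need an external library just to check that flag, because QImageReader can report the EXIF transformation itself. A minimal sketch, assuming Qt >= 5.5 (the loadOriented helper name is illustrative, and mirrored orientations are left out):
#include <QImage>
#include <QImageIOHandler>
#include <QImageReader>
#include <QString>
#include <QTransform>

// Illustrative helper: ask QImageReader for the stored orientation and
// rotate the decoded image to match.
QImage loadOriented(const QString &imagePath)
{
    QImageReader reader(imagePath);
    QImage img = reader.read();

    const QImageIOHandler::Transformations t = reader.transformation();
    if (t == QImageIOHandler::TransformationRotate90)
        img = img.transformed(QTransform().rotate(90));
    else if (t == QImageIOHandler::TransformationRotate180)
        img = img.transformed(QTransform().rotate(180));
    else if (t == QImageIOHandler::TransformationRotate270)
        img = img.transformed(QTransform().rotate(270));

    return img;
}
In practice, simply calling reader.setAutoTransform(true) before read(), as in the answer above, lets Qt apply the whole transform for you; reading the flag yourself is only useful if you want to decide when to honour it.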
Related
I'm drawing a bitmap with SkiaSharp in Xamarin Forms. The antialiasing is poor and I would like to know how I can improve it.
The source bitmap has antialiasing, shown on the left of the attached image. The right half shows part of a screen capture from an Android device.
On the left, notice that the antialiasing is regular: a bit of dark feathering going upward and a bit of light feathering going downward, all uniform. On the right we still see both kinds of feathering, but it alternates between double-wide pixels and single-wide pixels.
When drawing the bitmap
canvas.DrawBitmap(bmpSrc, rectSrcRight, rectDest, paint);
I've tried setting the paint's IsAntialias property to both true and false. No significant difference.
In this example, the source image is 61 pixels tall and the Android image is 102.
Am I expecting too much ?
You could try the following code:
SKPaint paint = new SKPaint
{
    IsAntialias = true,
    // FilterQuality controls how the bitmap is resampled when it is scaled.
    FilterQuality = SKFilterQuality.High
};
canvas.DrawBitmap(bmpSrc, rectSrcRight, rectDest, paint);
Although IsAntialias does not affect bitmap drawing, I set it as well. Note that if you draw the bitmap continually, this approach will use more memory.
If you have a better way to make it work without using more memory, please share it here.
I am using ASP.NET with VB and doing some file uploads. Sometimes, when a bitmap is constructed from the file input stream, the image gets rotated. It doesn't seem to happen if the image is wider than it is tall, but it also doesn't happen every time the image is taller than it is wide. I have provided a few screenshots where the properties of the image are shown, along with the created bitmap's properties at run time.
Any ideas what is happening here or what we can do to prevent this rotation?
Rotated image:
Non-rotated image:
Using the rotation code found Here fixed it. The images were coming from a phone camera and had the original orientation information stored in them, which could be used to correct their orientation.
I'm new to Android, and I've finished a game which was meant to feature pixel art. I was going to scale up my images (imageviews and bitmaps drawn to canvas) from small pixelated png files. The thing is, I could not seem to disable anti-aliasing whatever method I tried. The image was always blurred.
All my images are in one 'drawable' folder.
I tried android:antialias="false" within the ImageView in the xml.
Tried the method described here: http://www.41post.com/4241/programming/android-disabling-anti-aliasing-for-pixel-art
Tried changing the paint (paint.setAntiAlias(false)) when drawing the bitmap onto a canvas.
And even tried linking the ImageView to a xml bitmap drawable with antialias="false"
Am I missing something? In the end I had to settle for leaving some images blurry and keeping the big images as big images rather than resizing them in the XML file.
From the Hardware Acceleration Guide, it looks like Paint#setFilterBitmap() is always enabled and cannot be disabled when hardware acceleration is enabled. Try checking to see if your app is using hardware acceleration.
I've seen similar behavior in the emulator when enabling the "Use host GPU" option, and found that a device that didn't have the anti-aliasing behavior could be forced to have that behavior by using Paint#setFilterBitmap(true). I was not able to disable the behavior in the emulator though without disabling the host GPU option.
I have created an application with a screen resolution of 640 x 360 for the nokia n8. It includes a lot of flickables, labels, etc. I want it to run on the nokia e6 with a resolution of 640 x 480.
Up to now I have simply copied the QML file and modified it for the new resolution, but it's getting a little tiresome to do that for each update. I want to know if there is a simple way to make the output automatically fit any screen resolution, or if there is something else I can do to simplify my task. I would prefer not to use anchors because they make it too complicated to design the QML file.
How about using QApplication::desktop()->availableGeometry() to set the geometry of your application window?
From the docs:
QDesktopWidget::availableGeometry()
Returns the available geometry of the screen with index screen. What is available will be subrect of screenGeometry() based on what the platform decides is available (for example excludes the dock and menu bar on Mac OS X, or the task bar on Windows).
Addressing your comment below:
does it re size the entire screen
The const in QDesktopWidget::availableGeometry() const tells you that you can be pretty sure that the function doesn't alter anything. You'll need to do the resizing yourself.
Edit: The QML docs should give you the information you need to automatically change your application geometry. You could either change the geometry of the QML object from C++ or expose your available screen geometry as a Q_PROPERTY and access it from QML. I'd recommend the former, as hooking up to the QDesktopWidget::workAreaResized signal might help you on mobile devices where your available geometry may change.
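For instance, a minimal sketch of the C++ route, assuming a Qt Quick 1 / QDeclarativeView setup as used on Symbian; the qrc:/main.qml path is illustrative:
#include <QApplication>
#include <QDesktopWidget>
#include <QUrl>
#include <QtDeclarative/QDeclarativeView>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QDeclarativeView view;
    view.setSource(QUrl("qrc:/main.qml"));  // illustrative resource path

    // Let the root QML item follow the view's size instead of a hardcoded 640x360.
    view.setResizeMode(QDeclarativeView::SizeRootObjectToView);

    // Size the view to whatever the platform reports as usable on this device.
    view.setGeometry(QApplication::desktop()->availableGeometry());
    view.show();

    return app.exec();
}
Note that SizeRootObjectToView only resizes the root QML item; the items inside it still need anchors or proportional sizes to rescale, which is what the next answer suggests.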
Actually, you should avoid hardcoding the interface pixel by pixel and start using anchors. There will be phones with yet another screen resolution, and then you would have to create new QML for each of them. With anchors you can let the content fill all the available space.
I'm trying to write a Universal Application. The display should be slightly different for different screen resolutions. But when I code like this:
- (void)viewDidLoad {
    SCREEN_WIDTH = [[UIScreen mainScreen] applicationFrame].size.width;
    SCREEN_HEIGHT = [[UIScreen mainScreen] applicationFrame].size.height;
    NSLog(@"w:%f h:%f", SCREEN_WIDTH, SCREEN_HEIGHT);
    ...
}
I get output: w:320.000000 h:480.000000 even when the simulator is set to
Hardware->Device->iPhone (Retina)
Furthermore, images with this resolution display as full-screen images in the simulator.
I understand I should be getting w:640.000000 h:960.000000.
Is it like this for anyone else? And any ideas why/how to fix?
See the related thread: here
UIScreen will always report the resolution of a Retina Display device as that of a non-Retina Display device. This allows old code to run transparently on such screens. However, UIScreen exposes a scale property which, when combined with the bounds of the screen, can be used to determine the physical pixel resolution of a device:
CGSize PhysicalPixelSizeOfScreen(UIScreen *s) {
    CGSize result = s.bounds.size;
    if ([s respondsToSelector:@selector(scale)]) {
        CGFloat scale = s.scale;
        result = CGSizeMake(result.width * scale, result.height * scale);
    }
    return result;
}
The resulting value on an iPhone 4 would be { 640.0, 960.0 }.
Here is what I've found out. Since iOS4,
[[UIScreen mainScreen] applicationFrame].size.width;
and
[[UIScreen mainScreen] applicationFrame].size.height;
give measurements in "points", not "pixels". On every other device pixels = points, but on the iPhone4 each point corresponds to a 2x2 block of four pixels. Normal images are scaled up on the iPhone4, so each pixel in the image is mapped onto a point. This means the iPhone4 can run iPhone apps without a noticeable change.
The "apple" way to add "hi-res" images that take advantage of the iPhone's greater resolution is to replace ".png" with "#2x.png" in the image file name, and double the pixel density (effectively, just the width&height) in the image. Importantly, don't change the way the image is referred to in your code.
So if you have "img.png" in your code, iPhone4 will load the "img#2x.png" image if it is available.
The problem with this is that, if you are trying to develop a Universal app and include separate images for all the different possible screen resolutions/pixel densities, your app will get bloated pretty quickly.
A common solution is to pull all the required images off the 'net. This keeps your binary nice and small. On the negative side, it eats into your users' internet quota, and it will really annoy users who don't have wifi, especially if your app has no other reason to use the 'net (and you don't say your app needs the 'net in your App Store description).
Fortunately, I have found another way. Often, when you scale down an image, the iPhone4 is clever enough to utilise the increased pixel density of the scaled image. For example, you might have:
UIButton *myButton = [[UIButton alloc] initWithFrame:CGRectMake(0, 0, 100.0, 50.0)];
[myButton setBackgroundImage:[UIImage imageNamed:@"buttonImage.png"]
                    forState:UIControlStateNormal];
Now if buttonImage.png is 200x100, it will be perfectly well behaved on everything. Similarly, if you start with a nice 640x960 (pixel) image that displays quite nicely on the iPad and you scale it down to a 320x480 image for smaller screens, using something like:
+ (UIImage *)imageWithImage:(UIImage *)image newX:(float)newX newY:(float)newY {
    CGSize newSize = CGSizeMake((CGFloat)newX, (CGFloat)newY);
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newX, newY)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
It should display quite nicely on the iPhone4. The trick is not to double-up on your scaling. For example, if you do something like:
UIButton *myButton = [[UIButton alloc] initWithFrame:CGRectMake(0, 0, 100.0, 50.0)];
[myButton setBackgroundImage:[Utilities imageWithImage:[UIImage imageNamed:@"buttonImage.png"]
                                                  newX:100 newY:50]
                    forState:UIControlStateNormal];
Then you'll have lost your pixel density and your image will look all "pixely" on the iPhone4.
Finally, if you want to detect whether you are on an iPhone4 (not really necessary if you use the above technique), the following code may be useful:
+ (bool)imAnIphone4 {
    return ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]
            && [UIScreen mainScreen].scale == 2);
}
Did you rename the images as img@2x.png? You also need to make sure Retina display is handled in your code.
Even if you set the simulator to Retina display, if the code is not Retina-enabled the graphics will still be displayed at 320x480.