I use an iPod touch 4G as my device, and the iOS 4.2 simulator.
I'll just draw a rectangle as an example, using Quartz 2D:
- (void)drawRect:(CGRect)rect {
    // Get a graphics context, saving its state
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // Reset the transformation
    CGAffineTransform t0 = CGContextGetCTM(context);
    t0 = CGAffineTransformInvert(t0);
    CGContextConcatCTM(context, t0);

    // Draw a green rectangle
    CGContextBeginPath(context);
    CGContextSetRGBFillColor(context, 0, 1, 0, 1);
    CGContextAddRect(context, CGRectMake(0, 0, 320, 480));
    CGContextClosePath(context);
    CGContextDrawPath(context, kCGPathFill);

    CGContextRestoreGState(context);
}
When I run it in the simulator, the whole screen becomes green. When I run it on my device, however, only a quarter of the screen becomes green; to make the whole screen green on my device, I have to draw a larger rectangle:
CGContextAddRect(context, CGRectMake(0,0,640,960));
It seems my device has twice the resolution of the simulator. How can I fix this?
The Retina display on the iPhone has twice the resolution of the previous generation of phones. Your simulator is probably running as the 'iPhone' device rather than the 'iPhone 4' device; you can switch in the Hardware | Device menu.
You can get the current scale of the view you're rendering to with

[self.layer contentsScale]

and then scale your dimensions accordingly.
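For example, here is a minimal sketch of the question's drawRect: that folds the scale back in after inverting the CTM (assuming a 320x480-point full-screen view, as in the question):

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // Undo the full CTM, as in the question...
    CGAffineTransform t0 = CGAffineTransformInvert(CGContextGetCTM(context));
    CGContextConcatCTM(context, t0);

    // ...then re-apply the layer's scale factor so one unit is one point,
    // not one device pixel (contentsScale is 2.0 on Retina displays).
    CGFloat scale = self.layer.contentsScale;
    CGContextScaleCTM(context, scale, scale);

    // 320x480 points now covers the whole screen on both kinds of display.
    CGContextSetRGBFillColor(context, 0, 1, 0, 1);
    CGContextFillRect(context, CGRectMake(0, 0, 320, 480));

    CGContextRestoreGState(context);
}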
Result of my code: [screenshot]
Basically, the issue is that the transparent parts of my image are not blending correctly with what was drawn before them. I know I can do a

if (alpha <= 0.0) { discard; }

in the fragment shader; the only issue is that I plan on having a ton of fragments, and I don't want that branch running for every fragment on mobile devices.
Here is my code related to alpha, and depth testing:
var gl = canvas.getContext("webgl2", {
    antialias: false,
    alpha: false,
    premultipliedAlpha: false,
});
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.GREATER);
Also, these are textured gl.POINTS I am drawing. If I change the order in which the two images are drawn in the buffer, the problem doesn't occur, but they will be rotating dynamically during the program's runtime, so fixing the order by hand is not an option.
It's not clear what your issue is without more code, but it looks like a depth test issue.
Assuming I understand correctly, you're drawing 2 rectangles? If you draw the red one before the blue one then, depending on how you have the depth test set up, the blue one will fail the depth test where the X area is drawn.
You generally solve this by sorting what you draw, making sure to draw things further away first.
For a grid of "tiles" you can generally sort by walking the grid itself in the correct direction instead of "sorting".
On the other hand, if all of your transparency is 100% draw-or-not-draw, then discard has its advantages and you can draw front to back. In that case, a pixel drawn (not discarded) by the red quad will cause the corresponding pixel of the blue quad to be rejected by the depth test. The depth test is usually optimized to run before the fragment shader for a given pixel: if the depth test says the pixel will not be drawn, there is no reason to even run the fragment shader for that pixel, so time is saved. Unfortunately, as soon as you have any transparency that is not 100% opaque or 100% transparent, you need to sort and draw back to front. Some of these issues are covered in this article.
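For instance, here is a minimal sketch of back-to-front sorting for the blended case (the sprite list and drawSprite function are made-up names, not from the question):

// Hypothetical sprites with a camera-space z (camera looks down -z,
// so a more negative z is farther away).
const sprites = [
    { z: -5.0, texture: redTexture },
    { z: -2.0, texture: blueTexture },
];

// Sort farthest-first so alpha blending composites correctly.
sprites.sort((a, b) => a.z - b.z);

for (const sprite of sprites) {
    drawSprite(gl, sprite); // issues the textured gl.POINTS draw call
}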
A few notes:

- You mentioned mobile devices, and your code sample uses WebGL2. There is no WebGL2 on iOS.
- You said you're drawing with POINTS. The spec only requires POINTS of 1 pixel in size to be supported. It looks like you're safe up to points of size 60, but to be safe it's generally best to draw with triangles, as there are other issues with points.
- You might also be interested in sprites with depth.
I'm scaling a QImage, currently like so (I understand there may be more elegant ways):
img.setDotsPerMeterX(img.dotsPerMeterX() * 2);
img.setDotsPerMeterY(img.dotsPerMeterY() * 2);
When I save:
img.save("c:\\users\\me\\desktop\\test.jpg");
and subsequently open and print the image from Photoshop, it is, as expected, half of the physical size of the same image without the scaling applied.
However, when I simply print the scaled QImage, directly from code:
myQPainter.drawImage(0,0,img);
the image prints at the original physical size - not scaled to half the physical size.
I'm using the same printer in each case; and, as far as I can tell, the settings are consistent between both print cases.
Am I misunderstanding something? The end goal is to successfully scale and print the scaled image directly from code.
If we look at the documentation for setDotsPerMeterX, it states:
Together with dotsPerMeterY(), this number defines the intended scale and aspect ratio of the image, and determines the scale at which QPainter will draw graphics on the image. It does not change the scale or aspect ratio of the image when it is rendered on other paint devices.
I expect the reason the image prints at the original size in the latter case is that the image content had already been drawn before the calls to set the dots per meter.
In contrast, when saving, it appears that the dots-per-meter values you have set are stored with the image file, which is why Photoshop honors them when printing.
I would expect that creating a second QImage, setting its dots per meter, and then copying the original into that second image would achieve the result you're looking for. Alternatively, you may just be able to set the dots per meter on the original QImage before loading its content.
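A rough sketch of the first approach (untested; it reuses the question's img and myQPainter names):

// Create a second image with the doubled dots-per-meter, then copy
// the original into it before printing.
QImage scaled(img.size(), img.format());
scaled.setDotsPerMeterX(img.dotsPerMeterX() * 2);
scaled.setDotsPerMeterY(img.dotsPerMeterY() * 2);

QPainter copier(&scaled);
copier.drawImage(0, 0, img);
copier.end();

myQPainter.drawImage(0, 0, scaled);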
I need to draw a line 2 centimeters long on screen in an Adobe AIR application. I don't know how to do it!
Explanation:
I am getting a parameter, say x, from another application, and that parameter is in centimeters.
I need to draw a circle exactly x centimeters from the top of the screen.
Best regards
If I remember correctly, you won't be able to do this on the desktop, since AIR always returns 72 DPI for the screen (I may be incorrect on that point, however). It is fairly easy to do on mobile, though, assuming AIR returns the proper DPI (Retina iPads did not return the correct DPI prior to AIR 3.3, I believe).
Basically, you convert inches to pixels simply by multiplying by the DPI.
var dpi:Number = Capabilities.screenDPI; //unnecessary to save local version, just easier to reference
var heightCM:Number = 5;
var widthCM:Number = 5;
var widthPixels:Number, heightPixels:Number;
var heightIn:Number = cmToInches( heightCM );
var widthIn:Number = cmToInches( widthCM );
widthPixels = widthIn * dpi;
heightPixels = heightIn * dpi;
function cmToInches( value:Number ):Number {
    return value * .393701;
}
That will take a size in centimeters (I built it for height and width, but you can adapt it to your needs), convert it to inches, and then convert that to pixels. You'd obviously want to turn it into a neat static Util method, but it will do the trick.
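For example, here's a sketch of such a static Util (the package and class names are mine, not from the answer):

package utils {
    import flash.system.Capabilities;

    // Hypothetical utility class for unit conversion.
    public class ScreenUnits {
        public static function cmToPixels( cm:Number ):Number {
            // centimeters -> inches -> pixels, as described above
            return cm * 0.393701 * Capabilities.screenDPI;
        }
    }
}

Then, for the original question, something like graphics.drawCircle(stage.stageWidth / 2, ScreenUnits.cmToPixels(x), 20) inside a display object should place a circle x centimeters from the top of the screen (on mobile, per the DPI caveat above).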
If you want, I created a Flex application last year to try and understand how AIR handles DPI differences. It just draws a red rectangle to a specific size on screen using on-screen sliders to determine the size (in inches). I don't have it here at work, but I could post the code when I get home.
Again, I do not believe this will work in desktop applications due to AIR always reporting 72 DPI. I hope I am wrong, but I do not believe I am.
I am writing a Qt application (4.8.1 on Ubuntu 12.04) that stores its main window geometry between sessions. I noticed that if the widget is maximized, Qt does not store its non-maximized geometry. Obviously, I would like my application to return to its non-maximized size even if it was closed and restarted while maximized. In steps:
1. Main window is not maximized and has geometry X;
2. maximize main window;
3. save window geometry (using QWidget::saveGeometry) to config file;
4. close my application;
5. start it again;
6. load geometry from config file;
7. restore (un-maximize? ;)
After step 6 the window gets maximized (as expected), but after step 7 it returns to some internal default size (i.e. the one set while designing the form in Qt Creator), not to the last non-maximized geometry X.
Is this desired behavior? Or is it impossible or difficult to implement inside Qt?
Is it because, when maximized, the non-maximized size is remembered by the window manager and not by Qt (at least on Linux)?
You do not need to save the geometry when the window is maximized to begin with.
To get the functionality you want, just modify your steps as follows:
1. Main window is not maximized and has geometry X;
2. save geometry X, and also the window's top-left position as a QPoint Y;
3. maximize main window;
4. do NOT save geometry (you can check whether the window state is maximized using QWidget::isMaximized() before saving to the config file); save a new isMaximized state value to the config file instead;
5. close my application;
6. start it again;
7. before you call window->show(), apply window->resize(lastQSizeSavedinSettingsofNonMaximisedState) and window->move(lastQPointSavedinSettingsofNonMaximisedState);
8. now check the isMaximized state value from the config and, if true, call QWidget::showMaximized(); otherwise just QWidget::show().
Now when you restore window size, you should have your desired functionality :)
Something to keep in mind when working with window sizes/states: always provide a fallback geometry and position, in case the last saved positions are out of bounds when the application is started and the values you try to restore are no longer within the screen bounds. (This helps cater for cases where someone changes the resolution, monitor count, monitor position, or virtual desktops.)
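For instance, a minimal sketch of such a bounds check (Qt 4 API; savedPos, savedSize, and window are illustrative names):

#include <QApplication>
#include <QDesktopWidget>

// savedPos / savedSize were read from QSettings earlier.
QRect saved(savedPos, savedSize);
QRect available = QApplication::desktop()->availableGeometry();

if (!available.intersects(saved)) {
    // Saved geometry is entirely off-screen: fall back to a sane
    // default centered on the primary screen.
    savedSize = QSize(800, 600);
    savedPos = available.center() - QPoint(savedSize.width() / 2,
                                           savedSize.height() / 2);
}
window->resize(savedSize);
window->move(savedPos);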
Regarding step 4 ("do NOT save geometry... save a new isMaximized state value to the config file instead"): another problem here is that a window is not maximized onto a screen based purely on its position, but based on where the greater part of the window is. If 80% of the window is on screen 1 but the upper-left corner is on screen 2, the maximized window will be on screen 1.
Still, your idea is the best one. After over an hour of googling (using Qt 5), I now use:
writeSettings:

settings.setValue("pos", pos());
if (!isMaximized())
    settings.setValue("size", size());
settings.setValue("maximized", isMaximized());

readSettings:

if (settings.contains("pos"))
    move(settings.value("pos").toPoint());
if (settings.contains("size"))
    resize(settings.value("size").toSize());
if (settings.value("maximized").toBool())
    setWindowState(windowState() | Qt::WindowMaximized);
I think the issue you're having comes from the number of different geometries and sizes that are readable and settable on a QWidget. Specifically, you might want to look at the differences between normalGeometry, height, width, maximumHeight, maximumWidth, minimumHeight, minimumWidth, etc.
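In particular, QWidget::normalGeometry() reports the geometry the widget had (or will have) when it is not maximized or full-screen, so a variation like this sketch might let you save a sensible size even while maximized (untested; platform support for normalGeometry varies):

// When saving: prefer the non-maximized ("normal") geometry if the
// window is currently maximized.
if (isMaximized())
    settings.setValue("size", normalGeometry().size());
else
    settings.setValue("size", size());
settings.setValue("maximized", isMaximized());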
I have a couple of AS3 games that I want to run in a Flex mobile app. I put my original games into a single library and then added it to my mobile app. So far so good.
The problem is that when a game starts, it doesn't scale itself to the StageScaleMode.SHOW_ALL I have specified in the games.
I'm starting the games like this:
var game:MyGame = new MyGame();
var container:UIComponent = new UIComponent();
addElement(container);
container.addChild(game);
this.actionBarVisible = false;
I tried setting the same scale mode on the stage in my MXML, but it doesn't change anything.
Any ideas?
Thanks.
Mobile device screens have varying screen densities, or DPI (dots per inch). You can specify the DPI value as 160, 240, or 320, depending on the screen density of the target device. When you enable automatic scaling, Flex optimizes the way it displays the application for the screen density of each device.
For example, suppose that you specify the target DPI value as 160 and enable automatic scaling. When you run the application on a device with a DPI value of 320, Flex automatically scales the application by a factor of 2. That is, Flex magnifies everything by 200%.
To specify the target DPI value, set it as the applicationDPI property of the <s:ViewNavigatorApplication> or <s:TabbedViewNavigatorApplication> tag in the main application file:
<s:ViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
xmlns:s="library://ns.adobe.com/flex/spark"
firstView="views.HomeView"
applicationDPI="160">
If you choose not to auto-scale your application, you must handle the density changes for your layout manually, as required.
Devices can have different screen sizes or resolutions and different DPI values, or densities.
Resolution is the number of pixels high by the number of pixels wide: that is, the total number of pixels that a device supports.
DPI is the number of dots per inch: that is, the density of pixels on a device's screen. The term DPI is used interchangeably with PPI (pixels per inch).
applicationDPI (if set) specifies the target DPI of the application. Flex automatically applies a scale factor so the application displays correctly on devices with a different DPI value.
Capabilities.screenDPI is the specific DPI value of the current device.
runtimeDPI is similar to Capabilities.screenDPI. This value is the current device's DPI rounded to one of the constants defined by the DPIClassification class (160, 240, or 320 DPI).
If you want to know the real dimensions (width and height) of a component on the current screen, you need to work with the scale factor:

var scaleFactor:Number = runtimeDPI / applicationDPI;
var currentComponentSize:int = componentSize.height * scaleFactor;
If you don't have access to the applicationDPI and runtimeDPI values, you can calculate the scale factor manually using Capabilities.screenDPI:
// Copy the applicationDPI set in your application, e.g.:
var _applicationDPI:int = 160;
var _runtimeDPI:int;

if (Capabilities.screenDPI < 200)
    _runtimeDPI = 160;
else if (Capabilities.screenDPI >= 200 && Capabilities.screenDPI < 280)
    _runtimeDPI = 240;
else if (Capabilities.screenDPI >= 280)
    _runtimeDPI = 320;

var scaleFactor:Number = _runtimeDPI / _applicationDPI;
var currentComponentSize:int = componentSize.height * scaleFactor;
http://www.francescoflorio.info/?p=234
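If it helps, the manual mapping above can be wrapped in a small helper (a sketch; the function name is mine, and the thresholds mirror the code above):

import flash.system.Capabilities;

// Round a raw screen DPI to the nearest DPIClassification bucket,
// mirroring the thresholds used above.
function classifyDPI( screenDPI:Number ):int {
    if ( screenDPI < 200 ) return 160;
    if ( screenDPI < 280 ) return 240;
    return 320;
}

var scaleFactor:Number = classifyDPI( Capabilities.screenDPI ) / 160; // 160 = applicationDPI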