How to draw a 2-centimeter-long line on screen in Adobe AIR - apache-flex

I need to draw a line exactly 2 centimeters long on screen in an Adobe AIR application. I don't know how to do it!
Explanation:
I am getting a parameter from another application, say x, and that parameter is in centimeters.
I need to draw a circle exactly x centimeters from the top of the screen.
best regards

If I remember correctly, you won't be able to do this on desktop, since AIR always reports 72 DPI for the screen (I may be wrong on that point, however). It is fairly easy to do on mobile, though, assuming AIR returns the proper DPI (Retina iPads did not report the correct DPI prior to AIR 3.3, I believe).
Basically, you convert inches to pixels simply by multiplying by the DPI.
import flash.system.Capabilities;

var dpi:Number = Capabilities.screenDPI; // not strictly necessary to store locally, just easier to reference
var heightCM:Number = 5;
var widthCM:Number = 5;
var widthPixels:Number, heightPixels:Number;

var heightIn:Number = cmToInches( heightCM );
var widthIn:Number = cmToInches( widthCM );

widthPixels = widthIn * dpi;
heightPixels = heightIn * dpi;

function cmToInches( value:Number ):Number {
    return value * 0.393701; // 1 cm = 0.393701 in
}
That will take a size (I built it for height and width, but you can adapt it to your needs) in centimeters, convert it to inches, and then convert it to pixels. You'd obviously want to turn that into a neat static Util method, but it would do the trick.
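Something along these lines, for example (just a sketch; the package and class names are placeholders):
package utils {
    import flash.system.Capabilities;

    public class ScreenUtil {
        /** Converts a physical length in centimeters to on-screen pixels using the reported screen DPI. */
        public static function cmToPixels(cm:Number):Number {
            var inches:Number = cm * 0.393701; // 1 cm = 0.393701 in
            return inches * Capabilities.screenDPI;
        }
    }
}
// usage: var y:Number = ScreenUtil.cmToPixels(2); // 2 cm from the top of the screen, in pixels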
If it helps, I created a Flex application last year to try to understand how AIR handles DPI differences. It just draws a red rectangle at a specific size on screen, using on-screen sliders to set the size (in inches). I don't have it here at work, but I can post the code when I get home.
Again, I do not believe this will work in desktop applications due to AIR always reporting 72 DPI. I hope I am wrong, but I do not believe I am.

Related

Display two objects the same real distance (e.g. inches) apart across different browsers / screen sizes

I'm developing a psychology experiment in the browser. In order to keep the same viewing angle across people, I want to display two characters around 5 inches apart on the screen.
Is there any way to detect the real size of the monitor being used and, using the screen resolution and DPI, render the two objects the same real width apart? (I will only allow people who have real computers, i.e. not mobile devices.)
I have heard that detecting the real size may not be possible. If that is true, and assuming people report the real size of their monitor to me, is it possible then?
I'm using HTML5 Canvas, fwiw. Perhaps resizing this canvas with respect to the resolution and DPI is a solution.
No, unfortunately. The browser will always report 96 DPI. Without the actual DPI you cannot produce exact measurements in units other than pixels.
Even if you could, the browser would only reflect the system DPI, which is itself just an approximation.
You need to "calibrate" for the individual device by providing a mechanism to do so, e.g. a scale that can be varied and measured on screen. When it measures 1 inch, you know how many pixels cover that inch, and that value can then be used as a scale for everything else.
Example of how to get the screen DPI via "calibration":
var ctx = document.querySelector("canvas").getContext("2d"),
    rng = document.querySelector("input");

ctx.translate(0.5, 0.5); // half-pixel offset for sharper 1px lines
ctx.font = "16px sans-serif";
ctx.fillStyle = "#c00";

render(+rng.value);
rng.onchange = rng.oninput = function() { render(+this.value); }; // update on change

function render(v) {
  ctx.clearRect(-0.5, -0.5, 600, 300);
  ctx.strokeRect(0, 0, v, v);        // the square the user adjusts until it measures 1 inch
  ctx.fillText(v + " PPI", 10, 20);

  // draw marks which should be 4 inches apart
  ctx.fillRect(0, 0, 3, 150);
  ctx.fillRect(96 * 4 * (v / 96), 0, 3, 150); // assuming a 96 DPI base resolution
  ctx.fillText("------ Should be 4 inches apart ------", 50, 140);
}

<label>Adjust so square below equals 1 inch:
  <input type=range value=96 min=72 max=145></label>
<canvas width=600 height=300></canvas>
This example can of course be extended to take a vertical measurement as well, and to account for pixel aspect ratio (i.e. Retina displays) and scaling.
You then need to build all your objects and graphics using a base scale, for example 96 DPI, and use the ratio between the actual (calibrated) DPI and 96 DPI as a scale factor for all positions and sizes.
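For instance, a minimal sketch of how the calibrated value could be applied afterwards (it assumes the slider from the snippet above is still on the page and holds the value the user dialed in):
var calibratedPPI = +rng.value;      // the pixels-per-inch value the user dialed in with the slider
var scale = calibratedPPI / 96;      // scale factor relative to the 96 DPI base your graphics were authored at

// any physical length can now be mapped to real on-screen pixels
function inchesToPixels(inches) {
  return inches * calibratedPPI;
}

var fiveInches = inchesToPixels(5);  // e.g. the 5 inch separation from the question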

Is this proper scaling?

I am trying to scale bitmaps, and I would like them to work on all Android phones. I have seen this code on this website, but I am not sure how and where to apply it in my app:
Code:
Bitmap image1, pic1;
image1 = BitmapFactory.decodeResource(getResources(), R.drawable.image1);
float xScale = (float) canvas.getWidth() / image1.getWidth();
float yScale = (float) canvas.getHeight() / image1.getHeight();
float scale = Math.max(xScale, yScale); // select the larger factor so the image grows enough in both directions
scale = (float) (scale * 1.1);          // add 10% to ensure the image covers the whole screen
float scaledWidth = scale * image1.getWidth();
float scaledHeight = scale * image1.getHeight();
pic1 = Bitmap.createScaledBitmap(image1, (int) scaledWidth, (int) scaledHeight, true);
Then I also saw this code on this website:
http://developer.sonymobile.com/2011/06/27/how-to-scale-images-for-your-android-application/
The last update seems to be back in 2011.
Could someone please explain which method is better for newer API levels, such as API 10 and above?
I know there is API 19 now, so I am sure there must be a newer and better version of these.
Would you mind sharing your knowledge with us, please?
Thank you very much in advance.
Your code will scale the image, but Android can do a decent job of scaling images to fit the allotted space in several ways.
If you place images in folders as described by this link: and make sure each folder has the right-sized image, the system will pick up the right image based on the device it is running on. There are also scaling parameters you can apply to an image (see ImageView.ScaleType for more info).
Also, this link describes how to load images efficiently to conserve memory. This is very useful for loading images into less memory than the full size would require, as well as for loading multiple images via asynchronous tasks.
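As a rough sketch of the subsampled-decoding technique that kind of article describes (the target dimensions below are placeholder assumptions; R.drawable.image1 is reused from the question):
// Read only the image bounds first, then decode again at a reduced sample size.
BitmapFactory.Options options = new BitmapFactory.Options();
options.inJustDecodeBounds = true;              // no pixel memory is allocated for this pass
BitmapFactory.decodeResource(getResources(), R.drawable.image1, options);

int reqWidth = 480, reqHeight = 800;            // placeholder target size
int inSampleSize = 1;
while (options.outWidth / (inSampleSize * 2) >= reqWidth
        && options.outHeight / (inSampleSize * 2) >= reqHeight) {
    inSampleSize *= 2;                          // the decoder works best with powers of two
}

options.inJustDecodeBounds = false;
options.inSampleSize = inSampleSize;
Bitmap scaled = BitmapFactory.decodeResource(getResources(), R.drawable.image1, options);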

PyGame: Load tile image information just before blitting?

My project is a large map that can be panned around, containing "info spots" that can be clicked. For now I use four large images, each spanning 5000x5000 pixels (so the total map size is 20,000x20,000 pixels). On my AMD Phenom 9950 Quad-Core with 8 GB RAM and an NVIDIA GeForce 610 this takes quite a while to load, although panning is quite fast afterwards. I tried tiling it up, but there is no visible improvement in loading speed, since each image still has to be loaded completely before it can be separated into tiles.
The only way to get a real improvement in speed and memory usage would be to load only those parts of the map image that are actually shown.
Does PyGame offer any way of doing so? I'm thinking of a "theoretical" tile map which contains the needed x and y values of each tile (I group them a little, so there is less to compute each frame) and theoretical image information (like which image and which position within it). Only when a tile comes near the visible part of the screen is its image information loaded; otherwise it remains just a number and a string value.
Would this make any sense? Is there any way to achieve this?
The only way to accomplish this with Pygame would be to break the images themselves into smaller squares (say 250x250). Then, as the user pans, take the current top-left x,y coordinates and the screen size, load into memory any tiles that fit into that screen or the buffer around its edge, and clear out any others that are outside that range. The math will be fairly straightforward unless you add support for rotation and/or zooming. I would name the tiles after their location as a multiple of the square size (for example, the tile at 500, 500 would be named 2-2.png). This makes it trivial to generate the name of the tile you need to load at each location: take the current x/y coordinates, integer-divide by 250, subtract the buffer tile amount, and then loop over your screen width integer-divided by 250, plus 1, plus the buffer tile amount, for each row. Do the same loop for each column.
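A rough sketch of that bookkeeping (the 250-pixel tile size, one-tile buffer, "tiles" folder and col-row.png naming are assumptions taken from the description above; it also assumes a display has already been set up so convert() works):
import pygame, os

TILE = 250          # tile edge length in pixels (assumed)
BUFFER = 1          # extra ring of tiles kept around the visible area (assumed)

def visible_tiles(cam_x, cam_y, screen_w, screen_h):
    """Return the (col, row) indices of tiles that should be in memory."""
    first_col = cam_x // TILE - BUFFER
    first_row = cam_y // TILE - BUFFER
    cols = screen_w // TILE + 1 + 2 * BUFFER
    rows = screen_h // TILE + 1 + 2 * BUFFER
    return {(c, r) for c in range(first_col, first_col + cols)
                   for r in range(first_row, first_row + rows)}

def update_cache(cache, cam_x, cam_y, screen_w, screen_h, tile_dir="tiles"):
    """Load newly visible tiles, drop tiles that scrolled out of range."""
    wanted = visible_tiles(cam_x, cam_y, screen_w, screen_h)
    for key in list(cache):
        if key not in wanted:
            del cache[key]                                  # free tiles outside the buffered view
    for col, row in wanted:
        if (col, row) not in cache:
            path = os.path.join(tile_dir, "%d-%d.png" % (col, row))
            if os.path.exists(path):
                cache[(col, row)] = pygame.image.load(path).convert()
    return cache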
After reading #lukevp 's reply, I was interested and tried this:
http://imgur.com/Q1N2UtU
Download this image and create a folder named 'test_data'. Now place the image and the code outside the test_data folder and run it. The output is a set of cropped tiles named in order (it's a bit off at the edges, since the image is 1920 x 1080). You can try it with your own custom size, though. Also note that I am on Ubuntu, so adjust the paths as appropriate.
OUTPUT: http://imgur.com/2v4ucGI Final link: http://imgur.com/a/GHc9l
import pygame, os

pygame.init()

original_image = pygame.image.load('test_pic.jpg')
x_max = 1920
y_max = 1080
current_x = 0
current_y = 0
count = 1

begin_surf = pygame.Surface((x_max, y_max), flags=pygame.SRCALPHA)
begin_surf.blit(original_image, (0, 0))
cropped_surf = pygame.Surface((100, 100), flags=pygame.SRCALPHA)

while current_y + 100 < y_max:
    while current_x + 100 < x_max:
        cropped_surf.blit(begin_surf, (0, 0), (current_x, current_y, 100, 100))
        pygame.image.save(cropped_surf, os.path.join("test_data", str(count) + '.jpg'))
        current_x += 100
        count += 1
    current_x = 0
    current_y += 100
The next step would be to actually load those tiles and span them across the screen as he described.

Scaling issue with a mobile Flex app

I have a couple of AS3 games that I want to run in a Flex mobile app. I put my original games into a single library and then added it to my mobile app. So far so good.
The problem is that when a game starts, it doesn't scale itself to the StageScaleMode.SHOW_ALL I have specified in the games.
I'm starting the games like this:
var game:MyGame = new MyGame();
var container:UIComponent = new UIComponent();
addElement(container);
container.addChild(game);
this.actionBarVisible = false;
I tried setting the same scale option to the stage in my mxml but it doesn't change anything.
Any ideas?
Thanks.
Mobile device screens have varying screen densities, or DPI (dots per inch). You can specify the DPI value as 160, 240, or 320, depending on the screen density of the target device. When you enable automatic scaling, Flex optimizes the way it displays the application for the screen density of each device.
For example, suppose that you specify the target DPI value as 160 and enable automatic scaling. When you run the application on a device with a DPI value of 320, Flex automatically scales the application by a factor of 2. That is, Flex magnifies everything by 200%.
To specify the target DPI value, set the applicationDPI property on the <s:Application> or <s:ViewNavigatorApplication> tag in the main application file:
<s:ViewNavigatorApplication xmlns:fx="http://ns.adobe.com/mxml/2009"
                            xmlns:s="library://ns.adobe.com/flex/spark"
                            firstView="views.HomeView"
                            applicationDPI="160">
If you choose to not auto-scale your application, you must handle the density changes for your layout manually, as required.
Devices can have different screen sizes or resolutions and different DPI values, or densities.
Resolution is the number of pixels high by the number of pixels wide: that is, the total number of pixels that a device supports.
DPI is the number of dots per inch: that is, the density of pixels on a device's screen. The term DPI is used interchangeably with PPI (pixels per inch).
applicationDPI (if set) specifies the target DPI of the application. Flex automatically applies a scale factor so the application displays well on devices with a different DPI value.
Capabilities.screenDPI is the specific DPI value of the current device.
runtimeDPI is similar to Capabilities.screenDPI. This value is the current device DPI rounded to one of the constants defined by the DPIClassification class (160, 240 and 320 DPI).
If you want to know the real dimensions (width and height) of a component on the current screen, you need to work with the scale factor:
var scaleFactor:Number = runtimeDPI / applicationDPI;
var currentComponentSize:int = componentSize.height * scaleFactor;
If you don't have access to the applicationDPI and runtimeDPI values, you can calculate the scale factor manually using Capabilities.screenDPI:
// Copy the applicationDPI set in your application, e.g.:
var _applicationDPI:int = 160;
var _runtimeDPI:int;

if (Capabilities.screenDPI < 200)
    _runtimeDPI = 160;
else if (Capabilities.screenDPI >= 200 && Capabilities.screenDPI < 280)
    _runtimeDPI = 240;
else if (Capabilities.screenDPI >= 280)
    _runtimeDPI = 320;

var scaleFactor:Number = _runtimeDPI / _applicationDPI;
var currentComponentSize:int = componentSize.height * scaleFactor;
http://www.francescoflorio.info/?p=234

Comparing bitmap data in AS3 pixel for pixel

I am looking for a fairly simple image comparison method in AS3. I have taken an image from a webcam (with no subject) and passed it into BitmapData; then a second image is taken (this time with a subject) to compare against that data. From these two images I would like to create a mask from the pixels that match on both bitmaps. I have been scratching my head for a while and am not really making any progress. Could anyone point me in the right direction for a pixel comparison method, something like getPixel32()?
Cheers
Jono
Use compare() to create a difference between the two images, and then use threshold() to extract the parts that interest you.
Edit: actually it is pretty straightforward. The trick is to apply the threshold multiple times, once per channel, using the mask parameter (otherwise the comparison makes little sense, since 0x010000 (which is almost black) is considered greater than 0x0000FF (which is anything but black)). Here's how:
import flash.display.BitmapData;
import flash.geom.Point;

var dif:BitmapData; // the difference BitmapData (e.g. produced by compare())
var mask:BitmapData = new BitmapData(dif.width, dif.height, true, 0);
const threshold:uint = 0x20;
for (var i:int = 0; i < 3; i++) // apply the threshold once per colour channel
    mask.threshold(dif, dif.rect, new Point(), ">", threshold << (i * 8), 0xFF000000, 0xFF << (i * 8));
This creates a transparent mask. The threshold is then applied to all three channels, setting the alpha channel to fully opaque wherever a channel's value exceeds the threshold value (you might want to decrease it).
You can isolate the foreground object ("the guy in front of the webcam") by copying the alpha channel from the mask to the current video image.
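In code, that last step might look something like this (a sketch; videoFrame is assumed to be a transparent BitmapData holding the current camera frame):
import flash.display.BitmapDataChannel;
import flash.geom.Point;

// copy the mask's alpha into the video frame so only the foreground pixels remain visible
videoFrame.copyChannel(mask, mask.rect, new Point(),
                       BitmapDataChannel.ALPHA, BitmapDataChannel.ALPHA);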
One of the problems here is that you want to find whether a pixel has ANY change to it, and if it does, convert that pixel to another color (for masking). Unfortunately, a webcam's quality isn't great, so even if your scene does not change at all, the BitmapData coming from the webcam will change slightly. Therefore, when your subject steps into frame you will get pixel changes for the subject, but also noise in other areas due to lighting changes or camera quality. What you'll need to do is write a function that analyzes the result of a BitmapData.compare() for changes in areas larger than some minimum size, to determine whether there is enough change to warrant an actual object being there. That will help remove noise and make your mask more accurate.
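For example, a rough sketch of that kind of noise filter, counting changed pixels per block (the 16-pixel block size and 40-pixel minimum are arbitrary assumptions you would tune for your camera):
import flash.display.BitmapData;

// Returns true if any blockSize x blockSize region of the difference image
// contains at least minChanged non-black pixels (i.e. a real change, not noise).
function hasSignificantChange(dif:BitmapData, blockSize:int = 16, minChanged:int = 40):Boolean {
    for (var by:int = 0; by < dif.height; by += blockSize) {
        for (var bx:int = 0; bx < dif.width; bx += blockSize) {
            var changed:int = 0;
            for (var y:int = by; y < Math.min(by + blockSize, dif.height); y++) {
                for (var x:int = bx; x < Math.min(bx + blockSize, dif.width); x++) {
                    if ((dif.getPixel32(x, y) & 0x00FFFFFF) != 0) changed++;
                }
            }
            if (changed >= minChanged) return true;
        }
    }
    return false;
}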
