tween library in flex mobile app - apache-flex

I want to tween the alpha of a picture in a Flex mobile app. I've tried TweenLite, but it runs too fast. Is there an optimised way of doing it, or of linking in Objective-C code?
Thanks

With the TweenLite library you can control the length of the tween, either as a number of frames or as a time span in seconds.
In your tweenProperties object, use the useFrames property to specify whether the tween duration is measured in frames or in seconds.
The second parameter of the 'to' or 'from' tween specifies that duration.
So an alpha tween that runs over 10 frames should look like this:
var tweenProperties:Object = new Object();
tweenProperties.alpha = yourNewAlphaHere;
tweenProperties.useFrames = true; // interpret the duration as frames
TweenLite.to(myObjectToTween, 10, tweenProperties);
To create an alpha tween that runs over 10 seconds, it would look like this:
var tweenProperties:Object = new Object();
tweenProperties.alpha = yourNewAlphaHere;
tweenProperties.useFrames = false; // interpret the duration as seconds (the default)
TweenLite.to(myObjectToTween, 10, tweenProperties);
Depending on what you're doing, you may also want to consider TweenNano for mobile apps, since it has the smallest footprint.
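For reference, a minimal sketch of the same fade with TweenNano (assuming the standard GreenSock API, whose to() call mirrors TweenLite's; myObjectToTween and yourNewAlphaHere are the placeholders used above):
import com.greensock.TweenNano;

// Fade the display object to the target alpha over 2 seconds.
TweenNano.to(myObjectToTween, 2, {alpha: yourNewAlphaHere});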

Related

Sprite Kit NPC random movement in a platform game

I've looked around Stack Overflow for a solution but have not found one to a problem I have with random NPC movement. Essentially, what I have coded up to now is a simple 2D platformer game using Sprite Kit, with a separate class for the NPC object. I initialize it in my GameScene (SKScene) with no problem, and so far it's behaving properly with the physicsWorld I have set up. Now I'm at the part where it simply needs to move randomly in any direction.

I've set up the boundaries and have made it move with SKActions, utilizing things like CGPointMake, so that the NPC moves randomly as needed, waits a little bit in that location, and then resumes movement. BOOLs helped this process. However, I had difficulty getting the sprite to look left when moving left and right when moving right (looking up and down is not needed at all). So I found a way in a book using vectors. I set up a method in the NPC class which is used in the GameScene:
-(void)moveToward:(CGPoint)targetPosition
{
    CGPoint targetVector = CGPointNormalize(CGPointSubtract(targetPosition, self.position));
    targetVector = CGPointMultiplyScalar(targetVector, 150); // 150 is interpreted as a speed: the larger the number, the faster the NPC moves.
    self.physicsBody.velocity = CGVectorMake(targetVector.x, targetVector.y); // Velocity vector, measured in meters per second.

    /* SPRITE DIRECTION */
    [self faceCurrentDirection]; // Every time the NPC begins to move, it will face the appropriate direction due to this method.
}
Now all of this works. But the issue at hand is calling this moveToward method appropriately in the update method. The first thing I tried was this:
-(void)update:(NSTimeInterval)currentTime
{
    /* Called before each frame is rendered */
    if (!npcMoving)
    {
        SKAction *moving = [SKAction runBlock:^{ npcMoving = YES; }]; // THIS IS THE CULPRIT!
        SKAction *generate = [SKAction runBlock:^{ [self generateRandomDestination]; }]; // Creates a random CGFloat X and CGFloat Y.
        SKAction *moveTowards = [SKAction runBlock:^{ _newLocation = CGPointMake(fX, fY);
                                                      [_npc moveToward:_newLocation]; }]; // Moves the NPC to that random location.
        SKAction *wait = [SKAction waitForDuration:4.0 withRange:2.0]; // NPC will wait a little...
        [_npc runAction:[SKAction sequence:@[moving, generate, moveTowards, wait]] completion:^{ npcMoving = NO; }]; // ...then repeat the process.
    }
}
The vector method 'moveToward' requires the 'update' method to be present for NPC movement to happen. I turn the sequence off with 'npcMoving = YES' at the beginning, in hopes that the NPC will move to the targeted location and then start the process again. This is not the case. If I remove the SKAction with 'npcMoving = YES', the 'update' method calls the entire sequence of SKActions above every frame, which in turn doesn't move my NPC far: it simply has it change its targeted location every frame, creating an 'ADHD' NPC. Could someone please recommend what to do? I absolutely need to retain the vector movement for the directional properties and other future things, but I am at a loss on how to properly implement this with the 'update' method.
Actions perform a task over time. If your npcMoving flag is false, you run a new action sequence every frame, which means that over 10 frames you will have 10 action sequences running simultaneously. That will cause unpredictable behavior.
Next, even if you were to stop the existing sequence and run it anew, running an action every frame where at least one action has a duration is practically pointless, because that action with a duration will never be able to complete its task in the given time: it'll be replaced the next frame.
Summary: actions with a duration are unsuitable for tasks that require adjustment every frame.
Solutions:
perform tasks by changing the actor's properties (i.e. position, etc.) as and when needed (i.e. every frame)
decide on a task for the actor, then run the corresponding action sequence for that task and wait for it to end before you decide upon a new task (a minimal sketch of this approach follows)
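Here is a minimal sketch of the second approach, reusing the names from the question (npcMoving, generateRandomDestination, fX/fY, _npc and its moveToward: method); the key change is that the flag is set synchronously before the sequence starts, so update: starts at most one sequence at a time:
-(void)update:(NSTimeInterval)currentTime
{
    if (!npcMoving)
    {
        npcMoving = YES; // set immediately, not inside a runBlock
        SKAction *generate = [SKAction runBlock:^{ [self generateRandomDestination]; }];
        SKAction *moveTowards = [SKAction runBlock:^{ [_npc moveToward:CGPointMake(fX, fY)]; }];
        SKAction *wait = [SKAction waitForDuration:4.0 withRange:2.0];
        [_npc runAction:[SKAction sequence:@[generate, moveTowards, wait]]
             completion:^{ npcMoving = NO; }]; // only decide on a new task once this one has finished
    }
}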

I calculated an average in crossfilter, but it's wrong. I can't understand why. Can you point out my error?

You can see my gistup here: http://bl.ocks.org/markarios/058f85800d598fc9f2b6
While checking reductio, I calculated the average PPI per device type, and the following code is producing the wrong result. The only thing I can think of is that I somehow need to use the index of ppi_device_sum[i].key, but I'm not sure how to reference that.
Thanks in advance for your time!
// What's the average PPI per device?
write("");
write("Average PPI By Type");
for (var i = 0; i < type_device_count.length; i++) {
    write(ppi_device_sum[i].key + "(s): " + ppi_device_sum[i].value / type_device_count[i].value);
}
Product Types
tablet(s): 7
desktop monitor(s): 4
laptop(s): 2
smartphone(s): 2
desktop(s): 1
Total PPI by Device Type
tablet(s): 1997
smartphone(s): 770
desktop monitor(s): 444
laptop(s): 350
desktop(s): 108
Average PPI By Type
tablet(s): 285.2857142857143 (correct)
smartphone(s): 192.5 (incorrect, should be 385)
desktop monitor(s): 222 (incorrect, should be 111)
laptop(s): 175 (correct)
desktop(s): 108 (correct)
Probably best to sort your arrays by key before you iterate through them so that their keys are in the same order (JavaScript Array.prototype.sort() method is fine for this).
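For example, a sketch reusing the write helper and the ppi_device_sum / type_device_count arrays from your question (sorted copies leave the originals untouched):
var byKey = function (a, b) { return a.key < b.key ? -1 : (a.key > b.key ? 1 : 0); };
var sums = ppi_device_sum.slice().sort(byKey);      // sorted copy, keyed alphabetically
var counts = type_device_count.slice().sort(byKey);

write("");
write("Average PPI By Type");
for (var i = 0; i < counts.length; i++) {
    // index i now refers to the same device type in both arrays
    write(sums[i].key + "(s): " + sums[i].value / counts[i].value);
}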
If you find any problems with the calculations in Reductio, please file an issue on Github. It is very raw at the moment. I will be integrating it into a larger application in the next couple of weeks, so it will be getting more use and eyes on it at that point.
One other note: In your gist you are doing something that makes me think you are working under a very common misconception about how Crossfilter works. It's not exactly intuitive, but this
// calculate the number of device types
var type_count = type.group().reduceCount().size();
// how many of each device are there?
var type_device_count = type.group()
    .reduceCount()
    .top(type_count);
is doing the same thing as this
// Build the Crossfilter group.
var typeGroup = type.group(); // .reduceCount() is the default
// calculate the number of device types
var type_count = typeGroup.size(); // Now redundant
// how many of each device are there?
var type_device_count = typeGroup.top(Infinity); // Returns all groups
The latter is the better way to do things because once you've created a Crossfilter group, that group will be updated when new data is added to the Crossfilter and when you filter on other dimensions. So typeGroup.size() and typeGroup.top(Infinity) will return different results as the contents of and filters on your Crossfilter change. Keeping these groups updated uses resources, so you want to create as few dimensions and groups as possible to accomplish your task.
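To illustrate with a tiny, made-up data set (not from your gist):
var cf = crossfilter([
    { type: "tablet", ppi: 285 },
    { type: "laptop", ppi: 175 },
    { type: "desktop", ppi: 108 }
]);
var type = cf.dimension(function (d) { return d.type; });
var ppi = cf.dimension(function (d) { return d.ppi; });
var typeGroup = type.group();             // .reduceCount() is the default

console.log(typeGroup.top(Infinity));     // three groups, each with value 1

ppi.filter([150, 300]);                   // filter on a *different* dimension...
console.log(typeGroup.top(Infinity));     // ...and the same typeGroup reflects it: desktop's value is now 0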

how to develop a demo application with an FPS counter using kick.js?

I'm very interested in Kick.js. To convince my professor to use this framework, I want to develop an application in which I can load a custom 3D model using kick.js and then add more objects. I should also be able to print the FPS, to check how it varies as I add more 3D objects to the canvas. I'm new to graphics programming; I have no knowledge of shader programming or OpenGL. Being a newbie, how can I start diving into this framework?
These are the steps I want to implement (tell me if I'm going wrong):
Develop a simple demo using kick.js that loads a single cube, sphere, or teapot onto the canvas.
Be able to see the FPS as I change the camera angle.
Later, add more models of the same type (e.g. teapots) to the canvas and compare the FPS against the single-teapot case.
Am I approaching this the right way? Suggestions are welcome. None of the provided tutorials has an FPS demo. Please, someone, HELP ME. I really liked the features stated on the homepage, but I don't know how I can implement them in my demo.
Assuming that Kick.js has a "render" callback or something similar that's invoked for each frame you want to render (and you know the time between frames, or the absolute time since program start), it's fairly simple to calculate your frame rate.
The method I've used before is: pick a sample rate (I like 250ms so it updates 4 times a second), and count how many frames have executed every 250ms. When you hit 250ms, update the on-screen frame rate counter variable and start counting again.
// Accumulators, declared once outside the per-frame callback:
var timeSinceLastFPSUpdate = 0;
var framesSinceLastFPSUpdate = 0;
var fps = 0;

// Run this once per rendered frame, where millisecondsSinceLastFrame is the frame delta:
timeSinceLastFPSUpdate += millisecondsSinceLastFrame;
framesSinceLastFPSUpdate++;
if (timeSinceLastFPSUpdate > 250) {
    timeSinceLastFPSUpdate = 0;
    fps = framesSinceLastFPSUpdate * (1000 / 250); // convert "frames per 250ms" to "frames per 1s"
    framesSinceLastFPSUpdate = 0;
    // print fps to screen here
}
You can play around with different sample rates or use a different frame-rate calculation method to make the counter more "accurate" (to better catch frame-rate dips), but it sounds like you're looking for something less precise that just gives you a reasonable idea of the overall rendering cost rather than frame-to-frame dips.
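If you want to see the counter working before wiring it into kick.js, here is a small browser-only sketch driven by requestAnimationFrame (it assumes nothing about the kick.js API; your render code would go in the same callback):
var lastTime = performance.now();
var timeSinceLastFPSUpdate = 0;
var framesSinceLastFPSUpdate = 0;

function onFrame(now) {
    var millisecondsSinceLastFrame = now - lastTime;
    lastTime = now;

    timeSinceLastFPSUpdate += millisecondsSinceLastFrame;
    framesSinceLastFPSUpdate++;
    if (timeSinceLastFPSUpdate > 250) {
        var fps = framesSinceLastFPSUpdate * (1000 / 250);
        document.title = fps + " fps"; // or write it into a DOM element
        timeSinceLastFPSUpdate = 0;
        framesSinceLastFPSUpdate = 0;
    }
    requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);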

why is the difference between bitmap.Save with ImageFormat and ImageCodecInfo so big in .net?

I'm experimenting with image resizing in ASP.NET. Actual resizing code aside, I am wondering why there is such a big difference between Bitmap's Save overloads.
method 1
ImageCodecInfo jpgEncoder =
    ImageCodecInfo.GetImageDecoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
Encoder encoder = Encoder.Quality;
EncoderParameters encoderParameters = new EncoderParameters(1);
encoderParameters.Param[0] = new EncoderParameter(encoder, (long)quality);
bitmap.Save(_current_context.Response.OutputStream, jpgEncoder, encoderParameters);
method 2
bitmap.Save(_current_context.Response.OutputStream, ImageFormat.Jpeg);
So Method 1, at quality 100, outputs this particular JPEG image at about 250 KB. At quality 90, it drops to about 100 KB.
Method 2, however, drops the image to about 60 KB, which is a huge difference in size with no visible difference in quality.
I can't seem to find anywhere why the difference is so big; MSDN has zero details on these two overloads.
Any insight is appreciated. Thanks.
I've looked at the ImageCodecInfo / Encoder objects, and they don't seem to provide a way to extract the default settings. I would assume that by default it's setting the Quality to 100 on the save.
Without looking more into the Windows Imaging stuff it's really hard to say.
You could try your code with the default save (Method 2) and with Method 1 at quality 100 and see if the outputs are the same; it's most likely that way.
http://msdn.microsoft.com/en-us/library/system.drawing.imaging.encoder.quality.aspx#Y800
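A minimal, hypothetical console sketch of that comparison (the test image path and the MemoryStream targets stand in for your bitmap and Response.OutputStream):
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

class SaveComparison
{
    static void Main()
    {
        using (Bitmap bitmap = new Bitmap("input.jpg"))        // any test image
        using (var explicitQuality = new MemoryStream())
        using (var defaultSave = new MemoryStream())
        using (var parameters = new EncoderParameters(1))
        {
            ImageCodecInfo jpgEncoder = ImageCodecInfo.GetImageEncoders()
                .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
            parameters.Param[0] = new EncoderParameter(Encoder.Quality, 100L);

            bitmap.Save(explicitQuality, jpgEncoder, parameters); // "Method 1" at quality 100
            bitmap.Save(defaultSave, ImageFormat.Jpeg);           // "Method 2", default settings

            Console.WriteLine("Quality 100: {0} bytes", explicitQuality.Length);
            Console.WriteLine("Default:     {0} bytes", defaultSave.Length);
        }
    }
}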

Is it possible to BitBlt directly on to a GDI+ bitmap?

I am trying to BitBlt from an HBITMAP to a GDI+ bitmap. I tried this, but nothing happens:
Bitmap Buffer = new Bitmap(608, 392);
Graphics BufferGraphics = Graphics.FromImage(Buffer);
IntPtr hBufferDC = BufferGraphics.GetHdc();
...
BitBlt(hBufferDC, x, y, width, height, hInputDC, 0, 0, SRCCOPY);
EDIT: Apparently the hDC doesn't work if I acquire it and then much later use it with BitBlt. I needed to make sure the hDC was still valid. This is the solution:
Bitmap Buffer = new Bitmap(608, 392);
Graphics BufferGraphics = Graphics.FromImage(Buffer);
...
IntPtr hBufferDC = BufferGraphics.GetHdc();
BitBlt(hBufferDC, x, y, width, height, hInputDC, 0, 0, SRCCOPY);
BufferGraphics.ReleaseHdc(hBufferDC);
Does anyone know why this change is necessary? Why might it not work to use an hDC that was gotten earlier as in the first example?
Check the sample at the end of this page on pinvoke.net. The additional calls to CreateCompatibleDC and SelectObject might make your sample work.
Alternatively, you could consider using Graphics.DrawImageUnscaled, which would let you implement your code entirely on the .NET side and would still offer pretty good performance.
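For the managed route, a minimal sketch (hInputBitmap is a stand-in for wherever your HBITMAP comes from; types are from System.Drawing):
// Hypothetical helper: copy a GDI HBITMAP onto a GDI+ bitmap without BitBlt.
static Bitmap CopyFromHBitmap(IntPtr hInputBitmap, int x, int y)
{
    Bitmap buffer = new Bitmap(608, 392);
    using (Graphics bufferGraphics = Graphics.FromImage(buffer))
    using (Bitmap input = Image.FromHbitmap(hInputBitmap)) // copies the GDI bitmap into a GDI+ Bitmap
    {
        bufferGraphics.DrawImageUnscaled(input, x, y);      // draw it at (x, y) without scaling
    }
    return buffer;
}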
Update (Due to updated question)
I don't know exactly why the hDC becomes invalid after a while, but according to MSDN you call GetHdc and ReleaseHdc in pairs and group the calls to GDI+ code between them: "Calls to the GetHdc and ReleaseHdc methods must appear in pairs. During the scope of a GetHdc and ReleaseHdc method pair, you usually make only calls to GDI functions."
So according to the documentation, the way you did it in your second sample is the way to go, and you shouldn't cache and reuse values returned by GetHdc.
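In code, the documented pattern looks like this (assuming the same BitBlt P/Invoke declaration and hInputDC as in your question):
IntPtr hBufferDC = BufferGraphics.GetHdc();  // acquire the HDC...
try
{
    BitBlt(hBufferDC, x, y, width, height, hInputDC, 0, 0, SRCCOPY); // ...make only GDI calls with it...
}
finally
{
    BufferGraphics.ReleaseHdc(hBufferDC);    // ...and release it in the same scope
}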
