How to develop a demo application with an FPS counter using Kick.js?

I'm very interested in Kick.js. To convince my professor to use this framework, I want to develop an application in which I can load custom 3D models using Kick.js and add more objects over time. I should also be able to print the FPS so I can check how it varies as I add more 3D objects to the canvas. I'm new to graphics programming and have no knowledge of shader programming or OpenGL. As a newbie, how can I start diving into this framework?
These are the steps I want to implement (please correct me if I'm going wrong):
Develop a simple demo using Kick.js that loads a single cube, sphere, or teapot on the canvas.
Be able to see the FPS as I change the camera angle.
Later, add more models of the same type (e.g. teapots) to the canvas and compare the FPS against the single-teapot case.
Am I approaching this the right way? Any suggestions are welcome. None of the provided tutorials includes an FPS demo. I really like the features stated on the homepage, but I don't know how to implement them in my demo.

Assuming that Kick.js has a "render" callback or something similar that's invoked for each frame you want to render (and you know the time between frames, or the absolute time since program start), it's fairly simple to calculate your frame rate.
The method I've used before is: pick a sample rate (I like 250 ms, so it updates four times a second) and count how many frames execute in each 250 ms window. When you hit 250 ms, update the on-screen frame rate counter and start counting again.
timeSinceLastFPSUpdate += millisecondsSinceLastFrame;
framesSinceLastFPSUpdate++;
if (timeSinceLastFPSUpdate > 250) {
    timeSinceLastFPSUpdate = 0;
    // Convert "frames per 250 ms" into "frames per second".
    fps = framesSinceLastFPSUpdate * (1000 / 250);
    framesSinceLastFPSUpdate = 0;
    // Display fps however you like, e.g. by updating a DOM element.
}
You can play around with different sample rates or use a different frame rate calculation method to make the counter more "accurate" (better at catching frame rate dips), but it sounds like you're looking for something less precise that just gives you a reasonable idea of the overall rendering cost rather than frame-to-frame dips.

Related

Data propagation time (single link)

This is not homework!
I am preparing for a test in Networking.
I had this question on the midterm test and got half the points; I can't figure it out.
In this question there is a sender-receiver connection:
Link data rate is R (b/s)
Packet size is S (b)
Window size is W (pkts)
Link distance is D (m)
Medium propagation speed is p (m/s)
I need to write the utilisation formula using those letters.
This is what I wrote:
Tp (propagation time) is D/p ==> this got me a big X on the test page.
I wrote that frame transmission time (Tt) is the window size in bits (W*S) divided by the link data rate, i.e. (W*S)/R.
That's why the formula is U = Tt/(Tt + 2*Tp) ==> ((W*S)/R) / (((W*S)/R) + 2*(D/p))
(again an X).
I guess something is wrong with the propagation time calculation.
None of the slides covering sliding windows mentions utilisation in relation to distance and propagation delay.
I would love some help with this.
Thank you.
It depends on how propagation time is supposed to be measured [1], but the general formula is:
Propagation time = (Frame Serialization Time) + (Link Media Delay)
Link Media Delay = D/p
Frame Serialization Time = S/R
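For concreteness, here is a minimal sketch of that frame-level calculation in Python. The numeric values for R, S, D, and p are made-up assumptions purely for illustration; plug in whatever your own question actually gives you.
R = 1_000_000   # link data rate, bits per second (assumed value)
S = 8_000       # packet size, bits (assumed value)
D = 2_000_000   # link distance, metres (assumed value)
p = 2.0e8       # propagation speed in the medium, metres per second (assumed value)

serialization_time = S / R   # time to clock the whole frame onto the wire
media_delay = D / p          # time for one bit to cross the link

# Frame-level propagation time: first bit sent until last bit received.
propagation_time = serialization_time + media_delay
print(propagation_time)      # 0.008 + 0.01 = 0.018 seconds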
I don't see the relevance of TCP's sliding window in this question yet; sometimes professors include extra data to discern how well you understand the principles.
END-NOTES:
1. Does the professor measure propagation time at the bit level or at the frame level? My answer assumes it is a frame-level calculation (measured from the first bit transmitted until the last bit in the frame is received), so I include frame serialization time.

fourier transform to transpose key of a wav file

I want to write an app to transpose the key a WAV file plays in (for fun; I know there are apps that already do this). My basic understanding of how this might be accomplished is to:
1) chop the audio file into very small blocks (say 1/10 of a second)
2) run an FFT on each block
3) phase shift the frequency space up or down depending on what key I want
4) use an inverse FFT to return each block to the time domain
5) glue all the blocks together
But now I'm wondering whether the transformed blocks would no longer be continuous when I try to glue them back together. Are there any ideas on how I should do this to guarantee continuity, or am I just worrying about nothing?
Overlap the time samples for each block by half so that each block after the first consists of the last N/2 samples from the previous block and N/2 new samples. Be sure to apply some window to the samples before the transform.
After shifting the frequency, perform an inverse FFT and use the middle N/2 samples from each block. You'll need to adjust the final gain after the IFFT.
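As a rough illustration of that block/window/shift/IFFT flow, here is a minimal numpy sketch. It uses a windowed overlap-add on both the analysis and synthesis side (a common variant of the approach above, rather than literally keeping the middle N/2 samples), and the block size and bin shift are arbitrary assumptions; a real key change would resample or use a phase vocoder rather than a raw bin shift.
import numpy as np

def shift_blocks(samples, block_size=4096, bin_shift=4):
    # Half-overlapping, Hann-windowed blocks, shifted in the frequency domain.
    hop = block_size // 2
    window = np.hanning(block_size)
    out = np.zeros(len(samples))
    for start in range(0, len(samples) - block_size, hop):
        block = samples[start:start + block_size] * window
        spectrum = np.fft.rfft(block)
        shifted = np.zeros_like(spectrum)
        shifted[bin_shift:] = spectrum[:-bin_shift]   # crude upward shift of every bin
        # Window again on synthesis and overlap-add; the final gain still needs normalising.
        out[start:start + block_size] += np.fft.irfft(shifted) * window
    return out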
Of course, mixing the time samples with a sine wave and then low pass filtering will provide the same shift in the time domain as well. The frequency of the mixer would be the desired frequency difference.
For speech you might want to look at PSOLA - this is a popular algorithm for pitch-shifting and/or time stretching/compression which is a little more sophisticated than the basic overlap-add method, but not much more complex.
If you need to process non-speech samples, e.g. music, then there are several possibilities, however the overlap-add FFT/modify/IFFT approach mentioned in other answers is probably the best bet.
Found this great article on the subject, for anyone trying it in the future!
You may have to find a zero-crossing between the blocks to glue the individual wavs back together. Otherwise you may find that you are getting clicks or pops between the blocks.

Normalizing FFT Data for Human Hearing

The typical FFT for audio looks pretty similar to this, with most of the action happening on the far left side:
http://www.flight404.com/blog/images/fft.jpg
He multiplied it by a partial sine wave to get it to the bottom, but the article isn't too specific on that part of it. It also seems like a "good enough" modification of the dataset rather than one based on some property. I understand that human hearing is better suited to the higher frequencies; thus, most music will have amplified bass and attenuated treble so that both sound to us as being of relatively equal strength.
My question is what modification needs to be done to the FFT to compensate for this standard falloff?
for (i = 0; i < fft.length; i++) {
    fft[i] = fft[i] * Math.log(i + 1); // does, eh, OK, but the high end
                                       // is still not really "loud" enough
}
EDIT:
http://en.wikipedia.org/wiki/Equal-loudness_contour
I came across this article; I think it might be the direction to head in, but there still might be some property of an FFT that needs to be counteracted.
First, are you sure you want to do this? It makes sense to compensate for some things, like the microphone response not being flat, but not human perception. People are used to hearing sounds with the spectral content that the sounds have in the real world, not along perceptual equal loudness curves. If you play a sound that you've modified in the way you suggest it would sound strange. Maybe some people like the music to have enhanced low frequencies, but this is a matter of taste, not psychophysics.
Or maybe you are compensating for some other reason, for example, taking into account the poorer sensitivity to lower frequencies might enhance a compression algorithm. Is this the idea?
If you do want to normalize by the equal loudness curves, one should note that most of the curves and equations are in terms of sound pressure level (SPL). SPL is the log of the square of the waveform amplitude, so when you work with the FFTs, it's probably easiest to work with their squares (the power spectra). (Or, of course, you could compensate in other ways by, say, multiplying by sqrt(log(i+1)) in your equation above -- assuming that the log was an approximation of the inverse equal-loudness curve.)
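A minimal numpy sketch of that suggestion, purely for illustration: it treats the question's log(i + 1) curve as a stand-in for an inverse equal-loudness weighting (which it is not in any calibrated sense) and shows the equivalence between weighting the power spectrum and scaling the magnitudes by the square root of the weight.
import numpy as np

# Placeholder signal; in practice fft_mag would come from your audio frame.
fft_mag = np.abs(np.fft.rfft(np.random.randn(2048)))

power = fft_mag ** 2                         # SPL is a log of power, so work in power
weight = np.log(np.arange(len(power)) + 1)   # the question's ad hoc log(i + 1) curve

weighted_power = power * weight              # weighting applied in the power domain
weighted_mag = fft_mag * np.sqrt(weight)     # equivalent: sqrt(weight) on the magnitudes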
I think the equal loudness contour is exactly the right direction.
However, its shape depends on the absolute pressure level.
In other words the sensitivity curve of our hearing changes with sound pressure.
There is no "correct normalization" if you have no information about absolute levels.
Whether this is a problem depends on what you want to do with the data.
The loudness contour is standardized in ISO 226 but this document is not freely available for download. It should be in a decent university library though.
Here is another source for loudness contours.
So you are trying to raise the level of the high-end frequencies? Sounds like a high-pass filter with a minimum multiplier might work, so that you don't attenuate the low-frequency signals too much. Pick up a good book on filter design, and maybe monkey around with this applet.
In the old days of the first samplers (this is before MOTU Boost, people :), it wasn't FFT but simple normalisation (Fairlight or Roland did it first, I think) applied to the original or resulting time-domain signal (if you are doing beat slicing, ReCycle-style). Can't you do that? Or only go for the FFT after you compensate to counteract it?
Otherwise it seems like a two-phase procedure; I'd personally leave the FFT as is for this task.

What is the fastest way to draw an Image on another image?

I have 3 Bitmap pointers:
Bitmap* totalCanvas = new Bitmap(400, 300, PixelFormat32bppARGB); // final canvas
Bitmap* bottomLayer = new Bitmap(400, 300, PixelFormat32bppARGB); // background
Bitmap* topLayer = new Bitmap(XXX); // always changed
I will draw a complex background on bottomLayer. I don't want to redraw the complex background on totalCanvas again and again, so I store it in bottomLayer.
topLayer changes frequently.
I want to draw bottomLayer to totalCanvas. Which is the fastest way?
Graphics canvas(totalCanvas);
canvas.DrawImage(bottomLayer, 0, 0); // step 1
canvas.DrawImage(topLayer, XXXXX);   // step 2
I want step1 to be as fast as possible. Can anyone give me some sample?
Thanks very much!
Thanks for unwind's answer. I wrote the following code:
Graphics canvas(totalCanvas);
for (int i = 0; i < 100; ++i)
{
    canvas.DrawImage(bottomLayer, 0, 0);
}
This part takes 968 ms... it is too slow...
Almost all GDI+ operations should be implemented by the driver to run as much as possible on the GPU. This should mean that a simple 2D bitmap copy operation is going to be "fast enough", for even quite large values of "enough".
My recommendation is the obvious one: don't sweat it by spending time hunting for the "fastest" way of doing this. You have formulated the problem very clearly, so just implement it that clearly, doing it as you've outlined in the question. Then you can of course benchmark it and decide whether to continue the hunt.
A simple illustration:
A 32 bpp 400x300 bitmap is about 469 KB in size. According to this handy table, an Nvidia GeForce 4 MX from 2002 has a theoretical memory bandwidth of 2.6 GB/s. Assuming the copy is done in pure "overwrite" mode, i.e. no blending of the existing surface (which sounds right, as your copy is basically a way of "clearing" the frame to the copy's source data), and an overhead factor of four just to be safe, we get:
(2.6 * 2^30 / (4 * 469 * 2^10)) = 1453
This means your copy should run at 1453 FPS, which I happily assume to be "good enough".
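If you want to sanity-check that back-of-the-envelope figure, here is the same arithmetic as a tiny Python script; the bandwidth number and the overhead factor of four are the assumptions from above.
width, height, bytes_per_pixel = 400, 300, 4
frame_bytes = width * height * bytes_per_pixel     # 480,000 bytes, roughly 469 KB
bandwidth = 2.6 * 2**30                            # bytes/s, the GeForce 4 MX-era figure
overhead = 4                                       # safety factor from the estimate above
copies_per_second = bandwidth / (overhead * frame_bytes)
print(round(copies_per_second))                    # about 1454, in line with the ~1453 above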
If at all possible (and it looks like it from your code), using DrawImageUnscaled will be significantly faster than DrawImage. Or, if you are using the same image over and over again, create a TextureBrush and use that.
The problem with GDI+ is that, for the most part, it is unaccelerated. To get lightning-fast drawing speeds you really need GDI and BitBlt, which is a serious pain in the butt to use with GDI+, especially if you are in managed code (it's hard to tell whether you are using managed C++ or straight C++).
See this post for more information about drawing graphics quickly in .NET.

Showing too much 'skin' detection in software

I am building an ASP.NET web site where users may upload photos of themselves. There could be thousands of photos uploaded every day. One thing my boss has asked a few times is whether there is any way we could detect if any of the photos are showing too much 'skin' and automatically flag these as 'Adults Only' before the editors make the final decision.
Your best bet is to deal with the image in the HSV colour space (see here for RGB-HSV conversion). The colour of skin is pretty much the same across all races; it's just the saturation that changes. By dealing with the image in HSV you can simply search for the colour of skin.
You might do this by simply counting the number of pixels within a colour range, or you could perform region growing around pixels to calculate the size of the areas of that colour.
Edit: for dealing with grainy images, you might want to apply a median filter first, and then reduce the number of colours to segment the image. You will have to play around with the settings on a large set of pre-classified (adult or not) images and see how the values behave to get a satisfactory level of detection.
EDIT: Here's some code that should do a simple count (I haven't tested it; it's a quick mashup of some code from here and RGB-to-HSL code from here):
// Note: uses raw pointers, so this must be compiled in an unsafe context.
Bitmap bmp = new Bitmap(_image);
BitmapData bData = bmp.LockBits(new Rectangle(0, 0, _image.Width, _image.Height), ImageLockMode.ReadWrite, bmp.PixelFormat);
byte bitsPerPixel = GetBitsPerPixel(bData.PixelFormat);
byte* scan0 = (byte*)bData.Scan0.ToPointer();
int count = 0;
for (int i = 0; i < bData.Height; ++i)
{
    for (int j = 0; j < bData.Width; ++j)
    {
        byte* data = scan0 + i * bData.Stride + j * bitsPerPixel / 8;
        byte r = data[2];
        byte g = data[1];
        byte b = data[0];
        byte max = Math.Max(r, Math.Max(g, b));
        byte min = Math.Min(r, Math.Min(g, b));
        // Hue in degrees (0-360); saturation and value are ignored here.
        int h = 0;
        if (max != min)
        {
            float delta = max - min;
            if (max == r)
                h = (int)(60 * ((g - b) / delta)) % 360;
            else if (max == g)
                h = (int)(60 * ((b - r) / delta)) + 120;
            else
                h = (int)(60 * ((r - g) / delta)) + 240;
            if (h < 0)
                h += 360;
        }
        // Count pixels whose hue falls inside the "skin" range.
        if (h > _lowerThresh && h < _upperThresh)
            count++;
    }
}
bmp.UnlockBits(bData);
Of course, this will fail for the first user who posts a close-up of someone's face (or hand, or foot, or whatnot). Ultimately, all these forms of automated censorship will fail until there's a real paradigm-shift in the way computers do object recognition.
I'm not saying that you shouldn't attempt it nonetheless; I just want to point out these problems. Do not expect a perfect (or even good) solution. It doesn't exist.
I doubt that there exists any off-the-shelf software that can determine if the user uploads a naughty picture. Your best bet is to let users flag images as 'Adults Only' with a button next to the picture. (Clarification: I mean users other than the one who uploaded the picture--similar to how posts can be marked offensive here on StackOverflow.)
Also, consider this review of an attempt to do the same thing in a dedicated product: http://www.dansdata.com/pornsweeper.htm.
Link stolen from today's StackOverflow podcast, of course :).
We can't even write filters that detect dirty words accurately in blog posts, and your boss is asking for a porno detector? CLBUTTIC!
I would say your answer lies in crowdsourcing the task. This almost always works and tends to scale very well.
It doesn't have to involve making some users into "admins" and coming up with different permissions - it can be as simple as to enable an "inappropriate" link near each image and keeping a count.
See the seminal paper "Finding Naked People" by Fleck/Forsyth published in ECCV. (Advanced).
http://www.cs.hmc.edu/~fleck/naked.html
Interesting question from a theoretical/algorithmic standpoint. One approach to the problem would be to flag images that contain large skin-colored regions (as explained by Trull).
However, the amount of skin shown is not the determinant of an offensive image; it's rather the location of the skin shown. Perhaps you can use face detection (search for algorithms) to refine the results -- determine how large the skin regions are relative to the face, and whether they belong to the face (and perhaps how far below it they are).
I know either Flickr or Picasa has implemented this. I believe the routine was called FleshFinder.
A tip on the architecture of doing this:
Run this as a Windows service separate from the ASP.NET pipeline; instead of analyzing images in real time, create a queue of newly uploaded images for the service to work through.
You can use the normal System.Drawing stuff if you want, but if you really need to process a lot of images, it would be better to use native code and a high-performance graphics library and P/Invoke the routine from your service.
As resources are available, process images in the background and flag suspicious ones for editor review. This should prune down the number of images to review significantly, while not annoying people who upload pictures of skin-colored houses.
I would approach the problem from a statistical standpoint. Get a bunch of pictures that you consider safe, and a bunch that you don't (that will make for a fun day of research), and see what they have in common. Analyze them all for color range and saturation to see if you can pick out characteristics that all of the naughty photos, and few of the safe ones have.
Perhaps the Porn Breath Test would be helpful - as reported on Slashdot.
Rigan Ap-apid presented a paper at WorldComp '08 on just this problem space. The paper is allegedly here, but the server was timing out for me. I attended the presentation of the paper and he covered comparable systems and their effectiveness as well as his own approach. You might contact him directly.
I'm afraid I can't help point you in the right direction, but I do remember reading about this being done before. It was in the context of people complaining about baby pictures being caught and flagged mistakenly. If nothing else, I can give you the hope that you don't have to invent the wheel all by yourself... Someone else has been down this road!
CrowdSifter by Dolores Labs might do the trick for you. I read their blog all the time as they seem to love statistics and crowdsourcing and like to talk about it. They use amazon's mechanical turk for a lot of their processing and know how to process the results to get the right answers out of things. Check out their blog at the very least to see some cool statistical experiments.
As mentioned above by Bill (and Craig's google quote) statistical methods can be highly effective.
Two approaches you might want to look into are:
Neural Networks
Multi Variate Analysis (MVA)
The MVA approach would be to get a "representative sample" of acceptable pictures and of unacceptable pictures. The X data would be an array of bytes from each picture, the Y would be assigned by you as a 1 for unacceptable and a 0 for acceptable. Create a PLS model using this data. Run new data against the model and see how well it predicts the Y.
Rather than this binary approach you could have multiple Y's (e.g. 0=acceptable, 1=swimsuit/underwear, 2=pornographic)
To build the model you can look at open-source software, or there are a number of commercial packages available (although they are typically not cheap).
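A rough sketch of that PLS idea, assuming scikit-learn as the open-source option; the data, labels, image size, and component count here are all placeholder assumptions, and in practice X would come from your own hand-labelled photo set.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# X: each row is the flattened pixel bytes of one (resized) picture.
# y: 0 = acceptable, 1 = unacceptable, assigned by hand.
X = np.random.randint(0, 256, size=(200, 64 * 64 * 3)).astype(float)  # placeholder data
y = np.random.randint(0, 2, size=200).astype(float)                   # placeholder labels

model = PLSRegression(n_components=10)
model.fit(X, y)

# Score new pictures; anything above a chosen threshold gets flagged for editor review.
scores = model.predict(X[:5]).ravel()
flagged = scores > 0.5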
Because even the best statistical approaches are not perfect, it would probably also be a good idea to include user feedback.
Good luck (and worst case you get to spend time collecting naughty pictures as an approved and paid activity!)
