I am trying to BitBlt from an HBITMAP to a GDI+ bitmap. I tried this, but nothing happens:
Bitmap Buffer = new Bitmap(608, 392);
Graphics BufferGraphics = Graphics.FromImage(Buffer);
IntPtr hBufferDC = BufferGraphics.GetHdc();
...
BitBlt(hBufferDC, x, y, width, height, hInputDC, 0, 0, SRCCOPY);
EDIT: Apparently the hDC doesn't work if I acquire it and then much later use it with BitBlt. I needed to make sure the hDC was still valid. This is the solution:
Bitmap Buffer = new Bitmap(608, 392);
Graphics BufferGraphics = Graphics.FromImage(Buffer);
...
IntPtr hBufferDC = BufferGraphics.GetHdc();
BitBlt(hBufferDC, x, y, width, height, hInputDC, 0, 0, SRCCOPY);
BufferGraphics.ReleaseHdc(hBufferDC);
Does anyone know why this change is necessary? Why might it not work to use an hDC that was obtained earlier, as in the first example?
Check the sample at the end of this page on pinvoke.net. The additional calls to CreateCompatibleDC and SelectObject might make your sample work.
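A rough sketch of that approach (assuming the usual P/Invoke declarations for CreateCompatibleDC, SelectObject, DeleteDC, and BitBlt; hInputBitmap is a placeholder for your HBITMAP):
IntPtr hBufferDC = BufferGraphics.GetHdc();
// Select the source HBITMAP into a memory DC compatible with the target DC.
IntPtr hMemDC = CreateCompatibleDC(hBufferDC);
IntPtr hOld = SelectObject(hMemDC, hInputBitmap);
BitBlt(hBufferDC, x, y, width, height, hMemDC, 0, 0, SRCCOPY);
// Restore the old bitmap, delete the memory DC, and release the GDI+ DC.
SelectObject(hMemDC, hOld);
DeleteDC(hMemDC);
BufferGraphics.ReleaseHdc(hBufferDC);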
Alternatively, you could consider using Graphics.DrawImageUnscaled, which would let you keep your code entirely on the .NET side while still offering pretty good performance.
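For instance, a minimal sketch of the managed route (assuming the source HBITMAP can be wrapped via Image.FromHbitmap, which copies the pixel data):
using (Bitmap source = Image.FromHbitmap(hInputBitmap))
{
    // Draws at (x, y) with no scaling, entirely through GDI+.
    BufferGraphics.DrawImageUnscaled(source, x, y);
}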
Update (due to the updated question)
I don't know exactly why the hDC becomes invalid after a while, but according to MSDN you should call GetHdc and ReleaseHdc in pairs and group your GDI calls between them: "Calls to the GetHdc and ReleaseHdc methods must appear in pairs. During the scope of a GetHdc and ReleaseHdc method pair, you usually make only calls to GDI functions."
So according to the documentation, the approach in your second sample is the way to go, and you shouldn't cache and reuse the value returned by GetHdc.
I am trying to initialize a two-dimensional array and then fill it up gradually. However, whenever I try to initialize it, I get an out-of-memory error.
D = zeros(1000000, 1000000);
Is there any way to resolve this error or work around it?
The problem is that a dense array of this size would take almost 8 TB of RAM (10^6 × 10^6 elements × 8 bytes each). If you want an array this big where almost all of the elements are 0, you can use spzeros(1000000, 1000000) (defined in the SparseArrays standard library).
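A small sketch of the sparse approach (the indices and value here are made up):
using SparseArrays
D = spzeros(1000000, 1000000)  # stores only the nonzero entries
D[42, 1337] = 3.14             # fill it up gradually; memory grows with the number of nonzeros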
Unlike OpenGL ES 3, WebGL 2 has no gl.mapBufferRange (though gl.bufferSubData does exist). What is the efficient way to update uniform buffer data in WebGL 2?
For example, a PerDraw uniform block:
uniform PerDraw
{
    mat4 P;
    mat4 MV;
    mat3 MNormal;
} u_perDraw;
gl.bufferSubData exists, so it would seem you create a buffer and a parallel typed array, update the typed array, and call gl.bufferSubData to copy it into the buffer to do the update, then gl.bindBufferRange to use it.
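Roughly like this (an untested sketch; it assumes std140 layout, where each mat3 column is padded to a vec4, and that blockBinding was assigned earlier with gl.uniformBlockBinding; projectionMatrix and modelViewMatrix are placeholders):
// 16 floats for P, 16 for MV, 12 for MNormal (3 columns padded to vec4)
const perDrawData = new Float32Array(16 + 16 + 12);
const perDrawBuffer = gl.createBuffer();
gl.bindBuffer(gl.UNIFORM_BUFFER, perDrawBuffer);
gl.bufferData(gl.UNIFORM_BUFFER, perDrawData.byteLength, gl.DYNAMIC_DRAW);

// each frame: write into the typed array, then upload and bind
perDrawData.set(projectionMatrix, 0);   // P
perDrawData.set(modelViewMatrix, 16);   // MV
gl.bindBuffer(gl.UNIFORM_BUFFER, perDrawBuffer);
gl.bufferSubData(gl.UNIFORM_BUFFER, 0, perDrawData);
gl.bindBufferRange(gl.UNIFORM_BUFFER, blockBinding, perDrawBuffer, 0, perDrawData.byteLength);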
That's probably still very fast. First of all, value manipulation stays in JavaScript, so there's less overhead from calling into WebGL. If you have 10 uniforms to update, it means you're making 2 calls into WebGL instead of 10.
In TWGL.js I generate ArrayBufferViews for all uniforms into a single typed array, so for example, given your uniform block above, you can do
ubo.MV[12] = tx;
ubo.MV[13] = ty;
ubo.MV[14] = tz;
Or, as another example, if you have a math library that takes an array/typed array as a destination parameter, you can do things like
var dest = ubo.P;
m4.perspective(fov, aspect, zNear, zFar, dest);
The one issue I have is dealing with uniform optimization. If I edit a shader (say I'm debugging and I just insert output = vec4(1,0,0,1); return; at the top of a fragment shader) and some uniform block gets optimized out, the code is going to break. I don't know what the standard way of dealing with this is in C/C++ projects. I guess in C++ you'd declare a structure
struct PerDraw {
    float P[16];
    float MV[16];
    float MNormal[9];
};
So the problem kind of goes away. In twgl.js I'm effectively generating that structure at runtime, which means that if your code expects it to exist but it doesn't get generated because it was optimized out, the code breaks.
In twgl I made a function that copies from a JavaScript object to the typed array, so it can skip any optimized-out uniform blocks, which unfortunately adds some overhead. You're free to modify the typed-array views directly and deal with the breakage when debugging, or to use the structured copy function (twgl.setBlockUniforms).
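The structured path looks roughly like this (a sketch using twgl's block helpers; programInfo comes from twgl.createProgramInfo, and the matrix variables are placeholders):
const ubo = twgl.createUniformBlockInfo(gl, programInfo, "PerDraw");
// copies only the uniforms that survived in the compiled program
twgl.setBlockUniforms(ubo, {
  P: projectionMatrix,
  MV: modelViewMatrix,
});
twgl.setUniformBlock(gl, programInfo, ubo);  // uploads the data and binds the block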
Maybe I should let you specify a structure from JavaScript in twgl and generate the views from it, leaving it up to you to make it match the uniform block. That would make it more like C++, remove one copy, and be easier to deal with when optimization removes blocks while debugging.
I'm experimenting with image resizing in ASP.NET. Actual resizing code aside, I am wondering why there is such a big difference between Bitmap's Save overloads.
method 1
ImageCodecInfo jpgEncoder =
    ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
Encoder encoder = Encoder.Quality;
EncoderParameters encoderParameters = new EncoderParameters(1);
encoderParameters.Param[0] = new EncoderParameter(encoder, (long)quality);
bitmap.Save(_current_context.Response.OutputStream, jpgEncoder, encoderParameters);
method 2
bitmap.Save(_current_context.Response.OutputStream, ImageFormat.Jpeg);
So Method 1, at 100 quality, outputs this particular JPEG image at about 250 KB. At 90 quality, it drops to about 100 KB.
Method 2, however, drops the image to about 60 KB, which is a huge difference, and with no visible difference in quality either.
I can't seem to find anywhere why the difference is so big; MSDN has zero details on these two overloads.
Any insight is appreciated. Thanks
Looking at the ImageCodecInfo / Encoder objects, they don't seem to provide a way to extract the settings. I would assume that by default it's setting the quality to 100 on the save.
Without looking more into the Windows Imaging internals it's really hard to say.
You could try your code with the default save (Method 2) and with Method 1 at quality 100 and see if the outputs are the same; it's most likely that way.
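For example, something like this quick check (a sketch; bitmap, jpgEncoder, and encoderParameters are the objects from the question):
using (var ms1 = new MemoryStream())
using (var ms2 = new MemoryStream())
{
    bitmap.Save(ms1, jpgEncoder, encoderParameters); // Method 1 at quality 100
    bitmap.Save(ms2, ImageFormat.Jpeg);              // Method 2, codec default
    Console.WriteLine("Method 1: {0} bytes, Method 2: {1} bytes", ms1.Length, ms2.Length);
}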
http://msdn.microsoft.com/en-us/library/system.drawing.imaging.encoder.quality.aspx#Y800
I want to tween the alpha of a picture in a Flex mobile app. I've tried TweenLite, but it runs too fast. Is there an optimized way of doing it, or a way of linking in Objective-C code?
Thanks
With the TweenLite library you can control the length of the tween, using either a number of frames or a time length.
In your tweenProperties object, use the useFrames property to specify whether the tween's time span is measured in frames or seconds.
The second parameter of the 'to' or 'from' tweens specifies the length.
So, to create an alpha tween that runs over 10 frames, it should look like this:
tweenProperties = new Object();
tweenProperties.alpha = yourNewAlphaHere;
tweenProperties.useFrames = true;
TweenLite.to(myObjectToTween, 10, tweenProperties);
To create an alpha tween that runs over 10 seconds, it would look like this:
tweenProperties = new Object();
tweenProperties.alpha = yourNewAlphaHere;
tweenProperties.useFrames = false;
TweenLite.to(myObjectToTween, 10, tweenProperties);
Depending on what you're doing, you may want to consider TweenNano for mobile apps, since it has the smallest footprint.
I have 3 Bitmap pointers:
Bitmap* totalCanvas = new Bitmap(400, 300, PixelFormat32bppARGB); // final canvas
Bitmap* bottomLayer = new Bitmap(400, 300, PixelFormat32bppARGB); // background
Bitmap* topLayer = new Bitmap(XXX); // always changed.
I draw a complex background on bottomLayer. I don't want to redraw the complex background on totalCanvas again and again, so I store it in bottomLayer.
topLayer changes frequently.
I want to draw bottomLayer onto totalCanvas. What is the fastest way?
Graphics canvas(totalCanvas);
canvas.DrawImage(bottomLayer, 0, 0); // step 1
canvas.DrawImage(topLayer, XXXXX);   // step 2
I want step 1 to be as fast as possible. Can anyone give me a sample?
Thanks very much!
Thanks for unwind's answer. I wrote the following code:
Graphics canvas(totalCanvas);
for (int i = 0; i < 100; ++i)
{
    canvas.DrawImage(bottomLayer, 0, 0);
}
This part takes 968 ms... it is too slow...
Almost all GDI+ operations should be implemented by the driver to run as much as possible on the GPU. This should mean that a simple 2D bitmap copy operation is going to be "fast enough", even for quite large values of "enough".
My recommendation is the obvious one: don't sweat it by spending time hunting for the "fastest" way of doing this. You have formulated the problem very clearly, so just implement it that clearly, doing it as you've outlined in the question. Then you can of course benchmark it and decide whether to continue the hunt.
A simple illustration:
A 32 bpp 400x300 bitmap is about 469 KB in size. According to this handy table, an Nvidia GeForce 4 MX from 2002 has a theoretical memory bandwidth of 2.6 GB/s. Assuming the copy is done in pure "overwrite" mode, i.e. no blending of the existing surface (which sounds right, as your copy is basically a way of "clearing" the frame to the copy's source data), and an overhead factor of four just to be safe, we get:
(2.6 * 2^30 bytes/s) / (4 * 469 * 2^10 bytes/copy) ≈ 1453 copies/s
This means your copy should run at roughly 1453 FPS, which I happily assume to be "good enough".
If at all possible (and it looks like it from your code), using DrawImageUnscaled will be significantly faster than DrawImage. Or, if you are using the same image over and over again, create a TextureBrush and use that.
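The TextureBrush variant might look roughly like this in GDI+ C++ (a sketch; build the brush once and reuse it every frame):
// cached once, outside the per-frame code
TextureBrush backgroundBrush(bottomLayer);
// per frame: fill totalCanvas with the cached background
Graphics canvas(totalCanvas);
canvas.FillRectangle(&backgroundBrush, 0, 0, 400, 300);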
The problem with GDI+ is that, for the most part, it is unaccelerated. To get lightning-fast drawing speeds you really need GDI and BitBlt, which is a serious pain in the butt to use with GDI+, especially if you are in managed code (it's hard to tell if you are using managed C++ or straight C++).
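If you do drop down to GDI, the interop looks roughly like this (an untested sketch; note that Bitmap::GetHBITMAP itself copies, so convert bottomLayer once and cache the HBITMAP outside any per-frame loop):
Graphics canvas(totalCanvas);
HDC hdcDest = canvas.GetHDC();
HDC hdcSrc = CreateCompatibleDC(hdcDest);
HBITMAP hBottom = NULL;
bottomLayer->GetHBITMAP(Color(0, 0, 0, 0), &hBottom); // GDI copy of the background
HGDIOBJ hOld = SelectObject(hdcSrc, hBottom);
BitBlt(hdcDest, 0, 0, 400, 300, hdcSrc, 0, 0, SRCCOPY);
SelectObject(hdcSrc, hOld);
DeleteDC(hdcSrc);
DeleteObject(hBottom);
canvas.ReleaseHDC(hdcDest);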
See this post for more information about doing graphics quickly in .NET.