I can no longer find any trace of copyTexImage2D in the WebGL2 specification: https://www.khronos.org/registry/webgl/specs/latest/2.0/
A few months ago I asked how to copy a float texture. In WebGL 1.0 this was not possible with copyTexImage2D (the float type is not supported).
So I made the texture copy by building a simple shader instead.
I imagined that the restriction on the float32 type had been lifted in WebGL2. But I cannot find a single occurrence of the word "copyTexImage2D" in the WebGL2 specification.
How does this work? Does the WebGL2 specification only list the novelties and new function overloads since WebGL1?
In short, is there a more efficient way to copy a texture in WebGL2?
(In my slow, very slow, progress through WebGL2 I have not yet looked at the interesting new transform feedback feature.)
WebGL2's spec just adds to WebGL1. From the WebGL2 spec, right near the beginning:
This document should be read as an extension to the WebGL 1.0 specification. It will only describe the differences from 1.0.
Similarly, it also says:
The remaining sections of this document are intended to be read in conjunction with the OpenGL ES 3.0 specification (3.0.4 at the time of this writing, available from the Khronos OpenGL ES API Registry). Unless otherwise specified, the behavior of each method is defined by the OpenGL ES 3.0 specification.
So, copyTexImage2D is still there.
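For example, a minimal sketch (assuming a WebGL2 context gl, a source framebuffer srcFB, and a destination texture dstTex; names and sizes are illustrative):
// copy the current read framebuffer into level 0 of dstTex
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, srcFB);
gl.bindTexture(gl.TEXTURE_2D, dstTex);
gl.copyTexImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, 0, 0, width, height, 0);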
Your blitFramebuffer solution works, though.
OK, I found a solution: blitFramebuffer.
Let texture1 be the texture we want to copy into texture2. We already have two framebuffers, copieFB and FBorig:
copieFB has a color attachment to texture2,
FBorig has a color attachment to texture1.
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, copieFB);
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, FBorig);
gl.readBuffer(gl.COLOR_ATTACHMENT0);
gl.blitFramebuffer(0, 0, PVS, PVS, 0, 0, PVS, PVS, gl.COLOR_BUFFER_BIT, gl.NEAREST);
Old solution:
gl.bindFramebuffer(gl.FRAMEBUFFER, copieFB);
gl.viewport(0, 0, PVS, PVS);
gl.useProgram(copieShader);
gl.uniform1i(copieShader.FBorig, TEXTURE1);
gl.drawArrays(gl.POINTS, 0, NBRE);
The gain is a few FPS.
copyTex[Sub]Image2D works with floats in WebGL2 with the EXT_color_buffer_float extension.
I'll note that this also works in WebGL1 with the extensions:
OES_texture_half_float and EXT_color_buffer_half_float[1] for float16s
OES_texture_float and WEBGL_color_buffer_float[1] for float32s
Note the sometimes-confusing differences:
WEBGL_color_buffer_float is for WebGL1, and enables only RGBA32F (RGBA/FLOAT for textures)
EXT_color_buffer_half_float is for WebGL1, and enables only RGBA16F (RGBA/HALF_FLOAT_OES for textures)
EXT_color_buffer_float is for WebGL2, and enables R/RG/RGBA 16F and 32F, as well as R11F_G11F_B10F
(see the WebGL Extension Registry for more info on extensions)
blitFramebuffer also works in WebGL2, though you'll need EXT_color_buffer_float to allow float framebuffers to be complete.
[1]: EXT_color_buffer_half_float and WEBGL_color_buffer_float are not yet offered in Chrome, though enabling OES_texture_[half_]float might be enough. On Chrome, verify on each machine that checkFramebufferStatus returns FRAMEBUFFER_COMPLETE.
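A hedged sketch of the WebGL2 float path described above (texture size and names are illustrative):
// render-to-float in WebGL2 requires EXT_color_buffer_float
const ext = gl.getExtension('EXT_color_buffer_float');
if (!ext) { console.warn('no float render targets on this machine'); }
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA32F, 256, 256);
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
// per the footnote above: verify completeness on each machine
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
  console.warn('float framebuffer incomplete');
}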
In Qt3D certain properties of rendered objects are not simply set on the renderer; they are added to the renderPasses, either globally (per view) or locally (on the material of a rendered object) - or at least that is my understanding. (I'm using PySide2, but the code is almost the same in C++.)
For example, when adding a geometry renderer and using the point primitive type (Qt3DRender.QGeometryRenderer.Points) instead of rendering triangle faces, it displays the points of the geometry.
Here is an example figure with the default type.
The same only showing the points (renderer.setPrimitiveType(Qt3DRender.QGeometryRenderer.Points))
Hard to guess, but here the point size has already been changed, using the following code:
material = Qt3DExtras.QPhongMaterial(e)
for t in material.effect().techniques():
    for rp in t.renderPasses():
        pointSize = Qt3DRender.QPointSize(rp)
        pointSize.setSizeMode(Qt3DRender.QPointSize.SizeMode.Fixed)
        pointSize.setValue(5.0)
        rp.addRenderState(pointSize)
According to the documentation, the same mechanism can be used to change the line width when rendering the object with Lines (LineStrip) as primitive type. Adding
lineWidth = Qt3DRender.QLineWidth(rp)  # inside the same renderPasses() loop as above
lineWidth.setValue(5.0)
lineWidth.setSmooth(True)
rp.addRenderState(lineWidth)
does not change the line-width.
Why? Where do I need to add QLineWidth? Is it the material I chose which ignores the QLineWidth-state?
I'm fighting with similar problems at the moment. I tried to reproduce the behaviour with the Qt3D line width test. When setting the format version to 4.6 with CoreProfile, the maximum line width seems to be 1 (or equivalently width=3 as displayed by the line test).
It might be possible that this is the maximum supported range.
See:
https://www.khronos.org/registry/OpenGL-Refpages/gl4/html/glLineWidth.xhtml
opengl glLineWidth() doesn't change size of lines
Note: I deliberately chose version 4.6 as that is the OpenGL version supported in my environment.
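If you want to check the limit in your own environment, the supported ranges can be queried from a current OpenGL context; a minimal sketch (plain C, illustrative):
#include <stdio.h>
#include <GL/gl.h>

/* Query the supported line-width ranges; call with a current GL context.
   On a core profile, both ranges are typically [1, 1]. */
void printLineWidthRanges(void) {
    GLfloat aliased[2], smooth[2];
    glGetFloatv(GL_ALIASED_LINE_WIDTH_RANGE, aliased);
    glGetFloatv(GL_SMOOTH_LINE_WIDTH_RANGE, smooth);
    printf("aliased: %g..%g, smooth: %g..%g\n",
           aliased[0], aliased[1], smooth[0], smooth[1]);
}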
I ran into the same issue. It appears the problem is caused by Qt3DExtras::Qt3DWindow, which constructs a QSurfaceFormat with an OpenGL core profile. Line widths greater than 1 are not supported by glLineWidth in the core profile.
Unfortunately there is no way to pass a QSurfaceFormat to Qt3DWindow. Setting a new format after the window is created also does not work.
The only way around this is to write your own window class with a QSurfaceFormat in compatibility mode. For example (inside your QWindow subclass's constructor):
setSurfaceType(QSurface::OpenGLSurface);

QSurfaceFormat format = QSurfaceFormat::defaultFormat();
format.setVersion(4, 3);
format.setProfile(QSurfaceFormat::CompatibilityProfile); // wide lines require the compatibility profile
format.setDepthBufferSize(24);
format.setSamples(4);
format.setStencilBufferSize(8);
setFormat(format);
QSurfaceFormat::setDefaultFormat(format);
Fortunately Qt3DExtras::Qt3DWindow does not actually contain a lot of functionality, so you can easily write a similar class with the QSurfaceFormat changes mentioned above.
You can find the original source here for reference:
https://code.woboq.org/qt5/qt3d/src/extras/defaults/qt3dwindow.cpp.html
So far, using Wolfram System Modeler 4.3 and 5.1, the following minimal example would compile without errors:
model UnitErrorModel
  MyComponent c(hasUnit = "myUnit");

  block MyComponent
    parameter String hasUnit = "1";
    output Real y(unit = hasUnit);
  equation
    y = 10;
  end MyComponent;
end UnitErrorModel;
But with the new release of WSM 12.0 (the jump in version is due to an alignment with the current release of Wolfram's flagship Mathematica) I am getting an error message:
Internal error: Codegen.getValueString: Non-constant expression:c.hasUnit
(Note: The error is given by WSMLink`WSMSimulate in Mathematica 12.0, which runs System Modeler 12.0 internally; here I am asking for the "InternalValues" property of the above model, since I have not installed WSM 12.0 right now.)
Trying to simulate the above model in OpenModelica [OMEdit v. 1.13.2 (64-bit)] reveals:
[SimCodeUtil.mo: 8492:9-8492:218]: Internal error Unexpected expression (should have been handled earlier, probably in the front-end. Unit/displayUnit expression is not a string literal: c.hasUnit
So it seems that I cannot use a variable with parameter variability to set the unit attribute? Why is this? Shouldn't it suffice that the compiler hard-wires the unit when compiling for runtime (after all, the given model runs without any error in WSM 4.3 and 5.1)?
EDIT: From the answer to an older question of mine I had believed that at least final parameters might be used to set the unit attribute. Making the modification final (e.g. c(final hasUnit = "myUnit")) does not resolve the issue.
I have been given feedback on Wolfram Community by someone from Wolfram MathCore regarding this issue:
You are correct in that it's not in violation with the specification,
although making it a constant makes more sense since you would
invalidate all your static unit checking if you are allowed to change
the unit after building the simulation. We filed an issue on the
specification regarding this (Modelica Specification Issue # 2362).
So, MathCore is a bit ahead of the game in proposing a Modelica specification change that they have already implemented. ;-)
Note: in Wolfram System Modeler (12.0), using the annotation Evaluate = true will not cure the problem (cf. the comment above by @matth).
As a workaround, variables used to set the unit attribute should have constant variability, but they can nevertheless be included in user dialogs and changed interactively using annotation(Dialog(group = "GroupName")).
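A minimal sketch of that workaround (the group name is illustrative):
block MyComponent
  // constant variability keeps the unit fixed at translation time,
  // while the Dialog annotation still exposes it in the parameter dialog
  constant String hasUnit = "1" annotation(Dialog(group = "Units"));
  output Real y(unit = hasUnit);
equation
  y = 10;
end MyComponent;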
I am trying to use a tree-based map in Coq, specifically Coq.FSets.FMapAVL.
I found this 4-year-old question: Finite map example
Looking at the standard library documentation referenced in the comments, I see this note:
NB: This file is here only for compatibility with earlier version of FSets and FMap. Please use Structures/Orders.v directly now.
What does this mean? When I google "Structures.v" or "Orders.v" I always end up at other documentation pages with related deprecation warnings.
What is the proper, non-deprecated way to use an FMap in Coq 8.6?
Since the OrderedTypeEx module is deprecated, we won't use Nat_as_OT defined in it.
There is Nat_as_OT in OrdersEx (just a synonym for PeanoNat.Nat), which is useful for our purposes.
Unfortunately, the following code
Require Import Coq.Structures.OrdersEx.
Module M := FMapAVL.Make Nat_as_OT.
won't work, because the signatures OrderedType.OrderedType (currently used by FMapAVL) and Orders.OrderedType are not compatible.
However, the OrdersAlt module contains functors which allow building a module of one type from another. In this case, we are interested in Backport_OT. The following code shows how to use FMapAVL.Make; the rest of the code can be copied from the linked question:
From Coq Require Import
FSets.FMapAVL Structures.OrdersEx Structures.OrdersAlt.
Module backNat_as_OT := Backport_OT Nat_as_OT.
Module M := FMapAVL.Make backNat_as_OT.
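A small usage sketch of the resulting module M (illustrative values):
Require Import String.

Definition m : M.t string := M.add 1 "one"%string (M.empty string).

Compute M.find 1 m.  (* = Some "one" *)
Compute M.find 2 m.  (* = None *)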
The situation with FMapAVL was explained by Pierre Letouzey in this Coq-Club post:
the transition between old-style OrderedType and the new one isn't
finished yet (some work remain to be done on FMaps for instance),
and we cannot simply remove the old-style OrderedType.
In an effort to better understand RSA I've been fooling around with the source code for GnuPG 1.4, specifically the RSA implementation in the rsa.c file. As the title says, I can't figure out where the padding is happening.
So typically in RSA, padding is applied right before encryption and removed during decryption. Encryption starts around line 409, where we see
int
rsa_encrypt( int algo, MPI *resarr, MPI data, MPI *pkey )
{
    RSA_public_key pk;

    if( algo != 1 && algo != 2 )
        return G10ERR_PUBKEY_ALGO;

    pk.n = pkey[0];
    pk.e = pkey[1];
    resarr[0] = mpi_alloc( mpi_get_nlimbs( pk.n ) );
    public( resarr[0], data, &pk );
    return 0;
}
That seems easy: it hands the data to the "public" function higher up at line 220. public is responsible for the important calculation, c = m^e mod n. That all looks like:
static void
public(MPI output, MPI input, RSA_public_key *pkey )
{
    if( output == input ) { /* powm doesn't like output and input the same */
        MPI x = mpi_alloc( mpi_get_nlimbs(input)*2 );

        mpi_powm( x, input, pkey->e, pkey->n );
        mpi_set(output, x);
        mpi_free(x);
    }
    else
        mpi_powm( output, input, pkey->e, pkey->n );
}
Wait a second... now it looks like public is passing the job of that calculation off to mpi_powm(), located in the mpi-pow.c file. I'll spare you the details, but that function gets really long.
Somewhere in all of this some sort of PKCS#1 padding and unpadding (or something similar) is happening but I can't figure out where for the life of me. Can anyone help me see where the padding happens?
In an effort to better understand RSA I've been fooling around with the source code for GnuPG 1.4, specifically the RSA implementation in the rsa.c file.
Since you’re looking at the older (< 2.0) stuff anyway, and since it’s only for learning purposes, I would rather advise you to check out “ye olde rsaref.c from gnupg.org” where the padding is implemented in a pretty obvious way.
… some sort of PKCS#1…
To be exact, GnuPG uses PKCS #1 v1.5 (specified in RFC 4880).
Can anyone help me see where the padding happens?
Hmmm, let's see if I can wrap that up somewhat logically. GnuPG pads according to PKCS #1 v1.5, so it just adds random padding to satisfy the length requirements.
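For illustration only, this is what a PKCS #1 v1.5 encryption block looks like; a simplified sketch, not GnuPG's actual code (a real implementation must use a CSPRNG instead of rand()):
/* EB = 0x00 || 0x02 || PS (>= 8 nonzero random bytes) || 0x00 || M */
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

int pkcs1_v15_pad(unsigned char *eb, size_t k,        /* k = modulus length in bytes */
                  const unsigned char *msg, size_t mlen)
{
    size_t i, pslen;
    if (mlen > k - 11)                /* message too long for this modulus */
        return -1;
    pslen = k - mlen - 3;
    eb[0] = 0x00;
    eb[1] = 0x02;                     /* block type 2: encryption */
    for (i = 0; i < pslen; i++) {
        unsigned char b;
        do { b = (unsigned char)rand(); } while (b == 0);   /* placeholder RNG only */
        eb[2 + i] = b;
    }
    eb[2 + pslen] = 0x00;             /* zero separator before the message */
    memcpy(eb + 3 + pslen, msg, mlen);
    return 0;
}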
If you take a look at the cipher/pubkey.c file (which includes the rsa.h file at its head), you'll notice a pubkey_table_s struct which defines a list of elements that define the key. For padding purposes, random bytes are appended to that list (better: after that struct). It's done that way because those random bytes can easily be stripped by looking for the end of the list. Keeping a long story short, that's where random.c probably starts to make a bit more sense to you. Now, all that stuff (and a whole lot more) is compiled into a lib called libcipher... which in itself is compiled to be used by functions that add the padding and handle the RSA stuff the way you expected it. In the end, the compiled executables use the functions libcipher provides to take care of the padding, depending on the individual need for padding.
So what you currently expect to find in 1 or 2, maybe 3 files is actually spread out across more than half a dozen files... which I regard as not the best basis for your learning efforts. As said, for reference purposes I'd go for the old rsaref.c they once started out with.
Not sure if this actually provides all the details you wanted to get, but it should give you a first good heads-up… hope it helps.
GPG 1.4 doesn't use any padding at all. It encrypts the raw session key.
I'm making a Turbo Pascal 7.0 program for my class, and it has to run in graphics mode.
A message pops up:
BGI Error: Graphics not initialized (use InitGraph).
I'm already using InitGraph and graph.tpu, and I specified the path as "C:\TP7\BGI".
My OS is Windows 7 and I'm using DOSBox 0.74. I already tried pasting all the files from the BGI folder into BIN.
What should I do?
Since DOS doesn't have system graphics drivers, the BGI files serve that role for BP7.
So in short, use a BGI suitable for your video card. The ones supplied with BP7 are very old; there are newer, VESA ones that you could try.
AFAIK a third-party BGI needs to be registered explicitly in code, though.
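For reference, a minimal hedged TP7 sketch that initializes the BGI from an explicit path and checks GraphResult (the path is illustrative):
program BGIDemo;
uses Graph;
var
  gd, gm, err: Integer;
begin
  gd := Detect;                       { autodetect the graphics driver }
  InitGraph(gd, gm, 'C:\TP7\BGI');    { path must point at the .BGI files }
  err := GraphResult;
  if err <> grOk then
  begin
    WriteLn('Graphics error: ', GraphErrorMsg(err));
    Halt(1);
  end;
  Line(0, 0, GetMaxX, GetMaxY);       { draw something to prove it works }
  ReadLn;
  CloseGraph;
end.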
At first I had this "missing Graph.tpu" issue ... and later the "use InitGraph" issue too.
After hours of trying (and reading some impolite comments on the internet) I finally got Turbo Pascal 7 running successfully (on Windows 10, x64). In summary, I want to share "some secrets":
install the "TP(WDB)-7.3.5-Setup.msi" (it comes from clever people in Vietnam)
make sure that there is the CORRECT PATH to the "BGI" directory in your program code. For example:
driver := Detect;
InitGraph (driver, modus, 'c:\TPWDB\BGI');
(By the way: this is ALL there is to do with InitGraph.)
make sure that in TP7 under "Options" --> "Directories" the CORRECT PATHS are set, both to "C:\TPWDB\UNITS" and to your actual working dir, e.g. "C:\TPWDB\myPrograms"
THAT's IT.
Notes: the "Graph.TPU" is (usually) already in "UNITS" (together with "Graph3.tpu", by the way).
Hassling with old drivers isn't even needed... :)
Just the correct paths... :)
Hope that helps ...