How to reflect information about HLSL struct members? - reflection

Using shader reflection in DirectX 11, you can get information about individual variables by calling
myVar = myCbuffer->GetVariableByName/Index
But if the variable is a struct, how do you get info about the individual struct members?
Note that I'm not talking about the effects framework, but about pure HLSL and the reflection API.

The variable's member count is stored in its type description. Use it to iterate over the members with GetMemberTypeByIndex.
Example:
ID3D11ShaderReflectionConstantBuffer* cb = reflector->GetConstantBufferByIndex( cbIndex );
if ( cb )
{
    D3D11_SHADER_BUFFER_DESC cbDesc;
    cb->GetDesc( &cbDesc );
    if ( cbDesc.Type == D3D11_CT_CBUFFER )
    {
        // Walk every variable in the constant buffer.
        for ( unsigned i = 0; i < cbDesc.Variables; ++i )
        {
            ID3D11ShaderReflectionVariable* var = cb->GetVariableByIndex( i );
            D3D11_SHADER_VARIABLE_DESC varDesc;
            var->GetDesc( &varDesc );
            ID3D11ShaderReflectionType* type = var->GetType();
            D3D11_SHADER_TYPE_DESC typeDesc;
            type->GetDesc( &typeDesc );
            // For a struct variable, typeDesc.Members holds the member count.
            for ( unsigned j = 0; j < typeDesc.Members; ++j )
            {
                ID3D11ShaderReflectionType* memberType = type->GetMemberTypeByIndex( j );
                D3D11_SHADER_TYPE_DESC memberTypeDesc;
                memberType->GetDesc( &memberTypeDesc );
            }
        }
    }
}

Use GetMemberByName ("If the effect variable is a structure, use this method to look up a member by name."). If the struct has a member "foo", then...
myCbuffer->GetVariableByName->GetMemberByName("foo")
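Note that the quoted GetMemberByName is the effects-framework method; in the plain reflection API the equivalent lookup goes through the variable's type via ID3D11ShaderReflectionType::GetMemberTypeByName. A minimal sketch, where "myStruct" and "foo" are hypothetical names standing in for your own cbuffer layout:
#include <d3d11shader.h>

// Sketch: look up one struct member by name through the reflection type.
void DescribeFoo( ID3D11ShaderReflectionConstantBuffer* myCbuffer )
{
    ID3D11ShaderReflectionVariable* var = myCbuffer->GetVariableByName( "myStruct" );
    ID3D11ShaderReflectionType* type = var->GetType();
    ID3D11ShaderReflectionType* fooType = type->GetMemberTypeByName( "foo" );

    D3D11_SHADER_TYPE_DESC fooDesc;
    if ( fooType && SUCCEEDED( fooType->GetDesc( &fooDesc ) ) )
    {
        // fooDesc.Class, fooDesc.Type and fooDesc.Offset now describe the member "foo".
    }
}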

You could use
ID3D11ShaderReflectionType::GetMemberTypeName
This function returns the name of a struct member in a cbuffer.
I ran into the same question when developing my HLSL reflection program. I tried this function myself and it returned the correct result.
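A minimal sketch of that, combined with GetMemberTypeByIndex from the answer above (it assumes var is a reflection variable whose type is a struct):
#include <cstdio>
#include <d3d11shader.h>

// Sketch: print the name, class, type and offset of every member of a struct-typed cbuffer variable.
void PrintStructMembers( ID3D11ShaderReflectionVariable* var )
{
    ID3D11ShaderReflectionType* type = var->GetType();
    D3D11_SHADER_TYPE_DESC typeDesc;
    type->GetDesc( &typeDesc );
    for ( UINT j = 0; j < typeDesc.Members; ++j )
    {
        LPCSTR memberName = type->GetMemberTypeName( j ); // e.g. "foo"
        ID3D11ShaderReflectionType* memberType = type->GetMemberTypeByIndex( j );
        D3D11_SHADER_TYPE_DESC memberDesc;
        memberType->GetDesc( &memberDesc );
        std::printf( "%s : class %d, type %d, offset %u\n",
                     memberName, (int)memberDesc.Class, (int)memberDesc.Type, memberDesc.Offset );
    }
}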

Related

Segfault in factory constructed class

I have an issue where a segfault occurs in random locations, but those locations always seem to be the same. The issue is made more confusing by the fact that the location of the segfault can change depending on whether I'm using gdb or valgrind to try to track it down. That would suggest a race condition, but I don't always see any of my destructors being called, so I'm not sure why that would be happening. I have not defined my copy constructors, which I suppose could be the problem, but I would like to understand why that may be.
The tests that I have on the midLevel class to exercise its functionality don't show this problem. Is there something flawed with my basic construction?
My use case is:
In the highest-level class:
returnObject highLevelClass::performTask( ){
    std::shared_ptr< midLevelClass > midClass;
    std::vector< someType > dataForClass;
    for ( auto it = _someIterate.begin( ); it != _someIterate.end( ); it++ ){
        ...
        buildMidClass( midClass, &dataForClass );
    }
    ...
    return returnObject;
}

returnObject highLevelClass::buildMidClass( std::shared_ptr< midLevelClass > &midClass,
                                            std::vector< someType > *dataForClass ){
    ...
    midClass = midLevelClass( _configurationInfo ).create( );
    midClass->loadData( dataForClass );
    midClass->processData( ); //SOMETIMES IT SEGFAULTS HERE DURING std::vector ALLOCATIONS
    ...
    return returnObject;
}

highLevelClass::~highLevelClass( ){
    //SOMETIMES IT SEGFAULTS HERE
    return;
}
In the mid-level class:
void midLevelClass::loadData( std::vector< someType > *data ){
    _data = data; //_data is a std::vector< someType >*
}

std::shared_ptr< midLevelClass > midLevelClass::create( configurationType &_configInfo ){
    if ( _configInfo[ "type" ] == childMidLevelClass ){
        return std::make_shared< childMidLevelClass >( _configInfo );
    }
    _error = new errorNode( "create", "The type is not defined" );
    return std::make_shared< volumeReconstructionBase >( _config, _error );
}
The answer turned out to be that another std::vector (one that wasn't being accessed by any other part of the code) was overflowing through a bad access with []. Nothing in gdb or valgrind showed that this was the problem. The lesson is probably: be careful about using [] to access a std::vector, and consider using std::vector::at( ) instead.
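To illustrate the difference (a generic sketch, not the original code): operator[] performs no bounds checking, so an out-of-range index silently corrupts memory, while at() throws std::out_of_range at the faulty call site:
#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v( 3, 0 );

    // operator[] does no bounds checking: an out-of-range index is undefined
    // behaviour and typically corrupts unrelated memory instead of failing here.
    // v[10] = 42;   // would compile and appear to "work" until something else crashes

    // at() checks the index and throws, so the overflow is caught where it happens.
    try
    {
        v.at( 10 ) = 42;
    }
    catch ( const std::out_of_range& e )
    {
        std::cout << "caught: " << e.what() << std::endl;
    }
    return 0;
}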

Can RLMResults be typecasted to NSArray?

We are trying to integrate Realm into our iOS app in an iterative manner. Currently we have a lot of variables of type NSArray which will ultimately have to be replaced by RLMResults.
But for now I was wondering if the data from the Realm db could be loaded into those variables.
Here is an example of one such function :
func preloadData() {
    if( realmEnabled )
    {
        if( self.currentLeftSideBarState == GLOBAL_CUSTOMER_STATE ) {
            self.allRelations = Relationship.allObjectsInRealm(relationshipRealm)
        } else if( self.currentLeftSideBarState == SINGLE_CUSTOMER_STATE ) {
            let rel = Relationship( customers: currentCustomerSelected! )
            if rel.realm != nil {
                if let rooms = rel.linkingObjectsOfClass( RoomObj.className(), forProperty: "relationship" ) {
                    self.allRoomsforRelationship = rooms
                }
            }
        }
    }
}
Here, allRelations is an RLMResults object while allRoomsforRelationship is an NSArray. This leads to several inconsistencies.
It would be convenient to be able to cast RLMResults to NSArray.
Since RLMResults doesn't inherit from NSArray, casting to an NSArray is dangerous -- you'd lose all type safety. What you may want to look into is whether changing those declarations to id<NSFastEnumeration> makes sense for your application, or else declaring a protocol which covers the methods common to both NSArray and RLMResults.

PCL - Global Registration with LUM

I'm working on a registration project: I have a chair with some objects rotating in front of a Kinect.
I can get a successful pairwise registration, but as expected there is some drift (result in image).
I want to use LUM in order to globally minimize the accumulated error (and then "spread" it across frames), but I end up with the frames floating in the air. (code below the image)
Is there any obvious mistake in my usage of LUM?
- I use keypoints + features; I'm not blindly feeding LUM the full point clouds.
- Why do all the examples add one-directional edges and not bi-directional ones?
PARAM_LUM_centroidDistTHRESH = 0.30;
PARAM_LUM_MaxIterations = 100;
PARAM_LUM_ConvergenceThreshold = 0.0f;
int NeighborhoodNUMB = 2;
int FrameDistLOOPCLOSURE = 5;
PARAM_CORR_REJ_InlierThreshold = 0.020;

pcl::registration::LUM<pcl::PointXYZRGBNormal> lum;
lum.setMaxIterations( PARAM_LUM_MaxIterations );
lum.setConvergenceThreshold( PARAM_LUM_ConvergenceThreshold );

QVector< pcl::PointCloud<pcl::PointXYZRGB>::Ptr > cloudVector_ORGan_P_;
for (int iii=0; iii<totalClouds; iii++)
{
    // read - iii_cloud_ORGan_P_
    // transform it with pairwise registration result
    cloudVector_ORGan_P_.append( iii_cloud_ORGan_P_ );
}

for (size_t iii=0; iii<totalClouds; iii++)
{
    pcl::compute3DCentroid( *cloudVector_ORGan_P_[iii], centrVector[iii] );

    pcl::IntegralImageNormalEstimation<pcl::PointXYZRGB,pcl::Normal> ne;
    //blah blah parameters
    //compute normals with *ne*
    //pcl::removeNaNFromPointCloud
    //pcl::removeNaNNormalsFromPointCloud

    pcl::ISSKeypoint3D< pcl::PointXYZRGBNormal, pcl::PointXYZRGBNormal> keyPointDetector;
    //blah blah parameters;
    //keyPointDetector.compute
    //then remove NAN keypoints

    pcl::SHOTColorEstimationOMP< pcl::PointXYZRGBNormal,pcl::PointXYZRGBNormal,pcl::SHOT1344 > featureDescriptor;
    //featureDescriptor.setSearchSurface( **ful_unorganized_cloud_in_here** );
    //featureDescriptor.setInputNormals( **normals_from_above____in_here** );
    //featureDescriptor.setInputCloud( **keypoints_from_above__in_here** );
    //blah blah parameters
    //featureDescriptor.compute
    //delete NAN *Feature* + corresp. *Keypoints* with *.erase*
}

for (size_t iii=0; iii<totalClouds; iii++)
{
    lum.addPointCloud( KEYptVector_UNorg_P_[iii] );
}

for (size_t iii=1; iii<totalClouds; iii++)
{
    for (size_t jjj=0; jjj<iii; jjj++)
    {
        double cloudCentrDISTANCE = ( centrVector[iii] - centrVector[jjj] ).norm();
        if ( (cloudCentrDISTANCE<PARAM_LUM_centroidDistTHRESH && qAbs(iii-jjj)<=NeighborhoodNUMB) ||
             (cloudCentrDISTANCE<PARAM_LUM_centroidDistTHRESH && qAbs(iii-jjj)> FrameDistLOOPCLOSURE) )
        {
            int sourceID;
            int targetID;
            if (qAbs(iii-jjj)<=NeighborhoodNUMB) // so that connections are e.g. 0->1, 1->2, 2->3, 3->4, 4->5, 5->0
            {                                    // not sure if it helps
                sourceID = jjj;
                targetID = iii;
            }
            else
            {
                sourceID = iii;
                targetID = jjj;
            }

            *source_cloud_KEYpt_P_ = *lum.getPointCloud(sourceID);
            *target_cloud_KEYpt_P_ = *lum.getPointCloud(targetID);
            *source_cloud_FEATures = *FEATtVector_UNorg_P_[sourceID];
            *target_cloud_FEATures = *FEATtVector_UNorg_P_[targetID];

            // KeyPoint Estimation
            pcl::registration::CorrespondenceEstimation<keyPointTYPE,keyPointTYPE> corrEst;
            corrEst.setInputSource( source_cloud_FEATures );
            corrEst.setInputTarget( target_cloud_FEATures );
            corrEst.determineCorrespondences( *corrAll );

            // KeyPoint Rejection
            pcl::registration::CorrespondenceRejectorSampleConsensus<pcl::PointXYZRGBNormal> corrRej;
            corrRej.setInputSource( source_cloud_KEYpt_P_ );
            corrRej.setInputTarget( target_cloud_KEYpt_P_ );
            corrRej.setInlierThreshold( PARAM_CORR_REJ_InlierThreshold );
            corrRej.setMaximumIterations( 10000 );
            corrRej.setRefineModel( true );
            corrRej.setInputCorrespondences( corrAll );
            corrRej.getCorrespondences( *corrFilt );

            lum.setCorrespondences( sourceID, targetID, corrFilt );
        } // if
    } // jjj
} // iii

lum.compute();
// PCLVisualizer - show this - lum.getConcatenatedCloud()
After many days of experimenting with LUM, I decided to move to another tool for graph optimization, namely g2o. You can see the result in the image; it's not perfect (there is a small translational drift in the frontal view), but it's reasonable and much better than simple pairwise incremental registration (no very apparent rotational drift).
If you are interested, I suggest downloading the github version! It's the most up-to-date, while other versions - like this - are outdated; personally I had compilation issues with them, both when compiling the library itself and when compiling my own source code.
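For anyone going the same route, here is a minimal, hypothetical pose-graph sketch with g2o (not the code used above: the frame count, poses, relative transforms and information matrices are placeholders, and the solver-construction boilerplate differs slightly between g2o versions):
#include <memory>
#include <Eigen/Geometry>
#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/block_solver.h>
#include <g2o/core/optimization_algorithm_levenberg.h>
#include <g2o/solvers/eigen/linear_solver_eigen.h>
#include <g2o/types/slam3d/vertex_se3.h>
#include <g2o/types/slam3d/edge_se3.h>

int main()
{
    // Pose graph: one SE3 vertex per frame, one SE3 edge per pairwise registration.
    g2o::SparseOptimizer optimizer;
    using LinearSolver = g2o::LinearSolverEigen<g2o::BlockSolverX::PoseMatrixType>;
    optimizer.setAlgorithm( new g2o::OptimizationAlgorithmLevenberg(
        std::make_unique<g2o::BlockSolverX>( std::make_unique<LinearSolver>() ) ) );

    const int totalClouds = 6; // hypothetical number of frames

    for ( int i = 0; i < totalClouds; ++i )
    {
        g2o::VertexSE3* v = new g2o::VertexSE3;
        v->setId( i );
        v->setEstimate( Eigen::Isometry3d::Identity() ); // pose from the pairwise registration goes here
        v->setFixed( i == 0 );                           // anchor the first frame
        optimizer.addVertex( v );
    }

    // Sequential edges 0->1 ... 4->5 plus one loop-closure edge 5->0.
    for ( int i = 1; i <= totalClouds; ++i )
    {
        g2o::EdgeSE3* e = new g2o::EdgeSE3;
        e->setVertex( 0, optimizer.vertex( i - 1 ) );
        e->setVertex( 1, optimizer.vertex( i % totalClouds ) );
        e->setMeasurement( Eigen::Isometry3d::Identity() );           // relative transform from the pairwise step
        e->setInformation( Eigen::Matrix<double, 6, 6>::Identity() ); // confidence in that transform
        optimizer.addEdge( e );
    }

    optimizer.initializeOptimization();
    optimizer.optimize( 100 );
    return 0;
}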

How to get the number of values in an array in .Net?

AX allows array fields to be defined, but when fetching information through the .NET Business Connector they show up as a single field. E.g. Dimension is set by:
axRec.setField("Dimension[1]","A");
axRec.setField("Dimension[2]","B");
axRec.setField("Dimension[3]","C");
// and so on...
How do I know how many elements "Dimension" has?
AX has a compile-time function dimOf to return the count, but that is not available from .NET!
To the rescue comes the DictField class:
X++ code:
DictField df = new DictField(tablenum(CustTable), fieldnum(CustTable, AccountNum));
if (df)
{
    print strfmt("The arraySize is %1.", df.arraySize());
}
You can make an X++ utility function, then call that from .NET:
static int arraySize(str tableName, str fieldName)
{
    DictField df = new DictField(tableName2Id(tableName), fieldName2Id(tableName2Id(tableName), fieldName));
    return df ? df.arraySize() : -1;
}

Define dictionary in protocol buffer

I'm new to both protocol buffers and C++, so this may be a basic question, but I haven't had any luck finding answers. Basically, I want the functionality of a dictionary defined in my .proto file, like an enum. I'm using the protocol buffer to send data, and I want to define the units and their respective names. An enum would allow me to define the units, but I don't know how to map the human-readable strings to them.
As an example of what I mean, the .proto file might look something like:
message DataPack {
    // obviously not valid, but something like this
    dict UnitType {
        KmPerHour = "km/h";
        MiPerHour = "mph";
    }
    required int id = 1;
    repeated DataPoint pt = 2;
    message DataPoint {
        required int id = 1;
        required int value = 2;
        optional UnitType theunit = 3;
    }
}
and then have something like this to create / handle messages:
// construct
DataPack pack;
pack.set_id(123);
DataPack::DataPoint* pt = pack.add_pt();
pt->set_id(456);
pt->set_value(789);
pt->set_theunit(DataPack::UnitType::KmPerHour);
// read values
DataPack::UnitType theunit = pt->theunit();
cout << theunit.name << endl; // print "km/h"
I could just define an enum with the unit names and write a function to map them to strings on the receiving end, but it would make more sense to have them defined in the same spot, and that solution seems too complicated (at least, for someone who has lately been spoiled by the conveniences of Python). Is there an easier way to accomplish this?
You could use custom options to associate a string with each enum member:
https://developers.google.com/protocol-buffers/docs/proto#options
It would look like this in the .proto:
import "google/protobuf/descriptor.proto";

extend google.protobuf.EnumValueOptions {
    optional string name = 12345;
}

enum UnitType {
    KmPerHour = 1 [(name) = "km/h"];
    MiPerHour = 2 [(name) = "mph"];
}
Beware, though, that some third-party protobuf libraries don't understand these options.
In proto3, it's:
extend google.protobuf.EnumValueOptions {
    string name = 12345;
}

enum UnitType {
    KM_PER_HOUR = 0 [(name) = "km/h"];
    MI_PER_HOUR = 1 [(name) = "mph"];
}
and to access it in Java:
UnitType.KM_PER_HOUR.getValueDescriptor().getOptions().getExtension(MyOuterClass.name);
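Since the question mentions C++, reading the option back goes through the generated enum descriptor. A rough sketch, assuming the proto3 definition above is compiled into a hypothetical header unit.pb.h and name is the extension declared there:
#include <iostream>
#include <google/protobuf/descriptor.h>
#include "unit.pb.h" // hypothetical generated header for the .proto above

int main()
{
    const google::protobuf::EnumValueDescriptor* value =
        UnitType_descriptor()->FindValueByNumber( KM_PER_HOUR );

    // The custom option lives in the enum value's options message.
    std::cout << value->options().GetExtension( name ) << std::endl; // prints "km/h"
    return 0;
}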
