Wrong "Spacing" and "Origin" of a DICOM series in ITK - dicom

When I read a DICOM series with a series reader in ITK,
I always get origin = [0 0 0] and spacing = [1 1 1], the same for every dataset.
* main function:-
int main()
{
  SeriesReaderType::Pointer reader = READ_DCM(Input_DCM_Paths[0]);
  cout << " Reading Done!!" << endl;
  cout << " Origin: " << reader->GetOutput()->GetOrigin() << endl;
  cout << " Spacing: " << reader->GetOutput()->GetSpacing() << endl;
  return 0;
}
* reader function:-
SeriesReaderType::Pointer READ_DCM(std::string InputFolder)
{
  SeriesReaderType::Pointer seriesReader = SeriesReaderType::New();
  seriesReader->SetImageIO(itk::GDCMImageIO::New());

  itk::GDCMSeriesFileNames::Pointer nameGenerator = itk::GDCMSeriesFileNames::New();
  nameGenerator->SetUseSeriesDetails(true);
  nameGenerator->SetDirectory(InputFolder);

  std::string seriesID = nameGenerator->GetSeriesUIDs().begin()->c_str();
  seriesReader->SetFileNames(nameGenerator->GetFileNames(seriesID));
  seriesReader->Update();
  return seriesReader;
}
* 1st series output in ITK (screenshot not shown)
* 1st series output in MATLAB (screenshot not shown)
What's wrong with my 'series reader' code?
I followed the "reading part" in this example.

I have the same problem. I use the following similar code to read a DICOM series, but the slice spacing in the output is sometimes correct but not always:
// 1) Read the input series
typedef itk::GDCMImageIO ImageIOType;
typedef itk::GDCMSeriesFileNames InputNamesGeneratorType;
ImageIOType::Pointer gdcmIO = ImageIOType::New();
InputNamesGeneratorType::Pointer inputNames=InputNamesGeneratorType::New();
inputNames->SetInputDirectory( dirPath );
inputNames->AddSeriesRestriction("0020|0013");
// then I select a series identifier and pass it to the reader
typedef itk::ImageSeriesReader< CTImageType > ReaderType;
ReaderType::Pointer reader = ReaderType::New();
reader->SetImageIO( gdcmIO );
reader->SetFileNames( inputNames->GetFileNames( seriesIdentifier.c_str() ));
reader->UpdateOutputInformation();
--->>> reader->GetOutput()->GetSpacing()[2] is not always correct!

ITK/SimpleITK assume that when you provide a series of images, the files are in the same order as the slices. For many collections this is not the case, so you have to pre-sort the files (more details for Python here) based on the Slice Location tag (or .GetOrigin).
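A rough sketch of such a pre-sort in C++ (not the original poster's code; the helper name, the use of tag (0020,0032) Image Position (Patient), and the string parsing are assumptions):
#include "itkGDCMImageIO.h"
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

std::vector<std::string> SortBySlicePosition(const std::vector<std::string> & fileNames)
{
  std::vector<std::pair<double, std::string>> zAndName;
  for (const std::string & fileName : fileNames)
  {
    itk::GDCMImageIO::Pointer io = itk::GDCMImageIO::New();
    io->SetFileName(fileName);
    io->ReadImageInformation();                 // header only, no pixel data
    std::string ipp;                            // e.g. "-195.3\\-180.0\\-52.5"
    io->GetValueFromTag("0020|0032", ipp);      // Image Position (Patient)
    const double z = std::stod(ipp.substr(ipp.rfind('\\') + 1));
    zAndName.push_back(std::make_pair(z, fileName));
  }
  std::sort(zAndName.begin(), zAndName.end());  // sort by z position
  std::vector<std::string> sorted;
  for (const auto & p : zAndName)
  {
    sorted.push_back(p.second);
  }
  return sorted;
}
The sorted list can then be passed to seriesReader->SetFileNames() in place of the generator's output.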

Related

NewTek NDI (SDK v5) with Qt6.3: How to display NDI video frames on the GUI?

I have integrated the NDI SDK from NewTek in the current version 5 into my Qt6.3 widget project.
I copied and included the required DLLs and header files from the NDI SDK installation directory into my project.
To test my build environment I tried to compile a simple test program based on the example from "..\NDI 5 SDK\Examples\C++\NDIlib_Recv".
That was also successful.
I was therefore able to receive or access data from my NDI source.
There is therefore a valid frame in the video_frame of the type NDIlib_video_frame_v2_t. Within the structure I can also query correct data of the frame such as the size (.xres and .yres).
The pointer p_data points to the actual data.
So far so good.
Of course, I now want to display this frame on the Qt6 GUI. In other words, the only thing missing now is the conversion into an appropriate format so that I can display the frame with QImage, QPixmap, QLabel, etc.
But how?
So far I've tried calls like this:
curFrame = QImage(video_frame.p_data, video_frame.xres, video_frame.yres, QImage::Format::Format_RGB888);
curFrame.save("out.jpg");
I'm not sure if the format is correct either.
Here's a closer look at the mentioned frame structure within the Qt debug session:
(Screenshot: the NDI video frame in the Qt debug session, after receiving; not shown.)
Within "video_frame" you can see the specification video_type_UYVY.
This may really be the format as it appears at the source!?
Fine, but how do I get this converted now?
Many thanks and best regards
You mean something like this? :)
https://github.com/NightVsKnight/QtNdiMonitorCapture
Specifically:
https://github.com/NightVsKnight/QtNdiMonitorCapture/blob/main/lib/ndireceiverworker.cpp
Assuming you connect using NDIlib_recv_color_format_best:
NDIlib_recv_create_v3_t recv_desc;
recv_desc.p_ndi_recv_name = "QtNdiMonitorCapture";
recv_desc.source_to_connect_to = ...;
recv_desc.color_format = NDIlib_recv_color_format_best;
recv_desc.bandwidth = NDIlib_recv_bandwidth_highest;
recv_desc.allow_video_fields = true;
pNdiRecv = NDIlib_recv_create_v3(&recv_desc);
Then, when you receive an NDIlib_video_frame_v2_t:
void NdiReceiverWorker::processVideo(
        NDIlib_video_frame_v2_t *pNdiVideoFrame,
        QList<QVideoSink*> *videoSinks)
{
    auto ndiWidth = pNdiVideoFrame->xres;
    auto ndiHeight = pNdiVideoFrame->yres;
    auto ndiLineStrideInBytes = pNdiVideoFrame->line_stride_in_bytes;
    auto ndiPixelFormat = pNdiVideoFrame->FourCC;
    auto pixelFormat = NdiWrapper::ndiPixelFormatToPixelFormat(ndiPixelFormat);
    if (pixelFormat == QVideoFrameFormat::PixelFormat::Format_Invalid)
    {
        qDebug().nospace() << "Unsupported pNdiVideoFrame->FourCC " << NdiWrapper::ndiFourCCToString(ndiPixelFormat) << "; return;";
        return;
    }

    QSize videoFrameSize(ndiWidth, ndiHeight);
    QVideoFrameFormat videoFrameFormat(videoFrameSize, pixelFormat);
    QVideoFrame videoFrame(videoFrameFormat);
    if (!videoFrame.map(QVideoFrame::WriteOnly))
    {
        qWarning() << "videoFrame.map(QVideoFrame::WriteOnly) failed; return;";
        return;
    }

    auto pDstY = videoFrame.bits(0);
    auto pSrcY = pNdiVideoFrame->p_data;
    auto pDstUV = videoFrame.bits(1);
    auto pSrcUV = pSrcY + (ndiLineStrideInBytes * ndiHeight);
    for (int line = 0; line < ndiHeight; ++line)
    {
        memcpy(pDstY, pSrcY, ndiLineStrideInBytes);
        pDstY += ndiLineStrideInBytes;
        pSrcY += ndiLineStrideInBytes;
        if (pDstUV)
        {
            // For now QVideoFrameFormat/QVideoFrame does not support P216. :(
            // I have started the conversation to have it added, but that may take awhile. :(
            // Until then, copying only every other UV line is a cheap way to downsample
            // P216's 4:2:2 to P016's 4:2:0 chroma sampling.
            // There are still a few visible artifacts on the screen, but it is passable.
            if (line % 2)
            {
                memcpy(pDstUV, pSrcUV, ndiLineStrideInBytes);
                pDstUV += ndiLineStrideInBytes;
            }
            pSrcUV += ndiLineStrideInBytes;
        }
    }
    videoFrame.unmap();

    foreach (QVideoSink *videoSink, *videoSinks)
    {
        videoSink->setVideoFrame(videoFrame);
    }
}
QVideoFrameFormat::PixelFormat NdiWrapper::ndiPixelFormatToPixelFormat(enum NDIlib_FourCC_video_type_e ndiFourCC)
{
    switch (ndiFourCC)
    {
    case NDIlib_FourCC_video_type_UYVY:
        return QVideoFrameFormat::PixelFormat::Format_UYVY;
    case NDIlib_FourCC_video_type_UYVA:
        return QVideoFrameFormat::PixelFormat::Format_UYVY;
    // Result when requesting NDIlib_recv_color_format_best
    case NDIlib_FourCC_video_type_P216:
        return QVideoFrameFormat::PixelFormat::Format_P016;
    //case NDIlib_FourCC_video_type_PA16:
    //    return QVideoFrameFormat::PixelFormat::?;
    case NDIlib_FourCC_video_type_YV12:
        return QVideoFrameFormat::PixelFormat::Format_YV12;
    //case NDIlib_FourCC_video_type_I420:
    //    return QVideoFrameFormat::PixelFormat::?
    case NDIlib_FourCC_video_type_NV12:
        return QVideoFrameFormat::PixelFormat::Format_NV12;
    case NDIlib_FourCC_video_type_BGRA:
        return QVideoFrameFormat::PixelFormat::Format_BGRA8888;
    case NDIlib_FourCC_video_type_BGRX:
        return QVideoFrameFormat::PixelFormat::Format_BGRX8888;
    case NDIlib_FourCC_video_type_RGBA:
        return QVideoFrameFormat::PixelFormat::Format_RGBA8888;
    case NDIlib_FourCC_video_type_RGBX:
        return QVideoFrameFormat::PixelFormat::Format_RGBX8888;
    default:
        return QVideoFrameFormat::PixelFormat::Format_Invalid;
    }
}
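To actually get the frames onto the Qt 6 GUI, the QVideoSink list that processVideo() writes to can be fed by a QVideoWidget. A minimal sketch, assuming the receiver worker exposes some way to register sinks (the addVideoSink() call is hypothetical, not part of the linked repository):
#include <QApplication>
#include <QVideoWidget>   // requires the Qt Multimedia Widgets module

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QVideoWidget videoWidget;   // renders whatever its QVideoSink receives
    videoWidget.resize(1280, 720);
    videoWidget.show();

    // Hand the widget's sink to the receiver worker so that processVideo()
    // pushes frames into it; addVideoSink() is a hypothetical helper.
    // worker.addVideoSink(videoWidget.videoSink());

    return app.exec();
}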

Change array dimension when generating a new model CPLEX OPL

I have an optimization model that I want to implement in IBM CPLEX Optimization Studio 12.10.
I wrote the model code in OPL and the first implementation is working. What I would like to do now is to run the model multiple times to see how the solution time changes depending on the size of the parameters.
In the .mod file I have defined three sets:
int numSet1=...;
int numSet2=...;
int numSet3=...;
range Set1 = 1..numSet1;
range Set2 = 1..numSet2;
range Set3 = 1..numSet3;
And four parameters:
float Par1[Set1]=...;
float Par2[Set1][Set2]=...;
float Par3[Set1]=...;
float Par4[Set1][Set2][Set3]=...;
In the .dat file, I have defined the initial values for these sets and parameters.
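For context, a .dat file matching these declarations might look like the following (the numbers are placeholders, not the original data):
numSet1 = 2;
numSet2 = 2;
numSet3 = 2;
Par1 = [1.0, 2.0];
Par2 = [[1.0, 2.0], [3.0, 4.0]];
Par3 = [5.0, 6.0];
Par4 = [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]];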
What I would like to do now is to define, in the flow control, a script that changes the dimensions of the sets (and thus of the parameters) and saves the solution time for each run:
main {
  var mod = thisOplModel.modelDefinition;
  var dat = thisOplModel.dataElements;
  for (var sizenumSet1 = 2; sizenumSet1 <= 10; sizenumSet1 += 2) {
    for (var sizenumSet2 = 1; sizenumSet2 <= 5; sizenumSet2 += 1) {
      for (var sizenumSet3 = 1; sizenumSet3 <= 5; sizenumSet3 += 1) {
        var MyCplex = new IloCplex();
        var opl = new IloOplModel(mod, MyCplex);
        dat.changenumSet1 = sizenumSet1;
        dat.changenumSet2 = sizenumSet2;
        dat.changenumSet3 = sizenumSet3;
        opl.addDataSource(dat);
        opl.generate();
        if (MyCplex.solve()) {
          writeln("Solution: ", MyCplex.getObjValue(),
                  " / sizeSet1: ", sizenumSet1,
                  " / sizeSet2: ", sizenumSet2,
                  " / sizeSet3: ", sizenumSet3,
                  " / time: ", MyCplex.getCplexTime());
        }
        opl.end();
        MyCplex.end();
      }
    }
  }
}
When I launch this code, I obtain the following list of errors:
Execution of main failed. Processing OPL model failed
Index out of bound for array Par4(1)(1):3
Scripting runtime error: (in generate) Processing OPL model failed
How can I solve this?
Thank you for your help.
In
dat.changenumSet1=sizenumSet1;
dat.changenumSet2=sizenumSet2;
dat.changenumSet3=sizenumSet3;
you are changing the wrong elements. You should be changing
dat.numSet1=sizenumSet1;
dat.numSet2=sizenumSet2;
dat.numSet3=sizenumSet3;
Moreover, it seems you are missing updates to the Par arrays. These arrays become larger in each iteration, so you need to provide more data for them.

ASN1 OBJECT_IDENTIFIER decoding

I have the following ASN.1 data:
Sequence
  Sequence
    ObjectIdentifier
  Sequence
    Sequence
      Integer
      Integer
    Sequence
      Integer
      Integer
My goal is to get the encoded integer values. My code so far is the following:
ByteQueue queue(inputLen);
queue.Put2(input, inputLen, 0, false);

BERSequenceDecoder outer(queue);
BERSequenceDecoder discard(outer); // unnecessary sequence with object_identifier
BERSequenceDecoder obj(discard,
    CryptoPP::ASNTag::OBJECT_IDENTIFIER | CryptoPP::ASNIdFlag::UNIVERSAL);
BERSequenceDecoder parent(outer); // BER decode error
for (int i = 0; i < 2; i++) {
    BERSequenceDecoder dataSequence(parent);
    Integer i1, i2;
    i1.BERDecode(dataSequence);
    i2.BERDecode(dataSequence);
}
The problem is, I don't know how to properly get past the object_identifier part; at least I think that is the problem. I'm getting a BER decode error on the 4th decoder object.
Also, am I initializing the ByteQueue correctly? This Put2 method doesn't seem like the correct way, but I didn't find any other methods.
ByteQueue queue(inputLen);
queue.Put2(input, inputLen, 0, false);
You could also do something like:
ArraySource as(input, inputLen, false /*pumpAll*/);
as.TransferTo(queue);
Or, if you just want to copy them:
as.CopyTo(queue);
Problem is, I don't know how to properly get past the object_identifier part...
I would probably do something like:
byte b = as.Peek();
if(b == /*some tag*/)
as.Skip(n);
Or:
byte b = as.Peek();
if(b == /*some tag*/)
{
    lword length;
    bool definiteLength;
    if(!BERLengthDecode(as, length, definiteLength))
        throw BadParam();
    as.Skip(length);
}
The source files with the goodies like the above are asn.h and asn.cpp. Other functions you might be interested in include BERDecodeOctetString and BERDecodeBitString.
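For completeness, a hedged sketch of the full decode (not from the question) that uses Crypto++'s OID class from asn.h to consume the OBJECT IDENTIFIER instead of skipping raw bytes; it assumes the data really has the nested layout shown above:
#include <cryptopp/asn.h>
#include <cryptopp/integer.h>
#include <cryptopp/queue.h>
#include <vector>
using namespace CryptoPP;

std::vector<Integer> DecodeIntegers(const byte* input, size_t inputLen)
{
    ByteQueue queue;
    queue.Put(input, inputLen);

    std::vector<Integer> values;
    BERSequenceDecoder outer(queue);
      BERSequenceDecoder algorithm(outer);   // the SEQUENCE that wraps the OID
        OID oid;
        oid.BERDecode(algorithm);            // reading the OID is the easiest way past it
      algorithm.MessageEnd();
      BERSequenceDecoder parent(outer);      // the SEQUENCE holding the two integer pairs
        for (int i = 0; i < 2; i++)
        {
            BERSequenceDecoder pair(parent);
              Integer i1, i2;
              i1.BERDecode(pair);
              i2.BERDecode(pair);
              values.push_back(i1);
              values.push_back(i2);
            pair.MessageEnd();
        }
      parent.MessageEnd();
    outer.MessageEnd();
    return values;
}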

Retrieving row count from QSqlQuery, but got -1

I'm trying to get the row count of a QSqlQuery; the database driver is QSQLITE.
bool Database::runSQL(QSqlQueryModel *model, const QString &q)
{
    Q_ASSERT(model);
    model->setQuery(QSqlQuery(q, my_db));
    rowCount = model->query().size();
    return my_db.lastError().isValid();
}
The query here is a SELECT query, but I still get -1.
If I use model->rowCount() I only get the rows that have been fetched for display, e.g. 256, whereas SELECT COUNT(*) returns 120k results.
What's wrong about it?
This row count code extract works for SQLite3-based tables and also handles the "fetchMore" issue associated with certain SQLite versions.
QSqlQuery query(m_database);
query.prepare(QString("SELECT * FROM MyDatabaseTable WHERE SampleNumber = ?;"));
query.addBindValue(_sample_number);
bool table_ok = query.exec();
if (!table_ok)
{
    DATABASETHREAD_REPORT_ERROR("Error from MyDataBaseTable", query.lastError());
}
else
{
    // only way to get a row count; the size() function does not work for SQLite3
    query.last();
    int row_count = query.at() + 1;
    qDebug() << "getNoteCounts = " << row_count;
}
The documentation says:
Returns ... -1 if the size cannot be determined or if the database does not support reporting information about query sizes.
SQLite indeed does not support this.
Please note that caching 120k records is not very efficient (nobody will look at all those); you should somehow filter them to get the result down to a manageable size.
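If all you need is the total number of rows, a separate COUNT(*) query sidesteps the driver limitation entirely; a minimal sketch against the question's my_db connection (the table name is illustrative):
QSqlQuery countQuery(my_db);
if (countQuery.exec("SELECT COUNT(*) FROM my_table") && countQuery.next())
{
    // COUNT(*) always returns exactly one row with one column.
    const int rowCount = countQuery.value(0).toInt();
    qDebug() << "row count =" << rowCount;
}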

Define dictionary in protocol buffer

I'm new to both protocol buffers and C++, so this may be a basic question, but I haven't had any luck finding answers. Basically, I want the functionality of a dictionary defined in my .proto file like an enum. I'm using the protocol buffer to send data, and I want to define units and their respective names. An enum would allow me to define the units, but I don't know how to map the human-readable strings to that.
As an example of what I mean, the .proto file might look something like:
message DataPack {
  // obviously not valid, but something like this
  dict UnitType {
    KmPerHour = "km/h";
    MiPerHour = "mph";
  }
  required int id = 1;
  repeated DataPoint pt = 2;
  message DataPoint {
    required int id = 1;
    required int value = 2;
    optional UnitType theunit = 3;
  }
}
and then have something like this to create / handle messages:
// construct
DataPack pack;
pack.set_id(123);
DataPack::DataPoint* pt = pack.add_pt();
pt->set_id(456);
pt->set_value(789);
pt->set_theunit(DataPack::UnitType::KmPerHour);
// read values
DataPack::UnitType theunit = pt->theunit();
cout << theunit.name << endl; // print "km/h"
I could just define an enum with the unit names and write a function to map them to strings on the receiving end, but it would make more sense to have them defined in the same spot, and that solution seems too complicated (at least, for someone who has lately been spoiled by the conveniences of Python). Is there an easier way to accomplish this?
You could use custom options to associate a string with each enum member:
https://developers.google.com/protocol-buffers/docs/proto#options
It would look like this in the .proto:
import "google/protobuf/descriptor.proto";

extend google.protobuf.EnumValueOptions {
  optional string name = 12345;
}

enum UnitType {
  KmPerHour = 1 [(name) = "km/h"];
  MiPerHour = 2 [(name) = "mph"];
}
Beware, though, that some third-party protobuf libraries don't understand these options.
In proto3, it's:
extend google.protobuf.EnumValueOptions {
  string name = 12345;
}

enum UnitType {
  KM_PER_HOUR = 0 [(name) = "km/h"];
  MI_PER_HOUR = 1 [(name) = "mph"];
}
and to access it in Java:
UnitType.KM_PER_HOUR.getValueDescriptor().getOptions().getExtension(MyOuterClass.name);
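Since the question itself is about C++, reading the option back there might look roughly like this (a hedged sketch; it assumes the proto3 definitions above were compiled, and the header name is illustrative):
#include <iostream>
#include "units.pb.h"  // generated from the .proto above (illustrative name)

int main()
{
    // Look up the descriptor of the enum value, then read the custom (name) option.
    const google::protobuf::EnumValueDescriptor* value =
        UnitType_descriptor()->FindValueByNumber(KM_PER_HOUR);
    std::cout << value->options().GetExtension(name) << std::endl;  // prints "km/h"
    return 0;
}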
