I've been researching for a while now how to create a point cloud from laser scans, but I'm running into a few issues:
First of all, PCL doesn't have I/O support for Hokuyo lasers, so I'm planning on using the HokuyoAIST library for that.
The main problem I have is how to convert the 2D laser data to a point cloud (PointCloud2) so I can work with the PCL library. I am aware of some ROS packages that do this, but I really don't want to involve ROS in this.
Thanks in advance,
Marwan
You could use something like (untested, but should get you going):
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <vector>

// Get Hokuyo data
int numberOfDataPoints = 0; // you need to fill this
std::vector<double> hokuyoDataX, hokuyoDataY;
for (int i = 0; i < numberOfDataPoints; i++)
{
    hokuyoDataX.push_back(...); // you need to fill this with the x of point i
    hokuyoDataY.push_back(...); // you need to fill this with the y of point i
}

// Define the new cloud object
pcl::PointCloud<pcl::PointXY>::Ptr cloud (new pcl::PointCloud<pcl::PointXY>);
cloud->is_dense = true; // no NaN or INF expected
cloud->width = hokuyoDataX.size();
cloud->height = 1; // unorganized cloud
cloud->points.resize(hokuyoDataX.size());

// Now fill the point cloud
for (size_t i = 0; i < hokuyoDataX.size(); i++)
{
    cloud->points[i].x = hokuyoDataX[i];
    cloud->points[i].y = hokuyoDataY[i];
}
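Note that the Hokuyo will give you ranges at evenly spaced angles rather than x/y pairs, so the fill-in loop above amounts to a polar-to-Cartesian conversion. A sketch, where ranges, startAngle, and angleIncrement are placeholders for whatever your driver reports (they are not HokuyoAIST API names):

#include <cmath>
#include <vector>

// Placeholder inputs -- obtain these from your laser driver:
std::vector<double> ranges;        // one range per beam, in metres
double startAngle = -2.094395;     // angle of the first beam, in radians (example value)
double angleIncrement = 0.006136;  // angular step between beams, in radians (example value)

// Polar-to-Cartesian conversion filling the vectors used above
for (size_t i = 0; i < ranges.size(); i++)
{
    double angle = startAngle + i * angleIncrement;
    hokuyoDataX.push_back(ranges[i] * std::cos(angle));
    hokuyoDataY.push_back(ranges[i] * std::sin(angle));
}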
I'm using FFTW to compute a 2D complex to complex FFT using this code:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <fftw3.h>
#include "defines.h" // expected to provide input_data

int main(void)
{
    fftw_complex *in, *out;
    fftw_plan plan;
    int rows = 64;
    int cols = 64;
    int i;

    in  = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * rows * cols);
    out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * rows * cols);

    for (i = 0; i < rows * cols; i++)
    {
        in[i][0] = input_data[2*i];
        in[i][1] = input_data[2*i+1];
    }

    printf("### Setting plan ###\n");
    plan = fftw_plan_dft_2d(rows, cols, in, out, FFTW_FORWARD, FFTW_ESTIMATE);

    printf("### Executing plan ###\n");
    fftw_execute(plan);

    for (i = 0; i < rows * cols; i++)
    {
        printf("RE = %f \t IM = %f\n", in[i][0], in[i][1]);
    }

    fftw_destroy_plan(plan);
    fftw_free(in);
    fftw_free(out);
    return 0;
}
Now, I changed the FFTW flag from FFTW_ESTIMATE to FFTW_EXHAUSTIVE to let the planner choose the optimal algorithm for this 2D FFT, but I got an all-zeros result. Can someone tell me what is wrong?
With the flag FFTW_ESTIMATE, fftw_plan_dft_2d() guesses which FFT algorithm is likely to be fastest without running any of them. With the flag FFTW_EXHAUSTIVE, it actually runs every applicable algorithm and selects the fastest one.
The problem is that the input array is overwritten in the process.
The solution is to populate the input array after creating the plan!
See the documentation of the planner flags:
Important: the planner overwrites the input array during planning unless a saved plan (see Wisdom) is available for that problem, so you should initialize your input data after creating the plan. The only exceptions to this are the FFTW_ESTIMATE and FFTW_WISDOM_ONLY flags, as mentioned below.
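In code, the fix is just a reordering of the question's snippet:

plan = fftw_plan_dft_2d(rows, cols, in, out, FFTW_FORWARD, FFTW_EXHAUSTIVE);

// Only now is it safe to write the input: planning with FFTW_EXHAUSTIVE
// has already overwritten whatever was stored in 'in'.
for (i = 0; i < rows * cols; i++)
{
    in[i][0] = input_data[2*i];
    in[i][1] = input_data[2*i+1];
}

fftw_execute(plan); // 'out' now holds the transform of the data you loaded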
I've been trying for a while to get support for soft bodies in my project.
I have already added all the primitives, including static triangle meshes.
I've now been trying to implement the soft bodies.
I do have triangle shapes, as I mentioned, and I thought I could reuse the triangulation code to
create soft-body objects with the function:
btSoftBody* psb = btSoftBodyHelpers::CreateFromTriMesh(.....);
I successfully did this with the hardcoded bunny mesh, but now I want to feed any triangulated mesh into this function.
I'm a bit lost figuring out exactly what parameters to pass in (that is, how to extract the right parameters from my triangulated mesh).
Does anyone have an example of this? Not a hardcoded one, but one built from a
btTriangleMesh *mTriMesh = new btTriangleMesh();
type object?
It does work with Bullet's predefined shapes, so my update loop and the rest work fine.
This is for version 2.81 (assuming vertices are stored as PHY_FLOAT and indices as PHY_INTEGER):
btTriangleMesh *mTriMesh = new btTriangleMesh();
// ...

const btVector3 meshScaling = mTriMesh->getScaling();
btAlignedObjectArray<btScalar> vertices;
btAlignedObjectArray<int> triangles;

for (int part = 0; part < mTriMesh->getNumSubParts(); part++)
{
    const unsigned char *vertexbase;
    const unsigned char *indexbase;
    int indexstride;
    int stride, numverts, numtriangles;
    PHY_ScalarType type, gfxindextype;

    mTriMesh->getLockedReadOnlyVertexIndexBase(&vertexbase, numverts, type, stride,
                                               &indexbase, indexstride, numtriangles,
                                               gfxindextype, part);

    for (int gfxindex = 0; gfxindex < numverts; gfxindex++)
    {
        float *graphicsbase = (float*)(vertexbase + gfxindex * stride);
        vertices.push_back(graphicsbase[0] * meshScaling.getX());
        vertices.push_back(graphicsbase[1] * meshScaling.getY());
        vertices.push_back(graphicsbase[2] * meshScaling.getZ());
    }

    for (int gfxindex = 0; gfxindex < numtriangles; gfxindex++)
    {
        unsigned int *tri_indices = (unsigned int*)(indexbase + gfxindex * indexstride);
        triangles.push_back(tri_indices[0]);
        triangles.push_back(tri_indices[1]);
        triangles.push_back(tri_indices[2]);
    }

    mTriMesh->unLockReadOnlyVertexBase(part); // release the read lock for this subpart
}

btSoftBodyWorldInfo worldInfo;
// Setup worldInfo...
// ...

btSoftBody* psb = btSoftBodyHelpers::CreateFromTriMesh(worldInfo, &vertices[0], &triangles[0],
                                                       triangles.size()/3 /*, randomizeConstraints = true*/);
A slower but more general approach is to iterate over the mesh using mTriMesh->InternalProcessAllTriangles(), but that will turn your mesh into a triangle soup.
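If you do want that route, here is an untested sketch of the callback approach (the names and bounds are mine, not from the Bullet samples):

struct CollectTriangles : public btInternalTriangleIndexCallback
{
    btAlignedObjectArray<btScalar> vertices;
    btAlignedObjectArray<int> triangles;

    virtual void internalProcessTriangleIndex(btVector3* triangle, int partId, int triangleIndex)
    {
        // Emit three fresh vertices per triangle -- no sharing, hence "soup".
        for (int v = 0; v < 3; v++)
        {
            triangles.push_back(vertices.size() / 3);
            vertices.push_back(triangle[v].getX());
            vertices.push_back(triangle[v].getY());
            vertices.push_back(triangle[v].getZ());
        }
    }
};

CollectTriangles collector;
btVector3 aabbMin(-BT_LARGE_FLOAT, -BT_LARGE_FLOAT, -BT_LARGE_FLOAT);
btVector3 aabbMax( BT_LARGE_FLOAT,  BT_LARGE_FLOAT,  BT_LARGE_FLOAT);
mTriMesh->InternalProcessAllTriangles(&collector, aabbMin, aabbMax);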
I have a solution which includes three projects. The first project builds a static library, i.e. a .lib file. It contains one header file, main.h, and one main.cpp file; the .cpp file contains the definitions of the functions declared in the header.
The second project is a .exe project which includes the header file main.h and calls a function declared in it.
The third project is also a .exe project which includes the header file and uses a variable, flag, from the header file.
Now both .exe projects create their own instance of the variable, but I want them to share the same instance of it dynamically, because I have to map the value generated by one project into the other project at the same instant.
Please help me, as I am nearing my project deadline.
Thanks for the help.
Here is part of the code.
main.cpp and main.h are the files of the .lib project.
main.h
extern int flag;
extern int detect1(void);
main.cpp
#include <stdio.h>
#include "main.h"
#include <Windows.h>
#include <ShellAPI.h>
#include <opencv2/opencv.hpp> // the cv:: types below need this (missing in the original post)
using namespace std;
using namespace cv;
int flag=0;
int detect1(void)
{
    int Cx = 0, Cy = 0, Kx = 20, Ky = 20, Sx = 0, Sy = 0, j = 0;

    // create the cascade classifier object used for the face detection
    CascadeClassifier face_cascade;
    // use the haarcascade_frontalface_alt.xml library
    face_cascade.load("E:\\haarcascade_frontalface_alt.xml");

    //System::DateTime now = System::DateTime::Now;
    //cout << now.Hour;
    //WinExec("E:\\FallingBlock\\FallingBlock\\FallingBlock\\bin\\x86\\Debug\\FallingBlock.exe",SW_SHOW);

    // set up the video capture device and link it to the first capture device
    VideoCapture captureDevice;
    captureDevice.open(0);

    // set up the image buffers used in the capture process
    Mat captureFrame;
    Mat grayscaleFrame;

    // create a window to present the results
    namedWindow("capture", 1);

    // loop to capture frames and find faces
    while (true)
    {
        // capture a new image frame
        captureDevice >> captureFrame;

        // convert the captured image to grayscale and equalize
        cvtColor(captureFrame, grayscaleFrame, CV_BGR2GRAY);
        equalizeHist(grayscaleFrame, grayscaleFrame);

        // vector to store the faces found
        std::vector<Rect> faces;

        // find faces and store them in the vector
        face_cascade.detectMultiScale(grayscaleFrame, faces, 1.1, 3,
                                      CV_HAAR_FIND_BIGGEST_OBJECT | CV_HAAR_SCALE_IMAGE,
                                      Size(30, 30));

        // draw a rectangle around each face found on the original image
        for (unsigned int i = 0; i < faces.size(); i++)
        {
            Point pt1(faces[i].x + faces[i].width, faces[i].y + faces[i].height);
            Point pt2(faces[i].x, faces[i].y);
            rectangle(captureFrame, pt1, pt2, cvScalar(0, 255, 0, 0), 1, 8, 0);

            if (faces.size() >= 1)
                j++;
            Cx = faces[i].x + (faces[i].width / 2);
            Cy = faces[i].y + (faces[i].height / 2);
            if (j == 1)
            {
                // first detection: remember the starting position
                Sx = Cx;
                Sy = Cy;
                flag = 0;
            }
        }

        if (Cx - Sx > Kx)
        {
            flag = 1;
            printf("%d", flag);
        }
        else if (Cx - Sx < -Kx)
        {
            flag = 2;
            printf("%d", flag);
            //update(2);
        }
        else if (Cy - Sy > Ky)
        {
            flag = 3;
            printf("%d", flag);
            //update(3);
        }
        else if (Cy - Sy < -Ky)
        {
            flag = 4;
            printf("%d", flag);
            //update(4);
        }
        else if (abs(Cx - Sx) < Kx && abs(Cy - Sy) < Ky)
        {
            flag = 0;
            printf("%d", flag);
            //update(0);
        }
        // (the original post is truncated here; the loop and function
        // are closed below so the excerpt compiles)
    }
    return 0;
}
2nd project's code
face.cpp
#include"main.h"
#include<stdio.h>
int main()
{
detect1();
}
3rd project's code
tetris.cpp
#include"main.h"
int key;
key = flag;
if(key==0)
{
MessageBox(hwnd,"Space2","TetRiX",0);
}
if(key==4)
{
tetris.handleInput(1);
tetris.drawScreen(2);
//MessageBox(hwnd,"Space2","TetRiX",0);
}
You need to look up how to do inter-process communication in the operating system your applications will run under. (At this point I assume the processes are running on the same computer.) It looks like you're using Windows (based on the call to MessageBox), so the simplest means would be for both processes to use RegisterWindowMessage to create a commonly understood message value, and then send the data via LPARAM using either PostMessage or SendMessage. (You'll need each of them to get the window handle of the other, which is reasonably easy.)

You'll want some sort of exclusion mechanism (a mutex or critical section) in both processes to ensure that the shared value can't be read and written at the same time. If both processes can do the "change and exchange" then you'll have an interesting problem to solve if both try to do that at the same time, because you'll have to deal with the possibility of deadlocks over that shared value.
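Here is a rough, untested sketch of the window-message route (the window title passed to FindWindow is a placeholder; real code also needs error handling):

#include <windows.h>

// Both processes register the same string, so they agree on the message id.
static const UINT WM_FLAG_CHANGED = RegisterWindowMessage(TEXT("MyApp.FlagChanged"));

// Sender side: push the new flag value to the other process.
void sendFlag(int flag)
{
    HWND target = FindWindow(NULL, TEXT("TetRiX")); // placeholder window title
    if (target != NULL)
        PostMessage(target, WM_FLAG_CHANGED, 0, (LPARAM)flag);
}

// Receiver side: pick the value up in the window procedure.
LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_FLAG_CHANGED)
    {
        int flag = (int)lParam; // the value sent by the other process
        // ... react to the new flag value ...
        return 0;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}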
You can also use shared memory, but that's a bit more involved.
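For reference, a bare-bones, untested shared-memory sketch with a made-up mapping name; you'd still want the mutex described above around reads and writes:

#include <windows.h>

// Create (or open) a 4-byte shared region both processes can map by name.
HANDLE hMap = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                0, sizeof(int), TEXT("MyApp.SharedFlag"));
int *sharedFlag = (int*)MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, sizeof(int));

// One process writes...
*sharedFlag = 4;

// ...and the other reads (after running the same mapping code):
int key = *sharedFlag;

// When done:
UnmapViewOfFile(sharedFlag);
CloseHandle(hMap);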
If the processes are on different computers, you'll need to do it via TCP/IP or a protocol on top of TCP/IP. You could use a pub/sub arrangement, or any number of other things. Without an understanding of exactly what you're trying to accomplish, it's difficult to know what to recommend.
(For the record, there is almost no way in a multi-process/multi-threaded O/S to share something "at the same instant." You can get arbitrarily close, but computers don't work like that.)
Given the level of difficulty involved, is there some other design that might make this cleaner? Why do these processes have to exchange information this way? Must it be done using separate processes?
I have a desktop Qt-based application that fetches a sound stream from the network and plays it using QAudioOutput. I want to provide a volume control to the user so that he can reduce the volume. My code looks like this:
float volume_control = get_user_pref(); // user-provided volume level in [0.0, 1.0]
for (;;) {
    AVPacket *retrieved_pkt = get_decoded_packet_stream(); // from the network stream
    AVPacket *work_pkt = change_volume(retrieved_pkt, volume_control); // this is what I need
    // remaining code to play work_pkt ...
}
How do I implement change_volume(), or is there an off-the-shelf function I can use?
Edit: Adding codec-related info as requested in the comments
QAudioFormat format;
format.setFrequency(44100);
format.setChannels(2);
format.setSampleSize(16);
format.setCodec("audio/pcm");
format.setByteOrder(QAudioFormat::LittleEndian);
format.setSampleType(QAudioFormat::SignedInt);
The following code works just fine.
// audio_buffer is a byte array of size data_size
// volume_level is a float between 0 (silent) and 1 (original volume)
int16_t *pcm_data = (int16_t*)audio_buffer;
int32_t pcmval;
for (int ii = 0; ii < (data_size / 2); ii++) { // 16-bit samples, hence size divided by 2
    pcmval = pcm_data[ii] * volume_level;
    pcm_data[ii] = pcmval;
}
Edit: I think there is significant scope for optimization here, since my solution is compute-intensive. I guess avcodec_decode_audio() could be used to speed it up.
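For what it's worth, here is a rough sketch of how the loop above could be folded into the change_volume() signature from the question, assuming the packets already contain decoded signed 16-bit PCM (matching the QAudioFormat shown earlier); scaling in place avoids allocating a second packet:

#include <stdint.h>
extern "C" {
#include <libavcodec/avcodec.h>
}

// Scales the PCM samples in pkt in place and returns the same packet.
AVPacket* change_volume(AVPacket *pkt, float volume_level)
{
    int16_t *pcm_data = (int16_t*)pkt->data;
    int sample_count = pkt->size / 2; // 16-bit samples
    for (int i = 0; i < sample_count; i++) {
        pcm_data[i] = (int16_t)(pcm_data[i] * volume_level);
    }
    return pkt;
}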
I've found a very similar topic: how to convert an opencv cv::Mat to qimage, but it does not solve my problem.
I have a function converting cv::Mat to QImage:
QImage cvMatToQImg(cv::Mat& mat)
{
    cv::Mat rgb;
    if (mat.channels() == 1)
    {
        cv::cvtColor(mat, rgb, CV_GRAY2BGR);
        cv::cvtColor(rgb, rgb, CV_BGR2BGRA);
        QImage temp = QImage((unsigned char*)(rgb.data), rgb.cols,
                             rgb.rows, QImage::Format_ARGB32);
        QImage returnImage = temp.copy();
        return returnImage;
    }
    return QImage(); // (other channel counts not handled in this excerpt)
}
It works for me, but I want to make it more efficient.
First: why does replacing the two cvtColor calls with
cv::cvtColor(mat,rgb,CV_GRAY2BGRA)
fail at
QImage returnImage = temp.copy();
with a segfault?
Second: how can I eliminate the copying of the QImage? When I simply return the temp image, I get a segfault.
Can any other optimizations be done here? This function is used very often, so I want to make it as fast as possible.
Your solution to the problem is not efficient; in particular, it is less efficient than the code I posted in the other question you link to.
Your problem is that you have to convert from grayscale to color, or RGBA. As soon as you need this conversion, a copy of the data is naturally needed.
My solution does the conversion between grayscale and color, as well as between cv::Mat and QImage, at the same time. That's why it is about as efficient as you can get.
In your solution, you first convert and then want to build the QImage around the OpenCV data directly to spare a second copy. But the data you point to is temporary: as soon as you leave the function, the cv::Mat frees its associated memory, which is why it is no longer valid inside the QImage either. You could manually increase the reference counter of the cv::Mat beforehand, but that opens the door to a memory leak afterwards.
In the end, you attempt a dirty solution to a problem better solved in a clean fashion.
It may be easiest to roll your own solution. Below is the current OpenCV implementation for going from gray to RGBA format:
template<typename _Tp>
struct Gray2RGB
{
    typedef _Tp channel_type;

    Gray2RGB(int _dstcn) : dstcn(_dstcn) {}
    void operator()(const _Tp* src, _Tp* dst, int n) const
    {
        if( dstcn == 3 )
            for( int i = 0; i < n; i++, dst += 3 )
            {
                dst[0] = dst[1] = dst[2] = src[i];
            }
        else
        {
            _Tp alpha = ColorChannel<_Tp>::max();
            for( int i = 0; i < n; i++, dst += 4 )
            {
                dst[0] = dst[1] = dst[2] = src[i];
                dst[3] = alpha;
            }
        }
    }

    int dstcn;
};
Here is where the actual cvtColor call occurs:
case CV_GRAY2BGR: case CV_GRAY2BGRA:
    if( dcn <= 0 ) dcn = 3;
    CV_Assert( scn == 1 && (dcn == 3 || dcn == 4));
    _dst.create(sz, CV_MAKETYPE(depth, dcn));
    dst = _dst.getMat();

    if( depth == CV_8U )
        CvtColorLoop(src, dst, Gray2RGB<uchar>(dcn));
This code is contained in the color.cpp file in the imgproc library.
As you can see, since you are not setting the dstCn parameter in your cvtColor calls, it defaults to dcn = 3. To go straight from gray to BGRA, set dstCn to 4. Since OpenCV's default color order is BGR, you'll still need to swap the color channels for it to look right (assuming you get your image data from an OpenCV function). So, it may be worth implementing your own converter, possibly following the above example, or using ypnos' answer here.
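To illustrate, here is an untested sketch of the single-conversion version; it assumes a little-endian platform, where QImage::Format_ARGB32 expects BGRA byte order, which is exactly what CV_GRAY2BGRA produces:

#include <opencv2/imgproc/imgproc.hpp>
#include <QImage>

QImage cvMatToQImg(cv::Mat& mat)
{
    cv::Mat bgra;
    // One conversion instead of two: dstCn = 4 forces the BGRA output.
    cv::cvtColor(mat, bgra, CV_GRAY2BGRA, 4);

    // Wrap the OpenCV buffer, passing the row stride explicitly,
    // then deep-copy so the QImage owns its pixels after bgra dies.
    QImage temp((unsigned char*)bgra.data, bgra.cols, bgra.rows,
                (int)bgra.step, QImage::Format_ARGB32);
    return temp.copy();
}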
Also, have a look at my other answer involving how to integrate OpenCV with Qt.
The problem is that neither the cv::Mat nor the QImage data is necessarily contiguous.
New data rows in OpenCV start on a 32-bit boundary (not sure about QImage; I think it's system dependent), so you can't copy a single memory block unless your rows happen to be exact multiples of 4 bytes.
See How to output this 24 bit image in Qt
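Given that, a row-by-row copy that respects both strides is the safe way to move the pixels; a sketch, assuming the destination QImage has already been allocated with a matching size and depth:

#include <cstring>
#include <opencv2/core/core.hpp>
#include <QImage>

// Copy pixels row by row; each container keeps its own row padding.
void copyMatToQImage(const cv::Mat& src, QImage& dst)
{
    int bytesPerRow = src.cols * (int)src.elemSize(); // payload only, no padding
    for (int row = 0; row < src.rows; row++)
    {
        std::memcpy(dst.scanLine(row), src.ptr(row), bytesPerRow);
    }
}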