Using a Connector in Webots

I want to simulate an environment in which a robot grabs a box and transports it.
The robot must be able to grab the box from any point, but that isn't possible with a gripper, so I thought I could put a Connector on the robot and another Connector on the box, set the tolerance length and width large enough, and set the connection angles to 180 degrees, so the robot can grab it from any direction. I defined the box as a Robot node so I could put a Connector on it, but the robot doesn't connect to the box even when it is close enough.
Maybe something like this just isn't possible? Can I use a Connector so a robot can pick up a box?
Thanks.

As I understand it, you want the robot to be able to grab the box from any of the four sides from which it may approach. If so, I would recommend setting a Connector device on each side of the box and another Connector device on the robot. All Connectors should have large distance and angle tolerances to allow connection. This way, your robot should be able to grab the box from any side.
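A minimal controller sketch of the locking side, assuming the Webots Python API and a Connector named "connector" on the robot (the device name is an assumption; the tolerances themselves are set in the Connector node's distanceTolerance, axisTolerance and rotationTolerance fields in the scene tree):

```python
# Minimal Webots controller sketch: lock onto the box's Connector
# as soon as the two connectors are close and aligned enough.
from controller import Robot

robot = Robot()
timestep = int(robot.getBasicTimeStep())

# "connector" must match the `name` field of the Connector node.
connector = robot.getDevice("connector")
connector.enablePresence(timestep)

locked = False
while robot.step(timestep) != -1:
    # getPresence() returns 1 while a compatible Connector is within
    # the configured tolerances, 0 otherwise (-1 if not applicable).
    if not locked and connector.getPresence() == 1:
        connector.lock()  # grab the box
        locked = True
```

Also check that the two Connector nodes have compatible type fields (e.g. both "symmetric"); otherwise presence will never be reported.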

Related

What device/instrument/technology should I use for detecting objects lying on a given surface?

First off: thanks for taking the time to help me with my problem. It is much appreciated :)
I am building a natural user interface. I'd like the interface to detect several (up to 40) objects lying on it, and to detect when the objects are moved on its canvas. It is not important what the actual object on the surface is (e.g. a "bottle") or what color it has; only the shape and the placement of the object are of interest (e.g. a "circle").
So far I'm using a webcam connected to my computer and Processing's blob functionality to detect the objects on the surface of the interface (see picture 1). This approach has some major disadvantages for what I am trying to accomplish:
I do not want the user to see the camera or any alternative device, because it distracts the user's attention. Ideally the surface should be completely dark.
Whenever I reach in with my hand to rearrange the objects on the interface, the blob detection gets very busy and recognizes objects (my hand) that are not touching the canvas directly. This problem can hardly be tackled using a Kinect, because its depth functionality does not work through glass/acrylic glass; correct me if I am wrong.
It would be nice to install a few LEDs on the canvas controlled by an Arduino. Unfortunately, the light of the LEDs would disturb the blob detection.
Because of the camera's focal length, the table needs to be unnecessarily high (60 cm / 23 in).
Do you have any idea on an alternative device/technology to detect the objects? Would be nice if the device would work well with Processing and Arduino.
Thanks in advance! :)
Possibilities:
Use reflective tinted glass so that the surface looks dark or reflective.
Illuminate the area where you place the webcam with an array of IR LEDs.
I would suggest colour-based detection and contouring of the objects.
If you are using colour-based detection, convert frames to the HSV or YCrCb colour space; these are much better for segmenting the required area.
I recommend checking out https://github.com/atduskgreg/opencv-processing, which interfaces OpenCV with Processing, so you get a lot of OpenCV functionality inside Processing.
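A minimal sketch of the colour-threshold-plus-contours approach in Python with OpenCV (the HSV range and minimum contour area are placeholders to tune for your setup; the size filter also helps ignore a grazing hand):

```python
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # webcam index is an assumption

# Placeholder HSV range: bright, low-saturation blobs on a dark surface.
LOWER = np.array([0, 0, 120])
UPPER = np.array([180, 60, 255])
MIN_AREA = 500  # ignore blobs smaller than a real object

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    # OpenCV 4 signature; OpenCV 3 returns three values here.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < MIN_AREA:
            continue  # size filter
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("objects", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```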
One possibility:
Use a webcam with infrared capability (such as a security camera with built-in IR illumination). Apparently some normal webcams can be converted to IR use by removing a filter; I have no idea how common that is.
Make the tabletop out of some material that is IR-transparent, but opaque or nearly so to visible light. (Look at the lens on most any IR remote control for an example.)
This doesn't help much with #2, unfortunately. Perhaps you can be a bit pickier about the size/shape of the blobs you recognize as being your objects?
If you only need a few distinct points of illumination for #3, you could put laser diodes under the table, out of the path of the camera - that should make a visible spot on top, if the tabletop material isn't completely opaque. If you need arbitrary positioning of the lights - perhaps a projector on the ceiling, pointing down?
Look into OpenCV. It's an open source computer vision project.
In addition to existing ideas (which are great), I'd like to suggest trying TUIO Processing.
Once you have the camera set up (with the right field of view/lens/etc. based on your physical constraints), you could probably get away with sticking TUIO markers to the bottom of your objects.
The software will detect the markers; you'll differentiate the objects by ID, but also be able to get position/rotation/etc., and your hands will not be part of that.

KiCad: re-arrange schematic for a one-layer PCB

I created a simple schematic in KiCad (this one). Then I associated the corresponding footprints, generated the netlist file, and ran Pcbnew to create the PCB. But I ended up with this mess and I don't know how to deal with it (I did draw the edge of the PCB).
I am trying to draw a PCB that I can build myself without any crazy tooling (so I guess single layer), but in this drawing many lines intersect other lines, which is not what I was looking for in a single-layer PCB. Not sure if it will help, but all my KiCad files are here: kicad_file
The lines that you see are the connecting lines; in DipTrace they are called "ratlines" (KiCad calls this the "ratsnest"). They simply indicate where the connections are, based on your schematic. In DipTrace there is an "AutoRoute" tool which creates routes (traces) based on the ratlines, and it tries to do so as well as possible. I usually run the autorouter and, if I am not happy with the result, do manual routing.
So I suggest you look for such a tool in your application. If it does not exist, don't be shy about creating the routes yourself. Your PCB is not complicated, so I am sure you will manage.
One thing to keep in mind, though: if you find that traces still intersect, there are two things you can do:
Use the space between the pins of the IC
Place pads where you can solder jumper wires.
See https://www.wayneandlayne.com/blog/2013/02/27/kicad-tutorial-using-the-autorouter/ for a post on autorouting in KiCad.

Pause one of the branches in a DirectShow graph

I wrote an app for video capture. That app uses the following graph:
As you can see, after the Smart Tee it has two branches. The first, "Capture", I use for stream handling; the second, "Preview", shows the video in the app's window.
Sometimes the user minimizes that window, and the Preview branch is then not needed. In that case I would like to stop the stream on this branch only.
I can do it by stopping and rebuilding the whole graph without the Preview branch, but I would like to avoid stopping/rebuilding the graph. Does somebody know another way to do it? Any ideas?
It's not possible to stop just a part of a graph. For such a scenario you need multiple graphs (source, grabber, preview) and the GMFBridge.
By the way, why do you need the tee? Can't you connect the video renderer to the grabber?

Sensors and webpages

I wonder if you wonderful people can point me in the right direction?
I'm quite new to web programming; I'm OK at conventional C/C++ etc. but just really getting into the whole web-side thing. Anyway, I was wondering how I might solve the following.
I want to have a sensor (it doesn't matter what type it is; let's say a temperature sensor). This sensor will read its environment, reporting on (in this case) temperature. (Let's say the circuitry has been built and it's giving the data I need, via a desktop app.)
My questions:
I want to convert the sensor output to a 3D graph (any ideas, guys?).
I want to be able to show the graph on PCs/smartphones (so I think a web interface is the best approach, unless someone has better ideas and, most importantly, links).
I want to show the temperature change/graph in 'real time' (absolutely crucial) through the web interface.
I think the conversion of the data would have to be done server-side. I'm thinking of the following pipeline (and I may be wrong, so please do correct me):
Client side - get sensor data (via the desktop app)
Client side - transmit data (via the desktop app; not sure how this would be done yet)
Server side - convert data to graph
Server side - transmit data/graph
Client side - receive data
Client side - show graph
If this were just a question of rendering a sensor's data to a PC screen (a self-contained application), I would feel confident tackling it. However, I think where I'm getting stuck is the rendering/transmitting and displaying of the 3D images in real time using web technologies, platforms and languages. If it helps, I am just picking up PHP, MySQL and Python. I already know C/C++/VB and assembly.
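A minimal sketch of the server half of that pipeline, assuming Flask (the endpoint names and JSON shape are placeholders; the browser would poll /readings and redraw the graph client-side with a JavaScript plotting library):

```python
# Server-side sketch: receive readings from the desktop app and
# serve the latest values to any browser that polls for them.
from flask import Flask, request, jsonify

app = Flask(__name__)
readings = []  # in-memory buffer; swap in MySQL for persistence

@app.route("/reading", methods=["POST"])
def add_reading():
    # The desktop app POSTs JSON like {"t": 1700000000, "temp": 21.5}.
    readings.append(request.get_json())
    return "", 204

@app.route("/readings")
def get_readings():
    # The web page polls this (say, once a second) and redraws the
    # graph; WebSockets are the next step if polling proves too slow.
    return jsonify(readings[-500:])  # last 500 samples

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```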
I hope this makes sense, any starting points that you can give will be greatly appreciated
Jason :)
You could use MRTG: http://en.wikipedia.org/wiki/Multi_Router_Traffic_Grapher
It provides near-real-time PNG graphs of sensor data.
It is very easy to configure and typically shows/records a year of data in an RRD: http://en.wikipedia.org/wiki/Round-Robin_Database
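A sketch of how feeding MRTG could look: its Target can run an external script that prints four lines (two values, an uptime string, and a name). The paths and read_sensor() below are made-up placeholders:

```python
#!/usr/bin/env python
# Hypothetical reader script for an MRTG external-data Target, e.g.
# in mrtg.cfg:
#   Target[temp]: `/usr/local/bin/read_temp.py`
#   Options[temp]: gauge,growright,nopercent
# MRTG expects four lines on stdout: two integer values, an uptime
# string, and the target's name.

def read_sensor():
    """Placeholder: however the desktop app exposes the temperature."""
    return 21.5

temp = int(read_sensor())
print(temp)                    # first value ("input")
print(temp)                    # second value ("output"); same sensor
print("0")                     # uptime string (unused here)
print("temperature sensor")    # target name
```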

Dial a number and play a voice file instead of microphone input

We are trying to write an application that dials a number and plays a voice file instead of the microphone input. Is this possible on Maemo (N900)?
We cannot find any "answering machine"-like program on the N900. Does this mean that there is no way to play a voice file instead of the microphone input?
There is a way: play that voice file and make PulseAudio believe it's a proper input, and disable the microphone input. For more information, see my question:
How to redirect from Audio Output to Mic Input using PulseAudio?
It is possible, but you need good PulseAudio knowledge to do it; I can already set it up easily on my PC using pavucontrol. Drop me a message (or better, answer my question) if you bite the bullet and decide to learn how to use pactl/pacmd.
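For reference, a sketch of the usual null-sink trick on a desktop PulseAudio, wrapped in Python (the sink name and file path are placeholders, and whether the N900's PulseAudio build ships these modules is an open question):

```python
# Route a sound file into what applications see as the microphone:
# create a null sink, point the default source at its monitor, and
# play the file into the sink.
import subprocess

def run(*args):
    subprocess.run(args, check=True)

# 1. Create a virtual sink; PulseAudio exposes "virtmic.monitor"
#    as a matching source automatically.
run("pactl", "load-module", "module-null-sink", "sink_name=virtmic")

# 2. Make the monitor the default capture source instead of the mic
#    (on older PulseAudio this is `pacmd set-default-source`).
run("pactl", "set-default-source", "virtmic.monitor")

# 3. Play the voice file into the virtual sink; the call now
#    "hears" the file instead of the microphone.
run("paplay", "--device=virtmic", "/home/user/voice.wav")
```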
