I'd like to know if it's possible to triangulate (or otherwise determine) the location of a moving object inside a defined area, let's say 200m x 200m, using radio waves.
I've been looking at some transceivers and the range shouldn't be impossible (budget doesn't really matter). What would I need? Is there some reading material out there about this?
What I thought about was having a few antennas positioned around the defined area, listening for the RF signal from the moving object, and then somehow calculating the distance from the object to each antenna, that way getting the exact location of the object by combining the data from all antennas.
Is this somehow possible, anyone care to point me in the right direction?
Thanks a lot guys.
Edit: Forgot to mention that the accuracy wouldn't have to be so precise, maybe ~15cm?
Phased antenna arrays are used for beamforming: sending a signal in a certain direction and estimating direction of arrival.
DOA estimates from several antenna arrays could be used for localization, which is what you are looking for. This source explains that 2D localization can be performed with 3 receivers using only the TDOA information.
I'm not sure if it's practical or applicable to the problem you want to solve; it's just an avenue for investigation.
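To make the 2D-localization-from-TDOA idea concrete, here is a minimal numerical sketch, not a practical solver: the receiver positions, the 1 m search grid, and the noise-free TDOA measurements are all assumptions for illustration.

```python
import math

# Hypothetical receiver positions (metres) at three corners of a 200 m area.
RECEIVERS = [(0.0, 0.0), (200.0, 0.0), (0.0, 200.0)]
C = 299_792_458.0  # RF propagation speed (m/s)

def tdoas_for(point):
    """Time difference of arrival at each receiver, relative to receiver 0."""
    d = [math.dist(point, r) for r in RECEIVERS]
    return [(di - d[0]) / C for di in d[1:]]

def locate(measured_tdoas):
    """Brute-force 1 m grid search for the point whose TDOAs best match.

    Real systems solve the hyperbolic equations directly; a grid search
    just shows that the TDOA pair pins down a unique 2D position here.
    """
    best, best_err = None, float("inf")
    for xi in range(201):
        for yi in range(201):
            p = (float(xi), float(yi))
            err = sum((a - b) ** 2
                      for a, b in zip(tdoas_for(p), measured_tdoas))
            if err < best_err:
                best, best_err = p, err
    return best

true_pos = (120.0, 80.0)
print(locate(tdoas_for(true_pos)))  # → (120.0, 80.0)
```

With noisy measurements the minimum shifts, which is why accuracy claims like ~15 cm depend heavily on timing precision: at the speed of light, 15 cm corresponds to about half a nanosecond of timing error.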
Well, a similar question was posted at this link; you can definitely give it a shot: https://electronics.stackexchange.com/questions/8690/signal-triangulation
Personally I think, if your target area really is within 200m x 200m, you could take a look at RFID-based solutions. Passive RFID systems utilize something called a Received Signal Strength Indicator (RSSI) to determine how close an object is to an RFID reader. RSSI can't tell you the exact range, but you can surely find out if the object is getting nearer or farther. I have seen RFID systems used to identify the loading of trucks in an area roughly the same size as your requirement.
The only caution: if you are using multiple tags on an object to get the directivity of the target, RFID won't be so accurate, as the RSSI levels from different tags won't give a conclusive result.
A phased array system is highly accurate, but it's a tad costly to implement.
You can find some reference documents in this article. It has good collection of RF ranging and Direction Finding manuals.
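For a rough idea of how RSSI is turned into a range estimate, the usual starting point is the log-distance path-loss model. This is a sketch with illustrative numbers: the reference RSSI at 1 m and the path-loss exponent must be calibrated for the actual reader, tag, and environment.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate range (metres) from RSSI.

    rssi_at_1m is the calibrated signal strength at 1 m; path_loss_exp
    is ~2 in free space and higher indoors. Both defaults are assumptions.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

print(rssi_to_distance(-40.0))  # → 1.0 (at the reference RSSI)
print(rssi_to_distance(-60.0))  # → 10.0 (20 dB weaker, exponent 2)
```

The exponential relationship is also why RSSI ranging degrades quickly: a couple of dB of multipath fading translates into a large distance error, consistent with the caution above about inconclusive readings from multiple tags.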
There's quite a lot of academic work out there on this (e.g. a scholar search) and products (e.g. Ekahau). The simplest to have a go with is probably trilateration with hardware that reports an RSSI, which you use to infer distance. Time difference of arrival is another level of accuracy, and trickier.
A lot of these techniques are quite sensitive to the environment: 15cm accuracy in open space, with sufficient receivers, is doable. If you add walls, furniture and people it gets harder. Then you need to survey the site for what a beacon in that location looks like; add in the variation depending on where a user with the device is (big bags of water block radio); and then interpolate between places.
You have an arduino tag on your question. I'm not sure what the significance of this is, but as above - do check what data you can get from your hardware.
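Once you have three distance estimates (however obtained), trilateration itself is simple: the three circle equations can be linearized by subtracting the first from the others, leaving a 2x2 linear system. A minimal sketch, assuming exact distances and non-collinear anchors:

```python
import math

def trilaterate(anchors, dists):
    """Solve for (x, y) from three anchor points and measured distances.

    Subtracting the first circle equation from the other two cancels the
    quadratic terms; the remaining 2x2 system is solved by Cramer's rule.
    """
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    a1, b1 = 2 * (x0 - x1), 2 * (y0 - y1)
    c1 = d1**2 - d0**2 - x1**2 - y1**2 + x0**2 + y0**2
    a2, b2 = 2 * (x0 - x2), 2 * (y0 - y2)
    c2 = d2**2 - d0**2 - x2**2 - y2**2 + x0**2 + y0**2
    det = a1 * b2 - a2 * b1  # zero if the anchors are collinear
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

anchors = [(0.0, 0.0), (200.0, 0.0), (0.0, 200.0)]
target = (50.0, 120.0)
dists = [math.dist(target, a) for a in anchors]
print(trilaterate(anchors, dists))  # → (50.0, 120.0)
```

With noisy RSSI-derived distances the circles won't intersect in a single point, so in practice you would use more anchors and a least-squares fit rather than this exact solve.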
I am not sure about RF and antennas, but with multiple cameras (whose relative positions are known) looking at the same object, this can be achieved using structure from motion.
I have some questions.
The first question is what equipment should be used to recognize QR codes.
I'm thinking of two things.
The first is a QR code scanner as used in industrial settings.
The second is a camera module (OpenCV would be used).
However, one thing to consider is that the code should be recognized at a speed of 50 cm/s.
What do you think?
And if I use a camera, is there a library you can recommend for recognizing QR codes? (C/C++ only)
Always start with the simplest solution and then go more complex if needed. If you're using ROS/OpenCV, OpenCV has a built-in QR code scanner (the QRCodeDetector class). Other options include ZBar, quirc, and more, found by searching GitHub or the internet.
As for a camera, if you don't need the intrinsic matrix, then you only need to decide on the resolution: more resolution takes (non-linearly) longer to compute, but less resolution prevents seeing the objects well.
Your comment about "recognize at 50cm/s" doesn't make much sense. I assume you mean that you want to be able to decode a QR code that's up to 50 cm away, and do it in less than a second (to have time to stop). First you'll have to check whether the algorithm, running on your hardware, can detect the QR code at the desired distances, and how that changes with scaling the image up/down in OpenCV. Then you'll have to time how long it takes to detect/decode at those distances/resolutions/scales. If it isn't good enough, you can try another algorithm, try different compilation settings, give it its own thread, change the scaling of the image, accept the limitations, or change the hardware.
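If "50 cm/s" really does refer to relative motion between camera and code, the quick sanity check is motion blur: how far the code travels during one exposure, measured in QR modules. A back-of-envelope sketch; the speed, exposure time, and module size below are assumed figures, not measurements.

```python
def blur_in_modules(speed_mm_s, exposure_s, module_mm):
    """How many QR modules the code travels during a single exposure.

    Much above ~0.5 module of blur, the finder patterns start to smear
    and decoding becomes unreliable (rule of thumb, not a hard limit).
    """
    return speed_mm_s * exposure_s / module_mm

# Assumed figures: 500 mm/s motion, 1 ms exposure, 0.5 mm modules.
print(blur_in_modules(500, 0.001, 0.5))  # → 1.0 module of blur
# Same motion with a 0.1 ms exposure:
print(blur_in_modules(500, 0.0001, 0.5))  # → 0.1 module: likely fine
```

This suggests the fix for motion is usually a shorter exposure (and more light), independent of which decoding library you pick.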
I'm having a hard time understanding how flip-flops actually flip states, and I'm wondering why such a design is so commonly used when, in my current opinion, a simpler design could suffice.
I'm hoping that after showing you my version of a latch diagram, someone could point out the flaws and that may help me understand why a flip-flop latch is better.
I was reading a book and bumped into some "general" form of latch:
https://i.imgur.com/nkldf4u.png (sorry, I don't have the reputation to insert images)
I've been at it for about 2 hours trying to truly grasp the mechanism. Seeing that I can't, I've drawn my own version of a latch:
https://i.imgur.com/fFgpNzR.png
The blue diagram, the one from the book, is harder to follow because some gates will switch twice when an input switches once: since the output is fed back as an input to the same gate, the output may change based on its previous value.
My version of the diagram, the one in black, uses a more programmable approach. I take the current state C and decide whether it differs from the input state, outputting this into A. I use A in an AND gate with the enable wire to decide whether both criteria are met, and put the result in B. Finally, I use a XOR to change the state and output it as C.
I'm hoping someone can tell me why is this bad, what I haven't taken into consideration or why a more complex mechanism is needed.
Thank you in anticipation.
As far as I can tell, your latch implementation should work.
However, there is more to low-level digital design than just gate count. In actual circuits, not all gates are created equal: the actual implementation of these gates can make some more "costly" than others (usually measured in area/transistor count and complexity in routing). For typical CMOS implementations, NAND gates are really cheap (only 4 transistors for a two-input NAND), so a lot of primitives use NAND (or NOR) as a building block for more complex designs. XOR is generally a more complicated gate to implement; most CMOS implementations I've seen use 8 transistors. Without going through and optimizing your design, it might take at least 20 or more transistors to implement, while the latch design from the textbook only takes 16 (a 20%+ savings in area per bit stored, which is quite significant). There is a lot more at play than just transistor count as well: things like transistor sizing, routing and trace sizing, power considerations and glitch protection all come up when actually implementing designs, so even this simple analysis is incomplete and might be missing reasons for the textbook implementation vs. yours (or vice versa).
Asynchronous sequential logic (which is what latch/flip-flop implementations are) can be difficult to understand, which is why most circuits use higher-level constructs and treat these details as black boxes (it also creates a nice abstraction where the actual implementation doesn't matter so long as the properties of that element are preserved).
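At the behavioral level, the XOR-based scheme described in the question can be checked with a tiny simulation. This models only the logic (A = C XOR D, B = A AND enable, next C = C XOR B), ignoring the real-world timing and glitch issues discussed above.

```python
def xor_latch_step(state, data, enable):
    """One settled step of the XOR-based latch from the question.

    a = state XOR data   (does the stored bit differ from the input?)
    b = a AND enable     (only act while the latch is enabled)
    new state = state XOR b  (flip the stored bit iff it must change)
    """
    a = state ^ data
    b = a & enable
    return state ^ b

state = 0
state = xor_latch_step(state, 1, 1)  # enabled: latch follows input → 1
state = xor_latch_step(state, 0, 0)  # disabled: input ignored → still 1
state = xor_latch_step(state, 0, 1)  # enabled again: follows input → 0
print(state)  # → 0
```

Functionally this behaves like a level-sensitive D latch, which matches the answer's point: the question's design works; the textbook version wins on transistor-level cost, not on logical behavior.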
I am building a very exciting project, and I am creating this post to get new, fresh and crazier ideas.
I have a big wall, and I am shooting at it. I need to calculate the exact X and Y coordinates where the projectile impacts.
There are several challenges:
Not all bullets have the same mass or speed.
I may be using more than one gun, so two bullets may hit within a short interval of each other.
I may play a video on the wall, so a Kinect reading holes may get confused.
If possible, I don't want to add any enhancements to the guns or bullets.
I may not have full control over the type of wall; regular bricks and paint is preferred.
With all that said, I am fully open to options. The above are not constraints, and I am willing to change the approach and start from scratch.
My approach so far has been to set up three motion sensors and triangulate the position based on the impact wave. So far it has been inaccurate and has needed lots of tuning.
If you want to improve my approach suggesting a sensor or software that I may be missing, please be welcome, but as I said, I am willing to start it over.
This project will be completed, so if your solution is great, you will contribute to something very cool, and I can send you a picture of it or even invite you to shoot with me ;-)
Use a high-speed IR camera to detect flares on a color-flattened image (use OpenCV or another library to convert 24-bit RGB etc. to a black & white image).
Take the FOV of the camera and calculate the X & Y offset by triangulation based on the distance to the wall and the translation from the camera image's centre-point. If you need a matrix library use the GLM library - it's fast and will contain all the functions you need.
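The FOV-based offset calculation can be sketched with a pinhole camera model. This assumes the camera faces the wall squarely and ignores lens distortion; the resolution, FOV, and distance below are made-up example values.

```python
import math

def pixel_to_wall(px, py, width, height, hfov_deg, distance_m):
    """Map an image point to X/Y offsets (metres) on a wall facing the camera.

    The focal length in pixels is derived from the horizontal field of
    view; offsets are measured from the point straight ahead of the lens.
    """
    f = (width / 2) / math.tan(math.radians(hfov_deg) / 2)
    x = distance_m * (px - width / 2) / f
    y = distance_m * (py - height / 2) / f
    return x, y

# A flare at the image centre maps to the point straight ahead.
print(pixel_to_wall(320, 240, 640, 480, 60.0, 5.0))  # → (0.0, 0.0)
# A flare at the right edge, 5 m away, 60° horizontal FOV:
print(pixel_to_wall(640, 240, 640, 480, 60.0, 5.0)[0])  # → ~2.89 m
```

If the camera is at an angle to the wall, you would instead compute a homography from four known wall points, which is what libraries like OpenCV or GLM make easy.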
Good Luck!
This is just an idea. It might sound a little complicated but it might work.
Pretty much, what you have to do is build a laser array with a sensor at the end of each beam. Here is what I mean.
Now let's say that you have way more lasers and sensors.
After that, you can use a photocell / photoresistor / light-dependent resistor to see when and which laser beam is broken. Then, by having one laser on the X axis and one on the Y axis, you can pinpoint the bullet.
Now this gets complicated if there are many sensors. A trick I use is to wire the photocells to an analog-to-digital converter and connect it to an array of shift registers (aka an IO expander) on the Arduino. Thus, we can know which sensor got triggered.
This method respects many of your constraints. It can detect a big or a small bullet, no matter the speed (though a faster Arduino could help). It can detect hits even if a video is playing on the wall. If calibrated properly, the laser light will pretty much saturate the photocell, and if the beam is cut, even slightly, the light intensity will drop quite a bit, indicating that a bullet passed at that point. No bullet / gun mod is needed. If you mount this on a rack-type "mobile" construction, it can be used on many types of wall, and you only need to realign both axes before using it again.
This might sound complicated, but this is just an idea / suggestion. If anybody has any suggestions for the analog to IO thing, please comment it.
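The readout logic for such a beam grid is straightforward to sketch. This assumes normalized photocell intensities and a fixed threshold, both of which would need calibration in a real build.

```python
def broken_beams(x_readings, y_readings, threshold=0.5):
    """Return (x, y) grid cells where both an X and a Y beam are broken.

    Readings are normalized photocell intensities (1.0 = beam intact);
    a value under the threshold means that beam is interrupted.
    """
    xs = [i for i, v in enumerate(x_readings) if v < threshold]
    ys = [j for j, v in enumerate(y_readings) if v < threshold]
    return [(x, y) for x in xs for y in ys]

# X beam 2 and Y beam 0 are interrupted by a passing bullet.
print(broken_beams([1.0, 0.9, 0.1, 1.0], [0.2, 1.0, 0.95]))  # → [(2, 0)]
```

Note the ambiguity this exposes: if two bullets pass at once, two X beams and two Y beams break, and the grid reports four candidate positions, which ties back to the constraint about near-simultaneous shots from multiple guns.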
Have you considered a thermal camera? I saw this video a while back, where a guy shoots at a target and captures his shots through a thermal camera. At minute 1:00, once the bullets hit the target, a heat spot appears for a brief period of time.
The way I would go about it is to place the camera at the closest possible distance from the wall and get an initial shot of the target area. Then every bullet fired will cause a short irregularity of heat, on the wall, which will be the point of impact.
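The "initial shot of the target area" idea is plain frame differencing: subtract the baseline thermal image from each new frame and look for the largest temperature rise. A toy sketch with nested lists standing in for real thermal frames:

```python
def hottest_change(baseline, frame):
    """Locate the pixel with the largest temperature rise vs. the baseline.

    Returns (x, y) of the hottest new spot, or None if nothing got warmer.
    """
    best, best_delta = None, 0.0
    for y, (row_b, row_f) in enumerate(zip(baseline, frame)):
        for x, (b, f) in enumerate(zip(row_b, row_f)):
            if f - b > best_delta:
                best, best_delta = (x, y), f - b
    return best

baseline = [[20.0] * 4 for _ in range(3)]       # wall at ~20 °C
frame = [row[:] for row in baseline]
frame[1][2] = 35.0                              # brief heat spot from an impact
print(hottest_change(baseline, frame))          # → (2, 1)
```

In practice you would threshold the delta (to ignore sensor noise) and refresh the baseline slowly, since the wall itself warms up around previous impact points.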
This is sort of already being done using acoustic sensors.
http://www.shotspotter.com/
It has a few patents as well
https://www.google.ch/patents/US5551876
and if you are really bored
http://www.scientific.net/AMM.239-240.735
http://russianpatents.com/patent/247/2470252.html
https://www.google.ch/patents/US4303853
https://www.scientificamerican.com/article/acoustic-sensor-drone-surveillance-war/
http://www.isis.vanderbilt.edu/sites/default/files/ipsn07-sallai.pdf
http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=1026044
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.622.124&rep=rep1&type=pdf
And maybe a read of this book
https://www.amazon.co.uk/Battlefield-Acoustics-Thyagaraju-Damarla/dp/3319160354
Imagine you have a channel of communication that is inherently lossy and one-way. That is, there is some inherent noise that is impossible to remove that causes, say, random bits to be toggled. Also imagine that it is one way - you cannot request retransmission.
But you need to send data over it regardless. What techniques can you use to send numbers and text over that channel?
Is it possible to encode numbers so that even with random bit twiddling they can still be interpreted as values close to the original (lossy transmission)?
Is there a way to send a string of characters (ASCII, say) in a lossless fashion?
This is just for fun. I know you can use morse code or any very low frequency binary communication. I know about parity bits and checksums to detect errors and retrying. I know that you might as well use an analog signal. I'm just curious if there are any interesting computer-sciency techniques to send this stuff over a lossy channel.
Depending on some details that you don't supply about your lossy channel, I would recommend, first using a Gray code to ensure that single-bit errors result in small differences (to cover your desire for loss mitigation in lossy transmission), and then possibly also encoding the resulting stream with some "lossless" (==tries to be loss-less;-) encoding.
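For reference, binary-reflected Gray coding is a couple of lines of bit twiddling. This sketch shows the round trip and what a low-order bit error does to the decoded value (note the caveat: a flip in a *high*-order Gray bit can still decode far from the original, so the graceful degradation mainly covers errors in the low bits).

```python
def to_gray(n):
    """Binary-reflected Gray code: adjacent integers differ in one bit."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by folding the shifted word back in."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

g = to_gray(57)
print(from_gray(g))        # → 57 (clean round trip)
print(from_gray(g ^ 0b1))  # → 56: the lowest bit flipped, off by only 1
```

Used as a pre-coding step under a block code like Reed-Solomon, this means residual uncorrected errors tend to land near the intended value instead of anywhere at all.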
Reed-Solomon and variants thereof are particularly good if your noise episodes are prone to occur in small bursts (several bit mistakes within, say, a single byte), which should interoperate well with Gray coding (since multi-bit mistakes are the killers for the "loss mitigation" aspect of Gray, designed to degrade gracefully for single-bit errors on the wire). That's because R-S is intrinsically a block scheme, and multiple errors within one block are basically the same as a single error in it, from R-S's point of view;-).
R-S is particularly awesome if many of the errors are erasures -- to put it simply, an erasure is a symbol that has most probably been mangled in transmission, BUT for which you DO know the crucial fact that it HAS been mangled. The physical layer, depending on how it's designed, can often have hints about that fact, and if there's a way for it to inform the higher layers, that can be of crucial help. Let me explain erasures a bit...:
Say for a simplified example that a 0 is sent as a level of -1 volt and a 1 is sent as a level of +1 volt (wrt some reference wave), but there's noise (physical noise can often be well-modeled, ask any competent communication engineer;-); depending on the noise model the decoding might be that anything -0.7 V and down is considered a 0 bit, anything +0.7 V and up is considered a 1 bit, anything in-between is considered an erasure, i.e., the higher layer is told that the bit in question was probably mangled in transmission and should therefore be disregarded. (I sometimes give this as one example of my thesis that sometimes abstractions SHOULD "leak" -- in a controlled and architected way: the Martelli corollary to Spolsky's Law of Leaky Abstractions!-).
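That threshold-with-erasure-zone decoding rule is tiny in code; the ±0.7 V figure below is just the illustrative threshold from the example above.

```python
def decode_symbol(volts, threshold=0.7):
    """Hard-decision decoding with an erasure zone, per the example above.

    Strong levels decode to a bit; anything in the ambiguous middle band
    is flagged as an erasure for the higher (e.g. Reed-Solomon) layer.
    """
    if volts <= -threshold:
        return 0
    if volts >= threshold:
        return 1
    return "erasure"  # probably mangled: tell the decoder to distrust it

print([decode_symbol(v) for v in (-0.9, 0.2, 1.1)])  # → [0, 'erasure', 1]
```

This is exactly the "controlled leak" being described: the physical layer passes its uncertainty upward instead of silently guessing, and the R-S decoder exploits it.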
A R-S code with any given redundancy ratio can be about twice as effective at correcting erasures (errors the decoder is told about) as it can be at correcting otherwise-unknown errors -- it's also possible to mix both aspects, correcting both some erasures AND some otherwise-unknown errors.
As the cherry on top, custom R-S codes can be (reasonably easily) designed and tailored to reduce the probability of uncorrected errors to below any required threshold θ given a precise model of the physical channel's characteristics in terms of both erasures and undetected errors (including both probability and burstiness).
I wouldn't call this whole area a "computer-sciency" one, actually: back when I graduated (MSEE, 30 years ago), I was mostly trying to avoid "CS" stuff in favor of chip design, system design, advanced radio systems, &c -- yet I was taught this stuff (well, the subset that was already within the realm of practical engineering use;-) pretty well.
And, just to confirm that things haven't changed all that much in one generation: my daughter just got her MS in telecom engineering (strictly focusing on advanced radio systems) -- she can't design just about any serious program, algorithm, or data structure (though she did just fine in the mandatory courses on C and Java, there was absolutely no CS depth in those courses, nor elsewhere in her curriculum -- her daily working language is matlab...!-) -- yet she knows more about information and coding theory than I ever learned, and that's before any PhD level study (she's staying for her PhD, but that hasn't yet begun).
So, I claim these fields are more EE-y than CS-y (though of course the boundaries are ever fuzzy -- witness the fact that after a few years designing chips I ended up as a SW guy more or less by accident, and so did a lot of my contemporaries;-).
This question is the subject of coding theory.
Probably one of the better-known methods is to use Hamming code. It might not be the best way of correcting errors on large scales, but it's incredibly simple to understand.
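To show just how simple it is, here is a sketch of the classic Hamming(7,4) code, which corrects any single-bit error in a 7-bit codeword; the bit layout follows the standard 1-based parity positions.

```python
def hamming74_encode(d):
    """Encode 4 data bits (as a list) into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit and return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[3] ^= 1                          # flip one bit in transit
print(hamming74_decode(code))         # → [1, 0, 1, 1]
```

The elegance is that the three parity checks directly spell out the binary position of the error; the cost is 3 parity bits per 4 data bits, and two errors in one block defeat it, which is where block codes like Reed-Solomon take over.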
There is the redundant encoding used in optical media that can recover bit-loss.
ECC is also used in hard-disks and RAM
The TCP protocol can handle quite a lot of data loss with retransmissions.
Either Turbo Codes or Low-density parity-checking codes for general data, because these come closest to approaching the Shannon limit - see wikipedia.
You can use Reed-Solomon codes.
See also the Sliding Window Protocol (which is used by TCP).
Although this includes dealing with packets being re-ordered or lost altogether, which was not part of your problem definition.
As Alex Martelli says, there's lots of coding theory in the world, but Reed-Solomon codes are definitely a sweet spot. If you actually want to build something, Jim Plank has written a nice tutorial on Reed-Solomon coding. Plank has a professional interest in coding with a lot of practical expertise to back it up.
I would go with some of these suggestions, followed by sending the same data multiple times. That way you can hope for different errors to be introduced at different points in the stream, and you may be able to infer the desired number much more easily.
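Sending the data multiple times and voting is the simplest error-correcting code of all, the repetition code. A minimal sketch with three copies and a per-bit majority vote:

```python
def send_with_repeats(bits, copies=3):
    """Transmit the same bit sequence several times (as independent copies)."""
    return [bits[:] for _ in range(copies)]

def majority_decode(received):
    """Recover each bit position by majority vote across the copies."""
    return [1 if sum(col) * 2 > len(received) else 0
            for col in zip(*received)]

copies = send_with_repeats([1, 0, 1, 1])
copies[1][2] ^= 1                  # a random error hits one of the copies
print(majority_decode(copies))     # → [1, 0, 1, 1]
```

It is very inefficient (3x bandwidth to survive one error per position), which is exactly why Hamming, Reed-Solomon, turbo, and LDPC codes mentioned above exist: they buy far more protection per redundant bit.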
I am creating a game where I want to determine the intersection of a single line. For example if I create a circle on the screen I want to determine when I have closed the circle and figure out the points that exist within the area.
Edit: Ok to clarify I am attempting to create a lasso in a game and I am attempting to figure out how I can tell if the lasso's loop is closed. Is there any nice algorithm for doing this? I heard that there is one but I have not found any references searching on my own.
Edit: Adding more detail
I am working with an array of points. These points happen to wrap around and close. I am trying to figure out a good way of testing for this.
Thanks for the help.
Thoughts?
Your question has been addressed many times in the game development literature. It falls under the broad category of "collision detection." If you are interested in understanding the underlying algorithms, the field of computational geometry is what you want.
Bounding rectangle collision detection in Java
Collision detection on Stack Overflow
Circle collision detection in C#
Collision detection algorithms
Detailed explanation of collision detection algorithms
Game development books will also describe collision detection algorithms. One book of this sort is Game Physics by Eberly.
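For the specific lasso case (an array of points where you want to know when the path has wrapped around on itself), the standard tool is a segment self-intersection test using orientation (cross-product sign) checks. A sketch; this detects proper crossings only, so exactly-touching endpoints or collinear overlaps would need extra handling.

```python
def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p1, p2, p3, p4):
    """True if segment p1p2 properly crosses segment p3p4."""
    return (_orient(p1, p2, p3) != _orient(p1, p2, p4)
            and _orient(p3, p4, p1) != _orient(p3, p4, p2))

def lasso_closed(points):
    """True once the newest segment crosses any earlier, non-adjacent one."""
    if len(points) < 4:
        return False
    a, b = points[-2], points[-1]
    return any(segments_cross(a, b, points[i], points[i + 1])
               for i in range(len(points) - 3))  # skip the adjacent segment

path = [(0, 0), (4, 0), (4, 4), (0, 4), (1, -1)]
print(lasso_closed(path))  # → True: the last segment crosses the first
```

Call `lasso_closed` after appending each new point; once it fires, the crossing pair tells you where the loop closes, and a point-in-polygon test over the enclosed portion gives the points inside the lasso.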