My understanding is that a QR code contains the data being read, and that no internet connection is needed to interpret it. If that is the case, why do I get a different QR code every time I generate a new one from the same data?
I see definite differences if I use two different generators to create the same code. For instance, creating a URL link to http://www.yahoo.com creates two different QRs on these sites:
http://qrcode.kaywa.com/
http://zxing.appspot.com/generator/
Mind that QR codes can use 4 different levels of error correction, labeled L, M, Q and H. There is also a process called masking, intended to make reading more robust by distributing the black and white pixels over the image; the spec defines 8 masking patterns, each of which produces a valid QR code with a different appearance. Read the specification for more info on those.
That being said, a given generator with the same settings should always produce the same output, which is what your original question was about. Comparing two different generators, however, can yield two different images because of the effects mentioned above.
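To make that concrete, here is a minimal sketch in Go using the third-party go-qrcode package (the package choice is an assumption; any deterministic generator behaves the same way). Encoding the same URL twice with the same settings yields byte-identical images, while changing only the error-correction level changes the image even though the data is unchanged:

package main

import (
	"bytes"
	"fmt"

	qrcode "github.com/skip2/go-qrcode"
)

func main() {
	url := "http://www.yahoo.com"

	// Same data, same settings: the generated PNGs are byte-identical.
	a, _ := qrcode.Encode(url, qrcode.Medium, 256)
	b, _ := qrcode.Encode(url, qrcode.Medium, 256)
	fmt.Println("same settings, identical image:", bytes.Equal(a, b)) // true

	// Same data, higher error-correction level: a different, denser image.
	c, _ := qrcode.Encode(url, qrcode.High, 256)
	fmt.Println("different EC level, identical image:", bytes.Equal(a, c)) // false
}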
Spec link, randomly picked off of Google (I'm mentioning this because ISO is selling the QR specification as a standard document):
http://raidenii.net/files/datasheets/misc/qr_code.pdf
The two sites might use two different versions of the QR code standard.
This picture shows that certain areas of the code hold information about the version and format used, so two QR codes might differ in those areas. I really don't know how QR codes work, but I assume that a different version or format would also mean that the rest of the data is ordered or encoded differently.
http://en.wikipedia.org/wiki/File:QR_Code_Structure_Example.svg
They are the same... Google & Nokia.
Kaywa's looks different to the eye but contains the same info.
In any case, a QR code is not different on every generation.
I need to create a cooperative music identification service. Every user will have an option to fingerprint a song and send it to the server along with its meta information. At the beginning the service database will be empty, and every time a fingerprint is received, the meta data for that song will be updated (the server will assign meta data to a fingerprint by majority choice if different users send different information for the same fingerprint).
I need to calculate a fingerprint for the whole song; I do not need to identify a song from just a fraction of it.
The fingerprint does not need to be 100% accurate. I will be happy if two song files receive the same fingerprint whenever they are the same recording encoded at different compression rates. A low level of noise independence would be a plus.
Silence at the beginning or end of the song will not be a problem; I can remove it using a standard silence-suppression algorithm (and here too I do not need a very precise result).
I know there are open-source libraries like http://echoprint.me/ and https://acoustid.org/, but these are excessive for my needs: if I understood correctly, they can identify a song from just a part of it, which requires a heavy database. I need an algorithm that gives me a small (a few KB) fingerprint for the whole song.
Which is the simplest and fastest algorithm I can use?
Thanks to all
I suggest you use the AcoustID project. Your description matches this project on a lot of points; only some of their approaches differ from what you suggest.
Can the service identify short audio snippets?
No, it can't. The service has been designed for identifying full audio files. We would like to eventually support also this use case, but it's not a priority at the moment. Note that even when this will be implemented, it will be still intended for matching the original audio (e.g. for the purpose of tracklisting a long audio stream), not audio with background noise recorded on a phone.
Have a look at their mailing list for some better explanations: https://groups.google.com/forum/#!forum/acoustid
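If AcoustID does fit, a whole-file fingerprint is one shell-out away. Here is a minimal sketch in Go, assuming Chromaprint's fpcalc command-line tool is installed and on the PATH, and that its classic text output (DURATION= and FINGERPRINT= lines) is in use; the input filename is hypothetical:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// fingerprint runs Chromaprint's fpcalc tool on an audio file and returns
// the compact whole-file fingerprint string it prints.
func fingerprint(path string) (string, error) {
	out, err := exec.Command("fpcalc", path).Output()
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(out), "\n") {
		if strings.HasPrefix(line, "FINGERPRINT=") {
			return strings.TrimPrefix(line, "FINGERPRINT="), nil
		}
	}
	return "", fmt.Errorf("no fingerprint in fpcalc output")
}

func main() {
	fp, err := fingerprint("song.mp3") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(fp) // a few KB of text for the whole song
}

Note that Chromaprint fingerprints of the same recording at different bitrates are very similar but not necessarily byte-identical, so the server-side matching should be fuzzy rather than an exact string comparison.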
Simplified Question:
Is it practical for a programmer to keep track of the addresses of variables, so that a variable's address can be used as a point of data on that variable?
Original Question:
I am attempting to wrap my head around how variables are stored and referenced by address using pointers in Go.
As a general principle, is it ever useful to assign a variable's address directly? I can imagine a situation in which data could be encoded in the physical (virtual) address of a variable, and not necessarily in the value of that variable.
For instance, the 1000th customer has made 500 dollars of purchases. Could I store an integer at location 1000 with a value of 500?
I know that the common way to do something like this is with an array, where the variable at position 999 corresponds to the 1000th customer, but my question is not about arrays, it's about assigning addresses directly.
Suppose I'm dealing with billions of objects. Is there an easy way to use the address as part of the data on the object, and the value stored at that location as different data?
For instance, an int at address 135851851904 holds a value of 46876, 135851851905 holds 123498761, etc. I imagine at this point an array or slice would be far too large to be efficient.
Incidentally, if my question stems from a misunderstanding, can someone point me to a resource that explains the topic in deep but understandable detail? I have been unable to find a good resource on the subject that really explains the details.
is it ever useful to assign a variable's address directly?
You can use the unsafe package to achieve that, but the idea is that you don't do it unless you have a concrete and otherwise unsolvable use case that requires it.
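For illustration, a minimal sketch of what unsafe gives you: you can view and reuse the address of a variable you already have, but you never get to choose that address yourself.

package main

import (
	"fmt"
	"unsafe"
)

func main() {
	x := 500
	p := unsafe.Pointer(&x) // the runtime picked this address, not us
	fmt.Printf("x lives at %#x\n", uintptr(p))

	// Reading back through a pointer we legitimately obtained is fine.
	// Fabricating an address such as 1000 and dereferencing it would
	// crash or corrupt memory, as explained below.
	fmt.Println(*(*int)(p)) // 500
}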
Could I store an integer at location 1000 with a value of 500?
As mentioned before, it is possible, but choosing an arbitrary address won't get you far, because it may not even be mapped. If you write to such a location you'll get an access violation (and your program will crash). If you happen to hit a valid address, you'll likely be overwriting other data that your program needs to run.
Is there an easy way to use the address as part of the data on the object, and the value stored at that location as different data?
In general no.
If you managed to build some kind of algebraic structure, closed under the operations by which your own pointer arithmetic is defined, over a finite set of addresses that you could guarantee to always be a valid virtual memory segment, then yes; but it defeats the purpose of using a garbage-collected language, and such a program would be hell to read.
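Since you cannot pick addresses in Go, the idiomatic substitute for "data encoded in the address" is a map whose key plays the role the address would have played; only the entries you actually set consume memory, so a sparse space of billions of possible keys is fine. A minimal sketch using the customer example from the question:

package main

import "fmt"

func main() {
	// Instead of "an integer at address 1000 holding 500", key the value
	// by customer number. The map stores only the entries that exist.
	purchases := make(map[int64]int64)
	purchases[1000] = 500 // the 1000th customer has purchased 500 dollars

	fmt.Println(purchases[1000]) // 500
	fmt.Println(purchases[999])  // 0: missing keys read as the zero value
}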
I'd like to know if it's possible to somehow triangulate (or otherwise obtain) the location of a moving object inside a defined area, let's say 200m x 200m, using radio waves.
I've been looking at some transceivers, and the range shouldn't be impossible (budget doesn't really matter). What would I need? Is there some reading material out there about this?
What I thought about was having a few "antennas" positioned around the defined area, listening for the RF signal from the moving object, and then somehow calculating the distance from the object to each antenna, so that the exact location of the object can be obtained by combining the data from all antennas.
Is this somehow possible? Anyone care to point me in the right direction?
Thanks a lot guys.
Edit: Forgot to mention that the accuracy wouldn't have to be so precise, maybe ~15cm?
Phased antenna arrays are used for beamforming (sending or receiving a signal in a certain direction) and for estimating the direction of arrival (DOA).
DOA estimation with several antenna arrays could be used for localization, which is what you are looking for. This source explains that 2D localization can be performed with 3 receivers using only the TDOA information.
I'm not sure if it's practical or applicable to the problem you want to solve; it's just an avenue for investigation.
A similar question was posted here; you can definitely give it a shot: https://electronics.stackexchange.com/questions/8690/signal-triangulation
Personally I think that if your target area is really within 200m x 200m, you can take a look at RFID-based solutions. Passive RFID systems use the Received Signal Strength Indicator (RSSI) to determine how close an object is to an RFID reader. RSSI can't tell you the exact range, but you can tell whether the object is getting nearer or farther. I have seen RFID systems used to identify the loading of trucks in an area roughly the same size as your requirement.
The one caution is that if you are using multiple tags on an object to get the target's direction, RFID won't be so accurate, as the RSSI levels from the different tags won't give a conclusive result.
A phased-array system is highly accurate, but it's a tad costly to implement.
You can find some reference documents in this article. It has a good collection of RF-ranging and direction-finding manuals.
There are quite a lot of academic papers out there on this, e.g. scholar search, and products, e.g. ekahau. The simplest approach to have a go with is probably trilateration with hardware that reports an RSSI, which you use to infer distance. Time difference of arrival is another level of accuracy, and trickier.
A lot of these techniques are quite sensitive to the environment: 15cm accuracy in open space, with sufficient receivers, is doable. If you add walls, furniture and people it gets harder. Then you need to survey the site for what a beacon in that location looks like; add in the variation depending on where a user with the device is (big bags of water block radio); and then interpolate between places.
You have an arduino tag on your question. I'm not sure what the significance of this is, but as above - do check what data you can get from your hardware.
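As a concrete illustration of RSSI-based trilateration, here is a minimal sketch in Go. The path-loss exponent, the reference RSSI, and the receiver placement are all assumptions you would calibrate on site; distances are inferred from RSSI with the standard log-distance model, and the position is then solved from the three circle equations by Cramer's rule:

package main

import (
	"fmt"
	"math"
)

// rssiToDistance inverts the log-distance path-loss model:
// RSSI = rssi0 - 10*n*log10(d/d0), with d0 = 1 m. rssi0 (signal strength
// at 1 m) and n (path-loss exponent, ~2 in free space) must be calibrated.
func rssiToDistance(rssi, rssi0, n float64) float64 {
	return math.Pow(10, (rssi0-rssi)/(10*n))
}

// trilaterate finds the 2-D position from three receivers at (x1,y1),
// (x2,y2), (x3,y3) and estimated distances d1, d2, d3. Subtracting the
// circle equations pairwise gives two linear equations in x and y.
func trilaterate(x1, y1, d1, x2, y2, d2, x3, y3, d3 float64) (float64, float64) {
	a := 2 * (x2 - x1)
	b := 2 * (y2 - y1)
	c := d1*d1 - d2*d2 - x1*x1 + x2*x2 - y1*y1 + y2*y2
	d := 2 * (x3 - x2)
	e := 2 * (y3 - y2)
	f := d2*d2 - d3*d3 - x2*x2 + x3*x3 - y2*y2 + y3*y3
	den := a*e - b*d // zero if the receivers are collinear: avoid that layout
	return (c*e - b*f) / den, (a*f - c*d) / den
}

func main() {
	// Example: with rssi0 = -40 dBm and n = 2, an RSSI of -62 dBm maps
	// to roughly 12.6 m.
	fmt.Printf("distance for -62 dBm: %.1f m\n", rssiToDistance(-62, -40, 2))

	// Receivers at three corners of the 200 m x 200 m area; the distances
	// correspond to an object at (50, 120).
	x, y := trilaterate(0, 0, 130.00, 200, 0, 192.09, 0, 200, 94.34)
	fmt.Printf("estimated position: (%.1f, %.1f)\n", x, y)
}

In practice each RSSI-derived distance carries substantial error indoors, so you would use more than three receivers and a least-squares fit rather than an exact solve.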
I'm not sure about RF and antennas, but with multiple cameras (whose relative positions are known) looking at the same object, this can be achieved using structure from motion.
I'm running a suite of some 2000+ performance tests on our software for every code change that someone makes (and for each test I collect 5 to 10 samples). I have a history of performance results for thousands of code changes.
When someone makes a code change that causes the test to run slower, I want to be told as soon as possible (though I can wait for results from another 1 or 2 code changes).
That's the gist of the problem.
There is some natural variance in these tests, and we will see occasional spikes that are just noise, maybe because some background process on the computer was doing something that caused the test to run slower this time. I do NOT want to be notified when the test ran slower for such reasons. I understand there will still be some type I error, but I want to minimize it.
Almost all code changes have no real effect on performance, and those that do usually affect a specific subset of tests.
But because essentially any code change throughout our history can have changed the mean, standard deviation, or any other property of a test's results, using that history seems precarious.
Still, my problem can't be completely unique. What options do I have?
This is a graph of how one of the tests performs over time. The y axis represents the time the test took (lower is better), and the x axis is each of our code changes over time, from oldest to newest. That big drop early on should be called out as a real improvement, and when it goes back up, that was a real loss. Likewise towards the end of the graph, there was a real loss, followed by a real gain. All the other blips should NOT be called out.
Here's another one, where the history is mostly just noise.
I've asked this question in multiple places but have never gotten any real answers. I will be writing all of the analysis myself, and I'm willing to use any tool, do any research, and learn any statistical methods that will help. This can't be a unique problem, so how do people handle it (other than manually looking through results)?
Firstly, you can try to decrease the amount of noise: measure time in such a way that background processes do not affect your measurement (e.g. CPU time, as reported by the Unix time command, rather than wall-clock time).
You would like to see whether there is an overall trend in your performance measure that indicates a decrease in performance. If you look at it as a signal, you can apply a low-pass filter (which can be approximated simply by averaging the k previous samples), then use some simple threshold, as in the sketch below. This is quite simple, but I think it should work, because as @nograpes wrote, real drops in performance are often big.
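A minimal sketch of that filter-plus-threshold idea in Go (the window size k and the 10% threshold are assumptions to tune against your own noise level):

package main

import "fmt"

// regressed reports whether the mean of the most recent k samples exceeds
// the mean of the k samples before them by more than thresh (e.g. 0.10 for
// a 10% slowdown). The windowed means act as a crude low-pass filter that
// suppresses single-sample noise spikes.
func regressed(times []float64, k int, thresh float64) bool {
	if len(times) < 2*k {
		return false // not enough history yet
	}
	mean := func(xs []float64) float64 {
		sum := 0.0
		for _, x := range xs {
			sum += x
		}
		return sum / float64(len(xs))
	}
	recent := mean(times[len(times)-k:])
	before := mean(times[len(times)-2*k : len(times)-k])
	return recent > before*(1+thresh)
}

func main() {
	history := []float64{103, 101, 99, 102, 100, 98, 131, 128, 132, 130}
	fmt.Println(regressed(history, 3, 0.10)) // true: a sustained ~30% slowdown
}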
If that doesn't work, you can look at it as a trend-detection problem: basically, "is there a significant trend in the time series?" This is a machine learning/statistics problem: https://en.wikipedia.org/wiki/Trend_estimation . You could take the N previous samples, try to fit an ARMA model (http://www.nek.lu.se/nekkfr/d-kurs/Ch4NEWunivariate.pdf) and see whether the slope is positive beyond some threshold. However, I do not know much about this method, so it is just an idea :)
How can I analyze broken/partial QR codes? Normally a QR decoder will just tell you that the data cannot be read, which is not very useful. Even though the code is not readable, some information can presumably still be extracted:
Are the finder patterns found?
Is the timing pattern found?
What is the version?
What is the error level?
What is the mask?
Is the format intact?
What is the mode?
Is the stop pattern found after the correct length?
Is there any meaningful data?
How can I extract this information from broken/partial QR codes?
This is a question that comes up in many ways; some easier than others.
To answer your direct question: The tool you need: Your brain.
Software can help, but decoding partial or misprinted codes takes some work. It is like detective work: you take what you have, fill in what you know about the way the codes are created in the first place, and then make educated guesses for the win.
Here is a tour of the concept. The articles below answer most of the items on your bullet-point list.
This article explains the overall format in good detail:
Wounded QR Codes
For instance, the first image in the article breaks down the format information.
Here is a real-world example of the process of decoding a partial image:
Decoding a partial QR code
It begins with a challenge image, shows the order in which the bits are encoded, and then works through the detective process to produce the final, readable image.
Here is a different problem: you have a full image, but it won't scan properly, so you have to decode it by hand:
Decoding small QR codes by hand
It starts out with a tattoo that is in the wrong orientation and won't scan properly, then works through the decoding process, yielding the final result: Maci Clare Peltz.
Have fun detecting!
You can simply hack some open-source code like zxing to print out its progress on the command line during decoding, and in that way see how far it got. Just sprinkle in a few System.out.println() statements.
The problem is false positives. It will almost always find at least 3 regions that look like a QR code's finder patterns, since it always takes the 3 most likely candidates. These are usually phantoms, because most images are not actually QR codes, and the next step, finding valid version info, would then fail. (In a very unlikely case it would even find phantom version info.)
Some of the aspects you mention aren't necessarily detected by a library, since they don't have to be, like the timing pattern and the stop pattern (which isn't required for short data).
Aside from those caveats, it should be easy.
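If you'd rather not modify the library, the distinct error types zxing-style decoders return already tell you roughly how far decoding got: a "not found" error means no finder patterns were located, a "format" error means the format information could not be read, and a "checksum" error means error correction failed on otherwise-parsed data. A hedged sketch using gozxing, a Go port of zxing (the package paths and API are assumptions based on its documentation; the input filename is hypothetical):

package main

import (
	"fmt"
	"image"
	_ "image/png" // register the PNG decoder
	"log"
	"os"

	"github.com/makiuchi-d/gozxing"
	"github.com/makiuchi-d/gozxing/qrcode"
)

func main() {
	f, err := os.Open("broken-qr.png") // hypothetical input image
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	img, _, err := image.Decode(f)
	if err != nil {
		log.Fatal(err)
	}

	bmp, err := gozxing.NewBinaryBitmapFromImage(img)
	if err != nil {
		log.Fatal(err)
	}

	result, err := qrcode.NewQRCodeReader().Decode(bmp, nil)
	if err != nil {
		// The error text distinguishes the stage that failed:
		// NotFound (no finder patterns), Format (format info unreadable),
		// or Checksum (error correction could not repair the data).
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println("decoded:", result.GetText())
}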