Analyze partial or corrupted QR codes [closed]

How can I analyze broken/partial QR codes? Normally a QR decoder will just tell you that the data cannot be read. This is not very useful. Even though the code is not readable, some information can, presumably, still be extracted:
Are the finder patterns found?
Is the timing pattern found?
What is the version?
What is the error level?
What is the mask?
Is the format intact?
What is the mode?
Is the stop pattern found after the correct length?
Is there any meaningful data?
How can I extract this information from broken/partial QR codes?

This is a question that comes up in many ways; some easier than others.
To answer your direct question: The tool you need: Your brain.
Software can help, but decoding partial or misprinted codes takes some work. It is like detective work: you take what you have, fill in what you know about the way the codes are created in the first place, then make educated guesses for the win.
Here is a tour of the concept. The articles below answer most of the items on your bullet-point list.
This article explains the overall format in good detail:
Wounded QR Codes
Its first image, for instance, diagrams the format information.
Here is a real-world example of the process of decoding a partial image:
Decoding a partial QR code
It begins with the challenge image, shows the order in which the data bits are encoded, and then walks through the detective work needed to produce the final, readable image.
Here is a different problem. You have a full image but it won't scan properly so you have to decode it by hand:
Decoding small QR codes by hand
It starts out with a tattoo that is in the wrong orientation and also won't scan properly, then works through the decoding process by hand, yielding the final result: Maci Clare Peltz.
Have fun detecting!

You can simply hack some open-source code like zxing to print out its progress on the command line during decoding, and in that way see how far it got. Just sprinkle in a few System.out.println() statements.
The problem is false positives. It will almost always find at least three regions that look like a QR code's finder patterns, since it always takes the three most likely candidates; when you're not actually looking at a QR code, they are phantoms. The next step, finding valid version info, would then fail. (In a very unlikely case it would even find phantom version info.)
Some of the aspects you mention aren't necessarily detected by a library, since they don't have to be, like the timing pattern and the stop pattern (which isn't required for short data).
Aside from those caveats, it should be easy.
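In fact, you may not even need to patch zxing: its detector and decoder stages are public classes, so a rough sketch like the one below can report how far a damaged image gets. It assumes zxing's core and javase modules are on the classpath; the image path is hypothetical.

    import java.awt.image.BufferedImage;
    import java.io.File;
    import javax.imageio.ImageIO;

    import com.google.zxing.BinaryBitmap;
    import com.google.zxing.ChecksumException;
    import com.google.zxing.FormatException;
    import com.google.zxing.NotFoundException;
    import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
    import com.google.zxing.common.BitMatrix;
    import com.google.zxing.common.DetectorResult;
    import com.google.zxing.common.HybridBinarizer;
    import com.google.zxing.qrcode.decoder.Decoder;
    import com.google.zxing.qrcode.detector.Detector;

    public class QrProgress {
        public static void main(String[] args) throws Exception {
            BufferedImage img = ImageIO.read(new File("broken-qr.png"));
            BinaryBitmap bitmap = new BinaryBitmap(
                    new HybridBinarizer(new BufferedImageLuminanceSource(img)));
            BitMatrix matrix;
            try {
                matrix = bitmap.getBlackMatrix();
            } catch (NotFoundException e) {
                System.out.println("Stage 0 failed: could not binarize the image");
                return;
            }
            DetectorResult detected;
            try {
                // Locates the finder patterns, reads version info, samples the grid.
                detected = new Detector(matrix).detect();
                System.out.println("Detector OK: sampled grid is "
                        + detected.getBits().getWidth() + " modules wide");
            } catch (NotFoundException | FormatException e) {
                System.out.println("Stage 1 failed: finder patterns/version not found");
                return;
            }
            try {
                System.out.println("Decoded: "
                        + new Decoder().decode(detected.getBits()).getText());
            } catch (ChecksumException | FormatException e) {
                System.out.println("Stage 2 failed: format/data unreadable, "
                        + "but detection succeeded");
            }
        }
    }

Running it on progressively damaged images shows which stage gives out first, which answers several of the bullet points in the question.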

Is it possible to compile R scripts into a binary?

I've done some research online but I haven't been able to come up with any answer. I know this has been asked at least thrice, as I've viewed those posts, linked here:
First Question
Second Question
Third Question
However, it's been 5, 7, and 9 years since those questions were asked, and technology is obviously evolving rapidly :) I don't know much about R and haven't worked with it in a long time, so I ask those of you who know better and have more experience whether you know of anything that would be useful to me.
If there's nothing that exists now, how hard would it be to create? The reason I ask is that the company I work for would like to obfuscate the proprietary code before it goes out. I would have the full 40 hours a week to work on creating it, and so time and/or difficulty isn't a major concern.
Thanks!
Found this: there is a byte-code compiler for R, based on the paper linked below. I'm not sure about the security, but it is definitely a deterrent and would take (I think) some fairly concentrated effort to crack. There is a method in library(compiler), which comes standard with R, that allows you to compile an R script to byte code. In the same library, you can load in the compiled files and use them as you'd like.
A Byte Code Compiler for R
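A minimal sketch of that workflow (the file names here are hypothetical):

    library(compiler)

    ## Compile a plain R script into a byte-code file (.Rc) ...
    cmpfile("proprietary.R", "proprietary.Rc")

    ## ... and later load and evaluate the compiled file,
    ## without shipping the original source:
    loadcmp("proprietary.Rc")

As noted above, byte code is a deterrent rather than real protection, so treat this as obfuscation only.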

Create a "Did You Mean...?" type of search in ASP.NET with VB.NET and SQL? [closed]

So, I have a website with a search bar.
I have only figured out how to get results when they match, at least in part, using LIKE '%searchterm%', and that works.
Obviously, this does not help me if the user misspells something.
We have discovered through heat mapping that we are losing people over this.
How can I implement a "smarter" search feature?
Thanks,
Yoni
The "real" solution that you are looking for might be more complicated than you think. You could use simpler solution that will work fine like using the DIFFERENCE function.
I wanted to leave a comment but am not able to, so I have to leave it here, and it might not be the ideal answer Yoni is expecting. I can think of two ways of doing it:
1. Use the ASP.NET autocomplete functionality. It queries the database and feeds suggested search results back dynamically while users are typing, which helps prevent mistyping in the first place. Many search engines, like Google and Yahoo, use it. In ASP.NET it is very easy to wire up (see the sketch after this list):
ASP.NET Auto Complete
2. Add a class to re-process the search terms before querying the database, so you won't get zero hits if users misspell or mistype something. This is very broad, and it varies a lot depending on your business model.
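As a rough sketch of option 1, using the AjaxControlToolkit's AutoCompleteExtender (the control, method, table, and connection-string names here are all hypothetical):

    ' Markup (.aspx):
    ' <asp:TextBox ID="SearchBox" runat="server" />
    ' <ajaxToolkit:AutoCompleteExtender runat="server"
    '     TargetControlID="SearchBox"
    '     ServiceMethod="GetSuggestions"
    '     MinimumPrefixLength="2" />

    ' Code-behind (VB.NET):
    <System.Web.Services.WebMethod(),
     System.Web.Script.Services.ScriptMethod()>
    Public Shared Function GetSuggestions(prefixText As String,
                                          count As Integer) As String()
        Dim results As New System.Collections.Generic.List(Of String)
        Using conn As New System.Data.SqlClient.SqlConnection(myConnectionString),
              cmd As New System.Data.SqlClient.SqlCommand(
                  "SELECT TOP (@n) Term FROM SearchTerms WHERE Term LIKE @p + '%'",
                  conn)
            cmd.Parameters.AddWithValue("@n", count)
            cmd.Parameters.AddWithValue("@p", prefixText)
            conn.Open()
            Using reader = cmd.ExecuteReader()
                While reader.Read()
                    results.Add(reader.GetString(0))
                End While
            End Using
        End Using
        Return results.ToArray()
    End Function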
Hope it helps. :)
This is quite a complex matter, affecting both coding complexity and query performance.
Of course, there may be many approaches that achieve the results you ask for.
Personally, I would start by working with aliases: for each word that a user may search for, I would create a set of aliases, related either to the word's meaning or to common mistypings of the word itself, e.g.:
Word: sheet
Aliases: paper, shet, shee ...
So, each single word must be indexed (and this could be a cumbersome aspect to deal with, depending on your content), and each word may have many aliases.
Then apply a sequential logic like the following:
1 - standard search, as the one you already did
-> if nothing matches
2 - alias search
-> if nothing matches
3 - start "playing" with wildcard characters (this could definitely kill your db however)
I understand this is a quite generic answer, but I don't think there is an absolutely good approach - performance-wise - to your question.
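As a very rough sketch of that sequence (the table and column names are hypothetical, and each step would run only if the previous one returned no rows):

    -- Step 1: standard search
    SELECT Id, Title FROM Articles WHERE Title LIKE '%' + @term + '%';

    -- Step 2: alias search, via a Word -> Alias lookup table
    SELECT a.Id, a.Title
    FROM Articles a
    JOIN SearchAliases s ON a.Title LIKE '%' + s.Word + '%'
    WHERE s.Alias = @term;

    -- Step 3: wildcards between characters, built in application code,
    -- e.g. 'shet' -> '%s%h%e%t%' (this is the step that can kill your db)
    SELECT Id, Title FROM Articles WHERE Title LIKE '%s%h%e%t%';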

Common Lisp package for parsing invalid HTML? [closed]

As a learning exercise, I'm writing a web scraper in Common Lisp. The (rough) plan is:
Use Quicklisp to manage dependencies
Use Drakma to load the pages
Parse the pages with xmls
I've just run into a sticking point: the website I'm scraping doesn't always produce valid XHTML. This means that step 3 (parse the pages with xmls) doesn't work. And I'm as loath to use regular expressions as this guy :-)
So, can anyone recommend a Common Lisp package for parsing invalid XHTML? I'm imagining something similar to the HTML Agility Pack for .NET ...
The "closure-html" project (available in Quicklisp) will recover from bogus HTML and produce something with which you can work. I use closure-html together with CXML to process arbitrary web pages, and it works nicely. http://common-lisp.net/project/closure/closure-html/
For future visitors: today we have Plump: https://shinmera.github.io/plump
Plump is a parser for HTML/XML-like documents, focusing on being lenient towards invalid markup. It can handle things like invalid attributes, bad closing-tag order, unencoded entities, nonexistent tag types, self-closing tags, and so on. It parses documents to a class representation and offers a small set of DOM functions to manipulate it, though you are free to change it to parse to your own classes.
And then we have other libraries to query the document, like lquery (jQuery-like) or CLSS (simple CSS selectors), by the same author.
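For example, a minimal sketch with Plump and CLSS (the markup is deliberately invalid; both systems load via Quicklisp):

    (ql:quickload '(:plump :clss))

    ;; Plump happily parses the unclosed tags; CLSS then selects the anchors.
    (let* ((doc (plump:parse "<div><p>messy <a href=index.html>a link</div>"))
           (links (clss:select "a" doc)))
      (loop for a across links
            do (format t "~a -> ~a~%"
                       (plump:text a)
                       (plump:attribute a "href"))))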
We also now have a little tutorial on the Common Lisp Cookbook: https://lispcookbook.github.io/cl-cookbook/web-scraping.html
See also Common Lisp wiki: http://www.cliki.net/Web
Duncan, so far I've been successful using Clozure Common Lisp under both Ubuntu Linux and Windows (7 & XP), so if you're looking for an implementation that will work anywhere you might try this one.

Why Don't Duplicate QR Codes Look The Same? [closed]

My understanding is that a QR code contains the data being read, and that it does not require an internet connection to interpret the code. If this is the case, why do I get a different QR code every time I recreate one with the same data?
I see definite differences if I use two different generators to create the same code. For instance, creating a URL link to http://www.yahoo.com creates two different QRs on these sites:
http://qrcode.kaywa.com/
http://zxing.appspot.com/generator/
Bear in mind that QR codes may use four different levels of error correction, labeled L, M, Q, and H respectively. Also, there is a process called masking, whose intention is to increase the robustness of the reading process by distributing the black and white pixels over the image. Several masking patterns are available, each of which can produce a valid QR code, but with different results. Read the specification for more info on those.
That being said, given one generator with the same settings, the output should always be the same, which is what your original question was about. Comparing two different generators, however, may well result in two different images, due to the effects mentioned above.
Spec link, randomly picked off of Google (I'm mentioning this because ISO is selling the QR specification as a standard document):
http://raidenii.net/files/datasheets/misc/qr_code.pdf
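To see the error-correction effect concretely, here is a minimal sketch using zxing's Java encoder (the library behind the second generator linked in the question); it assumes zxing's core module is on the classpath. Higher levels can force a larger symbol version, so even the module count may change:

    import java.util.EnumMap;
    import java.util.Map;

    import com.google.zxing.BarcodeFormat;
    import com.google.zxing.EncodeHintType;
    import com.google.zxing.common.BitMatrix;
    import com.google.zxing.qrcode.QRCodeWriter;
    import com.google.zxing.qrcode.decoder.ErrorCorrectionLevel;

    public class EcLevelDemo {
        public static void main(String[] args) throws Exception {
            QRCodeWriter writer = new QRCodeWriter();
            for (ErrorCorrectionLevel level : ErrorCorrectionLevel.values()) {
                Map<EncodeHintType, Object> hints = new EnumMap<>(EncodeHintType.class);
                hints.put(EncodeHintType.ERROR_CORRECTION, level);
                // Requesting 0x0 lets zxing pick the smallest output size.
                BitMatrix m = writer.encode("http://www.yahoo.com",
                        BarcodeFormat.QR_CODE, 0, 0, hints);
                System.out.println(level + ": " + m.getWidth() + "x"
                        + m.getHeight() + " modules (incl. quiet zone)");
            }
        }
    }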
The two sites might use two different versions of the QR code standard.
This picture shows that certain areas of the code hold information about the version and format used, so two QR codes might differ in those areas. I really don't know how QR codes work, but I assume that a different version or format would also mean that the rest of the data is ordered or encoded differently.
http://en.wikipedia.org/wiki/File:QR_Code_Structure_Example.svg
They are the same... the Google and Nokia generators produce identical output. Kaywa's looks different to the eye but contains the same info. Anyway, a QR code is not different on every generation.

How can I visualize Fortran (90 or later) source code, e.g. using Graphviz? [closed]

I've been thrown into a large Fortran project with a large number of source files.
I need to contribute to this project and it would seem prudent that I first understand the source.
As a first step, I'd like to visualize the interdependences between the various source files, i.e. which source files need which modules. As far as I can tell, automated methods exist for other languages and result in a graph that can be built using Graphviz.
But is anyone aware of software out there that can do this for Fortran 90 code?
[Searching the interwebs for Fortran help is a real pain as you end up searching the inter-cobwebs thanks to the painfully ubiquitous FORTRAN 77.]
I would recommend doxygen, which automatically generates documentation from source code (and is free). Usually you add some markup to comments describing your functions and variables. However, you can just run doxygen on undocumented source files, provided you set EXTRACT_ALL to YES in the configuration file, and have it create relationship diagrams for all your functions (i.e. this function calls these functions and is called by these other functions).
You need GraphViz installed, and the HAVE_DOT option set to YES in the configuration file, to get the diagrams generated.
See the doxygen documentation on graphs and diagrams for more information, and this example class documentation for an example of the output generated.
Edit: Of course for Fortran you should set the OPTIMIZE_FOR_FORTRAN option to YES in the configuration file.
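Putting those options together, the relevant lines of the configuration file would look roughly like this (generate a default Doxyfile with doxygen -g first; the project name is hypothetical, and CALL_GRAPH/CALLER_GRAPH are the settings that enable the per-procedure diagrams):

    PROJECT_NAME         = "MyFortranProject"
    # Document everything, even uncommented code:
    EXTRACT_ALL          = YES
    # Parse the sources as Fortran:
    OPTIMIZE_FOR_FORTRAN = YES
    # Use GraphViz's dot tool to draw the diagrams:
    HAVE_DOT             = YES
    # Per-procedure "calls" and "called by" graphs:
    CALL_GRAPH           = YES
    CALLER_GRAPH         = YES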
If you have money then Understand for Fortran is worth looking at. If you don't have money but intend to work quickly then you might get by with a trial download of the software.
For a static call graph, I've never found a free tool as useful as Understand; it's hard to find any free tools let alone a useful one. I'd write one myself but the market would be tiny :-(
For a dynamic call graph investigate your compiler options. I use the Intel Fortran Compiler which can generate a mound of useful information about an executing program. The TotalView debugger can also visualise the call graph of an executing program. You should also look at gprof2dot which makes a DOT file out of a GPROF call 'graph'. This is free and OK.
And I should also add, though it's not something I've ever used, that Callgrind may be of use.
You can use callgrind from within Valgrind:
valgrind --tool=callgrind [your program]
This will produce a callgrind.out.[pid] file. This works best if you compile your program without optimisations, and with debug flags.
You then have a couple of options for viewing the data:
Convert the callgrind output to a .dot file with gprof2dot, then view it with xdot or render it to a static image with GraphViz (example commands below).
View it directly with KCachegrind (which includes source analysis and call graphs).
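For the first option, the pipeline boils down to something like this (the program name and PID are hypothetical):

    valgrind --tool=callgrind ./my_fortran_program
    gprof2dot --format=callgrind callgrind.out.12345 > callgraph.dot
    xdot callgraph.dot                        # interactive viewer
    dot -Tpng callgraph.dot -o callgraph.png  # static image via GraphViz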
