I am encoding a QR code in MATLAB using the ZXing Java libraries. The encoded QR code is saved as a JPG image. Then its finder patterns, i.e. the three squares, are detected using centroid, area and bounding-box measurements, and one square is colored red, with a red background as well. Three such codes are made and added together to form a single colored code.
The problem: the region I am detecting is not detected reliably, which then causes problems in the decoding step.
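For reference, the finder-pattern detection step looks roughly like this (sketched in Python/OpenCV rather than my actual MATLAB code; the Otsu threshold, minimum area and squareness tolerances are placeholders):

import cv2

# load the saved code and binarize it (JPG compression adds noise,
# hence the Otsu threshold)
img = cv2.imread("qr.jpg", cv2.IMREAD_GRAYSCALE)
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(bw, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)

candidates = []
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    area = cv2.contourArea(c)
    if area < 100:                       # ignore speckle (placeholder size)
        continue
    if 0.8 <= w / h <= 1.25 and area / (w * h) > 0.5:   # roughly square, mostly filled
        candidates.append((area, (x + w // 2, y + h // 2), (x, y, w, h)))

# the three finder patterns should be the three largest square blobs
candidates.sort(key=lambda t: t[0], reverse=True)
for area, centroid, bbox in candidates[:3]:
    print("centroid:", centroid, "area:", area, "bounding box:", bbox)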
I have developed a framework in R to automatically measure vegetation-structure variables from whiteboard photos taken on grasslands for ecology-related studies. Until now we have preprocessed the images by hand, but now we need to automate the rotation and cropping of the images.
The idea is the following: put reference marks on the whiteboard and detect these markings in order to rotate and crop the original photographs. I need help detecting the reference markings. Once we know the positions (centroids) of the reference markings, we can calculate the coordinates/pixels at which to crop the image. In the end, we want to get a picture like this.
We can use some special colour for the markings, but they can be obstructed by the vegetation. The bottom of the whiteboard is always obstructed; the cropped part (without the reference markings) should be 25×100 cm.
Possibly edge detection could be a solution. I'm familiar only with R programming.
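To illustrate the kind of pipeline I have in mind, here is a sketch (in Python/OpenCV only because it is compact; I would need the equivalent in R, e.g. via the magick or imager packages; the HSV range for red marks and the size thresholds are placeholders):

import cv2
import numpy as np

img = cv2.imread("whiteboard.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# placeholder HSV range for saturated red marks (red wraps around hue 0)
mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
       cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))

# remove vegetation speckle, then take the centroids of the remaining blobs
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
marks = [tuple(centroids[i]) for i in range(1, n)
         if stats[i, cv2.CC_STAT_AREA] > 50]   # placeholder minimum blob size

# two mark centroids give the rotation angle; the 25 x 100 cm crop box
# then follows from the known mark layout on the whiteboard
print(marks)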
I'm currently working on a project that uses a JSON file with points & polygons.
All input data are in lat/lon format. I wish to draw these on a map (which should be able to pan). I'm able to draw these objects on a JavaFX Pane or Canvas.
The issue is that the line drawn between two coordinates is straight, while it should actually follow the stereographic projection.
I looked into ArcGIS and other GeoTools, but these tools all build upon tile maps, something I don't need for my project.
You will have to create what we call a LineDrawer. First you have to decide which line type you want to use; standard line types are great-circle, rhumb line and plain straight lines. Second you have to define the projection you want to use. So if you now want to draw a line between two points A and B, you have to split it up into small enough sections, compute intermediate points according to the formula for your chosen line type, and then project these points into your drawing pane.
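For example, a minimal sketch of the great-circle case (spherical linear interpolation between the two endpoints on the unit sphere; shown in Python for brevity, but it translates directly to Java for a JavaFX pane):

import math

def great_circle_points(lat1, lon1, lat2, lon2, n=64):
    # convert a lat/lon pair (degrees) to a unit vector
    def to_vec(lat, lon):
        la, lo = math.radians(lat), math.radians(lon)
        return (math.cos(la) * math.cos(lo),
                math.cos(la) * math.sin(lo),
                math.sin(la))

    ax, ay, az = to_vec(lat1, lon1)
    bx, by, bz = to_vec(lat2, lon2)
    # angular distance between A and B
    d = math.acos(max(-1.0, min(1.0, ax * bx + ay * by + az * bz)))

    points = []
    for i in range(n + 1):
        f = i / n
        if d == 0.0:
            x, y, z = ax, ay, az          # degenerate case: A == B
        else:
            p = math.sin((1 - f) * d) / math.sin(d)   # slerp weights
            q = math.sin(f * d) / math.sin(d)
            x, y, z = p * ax + q * bx, p * ay + q * by, p * az + q * bz
        points.append((math.degrees(math.asin(z)),
                       math.degrees(math.atan2(y, x))))
    return points

# each intermediate point is then run through your chosen projection
# before being drawn on the pane

Rhumb lines are even simpler: they are straight in Mercator coordinates, so you can interpolate linearly there instead.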
But that's basically what any GIS software can do for you, so you don't have to reinvent the wheel here.
I am currently working on some kind of OCR (Optical Character Recognition) system. I have already written a script to extract each character from the text and clean (most of) the irregularities out of it. I also know the font. The images I have now, for example, are:
M (http://i.imgur.com/oRfSOsJ.png (font) and http://i.imgur.com/UDEJZyV.png (scanned))
K (http://i.imgur.com/PluXtDz.png (font) and http://i.imgur.com/TRuDXSx.png (scanned))
C (http://i.imgur.com/wggsX6M.png (font) and http://i.imgur.com/GF9vClh.png (scanned))
For all of these images I already have a sort of binary matrix (1 for black, 0 for white). I was now wondering whether there is some kind of mathematical, projection-like formula to measure the similarity between these matrices. I do not want to rely on a library, because that was not the task given to me.
I know this question may seem a bit vague, and there are similar questions, but I'm looking for the method, not for a package, and so far I couldn't find any comments regarding the method. The reason this question is vague is that I really have no starting point. What I want to do is actually described here on Wikipedia:
Matrix matching involves comparing an image to a stored glyph on a pixel-by-pixel basis; it is also known as "pattern matching" or "pattern recognition".[9] This relies on the input glyph being correctly isolated from the rest of the image, and on the stored glyph being in a similar font and at the same scale. This technique works best with typewritten text and does not work well when new fonts are encountered. This is the technique the early physical photocell-based OCR implemented, rather directly. (http://en.wikipedia.org/wiki/Optical_character_recognition#Character_recognition)
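To make this concrete, I guess the most direct pixel-by-pixel measure would be a normalized match score between two equal-sized binary matrices (assuming both glyphs are already cropped and scaled to the same size); a minimal sketch of that idea:

def similarity(a, b):
    # fraction of matching pixels between two equal-sized 0/1 matrices
    rows, cols = len(a), len(a[0])
    assert rows == len(b) and cols == len(b[0])
    same = sum(1 for ra, rb in zip(a, b)
                 for pa, pb in zip(ra, rb) if pa == pb)
    return same / (rows * cols)   # 1.0 = identical

# the recognized character would be the font glyph with the highest score:
# best = max(font_glyphs, key=lambda g: similarity(scanned, g))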
If anyone could help me out on this one, I would appreciate it very much.
For recognition or classification, most OCRs use neural networks.
These must be properly configured for the desired task: number of layers, internal interconnection architecture, and so on. Another problem with neural networks is that they must be properly trained, which is quite hard to do well, because you need to know things like the proper training-dataset size (so it contains enough information without over-training). If you have no experience with neural networks, do not go this way if you have to implement it yourself!
There are also other ways to compare patterns:

vector approach
polygonize the image (edges or border)
compare polygon similarity (surface area, perimeter, shape, ...)

pixel approach
You can compare images based on:
histogram
DFT/DCT spectral analysis
size
number of occupied pixels per line
start position of the first occupied pixel in each line (from the left)
end position of the last occupied pixel in each line (from the right)
(these last 3 parameters can also be computed along the other axis)
points-of-interest list (points where there is some change, like an intensity bump, an edge, ...)
You create a feature list for each tested character, compare it to your font, and the closest match is your character. These feature lists can also be scaled to some fixed size (like 64×64) so that the recognition becomes invariant to scaling.
Here is a sample of the features I use for OCR.
In this case the features are scaled to fit in N×N, so each character has 6 arrays of N numbers:
int row_pixels[N]; // occupied pixels per column (1st image)
int lin_pixels[N]; // occupied pixels per line (2nd image)
int row_y0[N];     // first occupied y per column (3rd image, green)
int row_y1[N];     // last occupied y per column (3rd image, red)
int lin_x0[N];     // first occupied x per line (4th image, green)
int lin_x1[N];     // last occupied x per line (4th image, red)
Now: pre-compute all features for each character in your font and for each character read, then find the closest match from the font:
min distance between all feature vectors/arrays
not exceeding some threshold difference
This is partially invariant to rotation and skew, up to a point. I do OCR for filled characters, so for an outlined font it may need some tweaking.
[Notes]
For the comparison you can use a distance metric or a correlation coefficient.
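Here is a rough Python sketch of the per-line features above and the minimum-distance match (N, the 0/1 row-list image format and the empty-line sentinels are assumptions of the sketch):

def features(img, N):
    # img: N x N list of 0/1 rows, already scaled to the fixed size
    def first(seq):
        # index of the first set pixel; N if the whole line is empty
        return next((i for i, v in enumerate(seq) if v), N)

    lin_pixels = [sum(row) for row in img]                             # per-line counts
    row_pixels = [sum(img[y][x] for y in range(N)) for x in range(N)]  # per-column counts
    lin_x0 = [first(row) for row in img]                  # first occupied x per line
    lin_x1 = [N - 1 - first(row[::-1]) for row in img]    # last occupied x per line
    cols = [[img[y][x] for y in range(N)] for x in range(N)]
    row_y0 = [first(c) for c in cols]                     # first occupied y per column
    row_y1 = [N - 1 - first(c[::-1]) for c in cols]       # last occupied y per column
    return row_pixels + lin_pixels + row_y0 + row_y1 + lin_x0 + lin_x1

def distance(f, g):
    return sum(abs(a - b) for a, b in zip(f, g))   # L1 distance between feature vectors

# best = min(font_feats, key=lambda ch: distance(features(scanned, N), font_feats[ch]))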
I have an image from which I would like to extract the number, but in a dynamic way (I don't want to specify an ROI, because the image may vary), so I have to filter it. I tried to detect the horizontal line (to crop the image), but it failed. I would like to detect high-density zones in the binary image (the face and the top of the image).
PS: my problem isn't how to extract the numbers but how to specify the ROI,
and all the images have the same format.
Any help would be appreciated (even without code, just the broad lines).
Thanks.
The image:
I would start from detecting the frame of the whole document.
If you google "rectangle detection opencv", you will find lots of examples.
In the second stage I would apply inRange to filter the purple line and detect it with HoughLines.
This should be enough to calculate the ROI.
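A rough sketch of that second stage (the purple HSV range and the Hough parameters are assumptions you will have to tune against your scans):

import cv2
import numpy as np

img = cv2.imread("document.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# keep only purple-ish pixels (assumed range)
mask = cv2.inRange(hsv, (125, 50, 50), (155, 255, 255))

# look for a long, mostly horizontal line in the mask
lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=100,
                        minLineLength=img.shape[1] // 2, maxLineGap=10)
if lines is not None:
    x1, y1, x2, y2 = lines[0][0]
    roi = img[:min(y1, y2), :]   # e.g. everything above the purple line
    cv2.imwrite("roi.png", roi)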
These vector or raster files are classic files without geocoordinates; they are in a lat/long projection. I want to import them into QGIS, scale them up/down, and place them at their right location, so that they become reusable, geocoordinated SHP or raster layers.
Edit: I'm from the Wikipedia Graphic Lab > Map workshop; we want to work more with GIS. We literally have hundreds of maps to migrate to GIS technologies....
File:Chinese_plain_5c._BC-en.svg
File:Vignobles_basse_loire.svg
Partial solution: load the SVG into Inkscape, save it as a DXF file, then load that into QGIS. This should at least get most of the linework into QGIS.
However, it won't yet be properly georeferenced or styled, and different layers may end up in different places, because the SVG has scaling and translating operators on parts of the map data that QGIS or Inkscape ignores. You'll probably need to work one layer at a time. This probably isn't a problem, since you are presumably only interested in the data added to the map and not the base map (country outlines etc.), as you will probably want to overlay your data onto a standard base layer (Natural Earth, OpenStreetMap tiles).
The only way I can see to do the transformation at present is to work out the affine transformation parameters and use the QgsAffine plugin, but that does require you to work out the parameters beforehand by fitting known source coordinates to known target coordinates.
But to do hundreds? You might be better off writing some custom SVG parsing code for each one...
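For the parameter fitting itself, a least-squares solve over a few control-point pairs is enough; a rough numpy sketch (the control-point values are made up, and QgsAffine just wants the six resulting numbers):

import numpy as np

# control points: (x, y) in SVG/drawing units -> (X, Y) in map coordinates
src = np.array([(10.0, 20.0), (300.0, 25.0), (150.0, 400.0)])   # made-up values
dst = np.array([(-3.5, 47.2), (2.1, 47.1), (-0.6, 44.0)])       # made-up values

# solve [X Y] = [x y 1] @ M for the 3x2 affine matrix M, in least squares
A = np.hstack([src, np.ones((len(src), 1))])
M, *_ = np.linalg.lstsq(A, dst, rcond=None)

a, d = M[0]   # X = a*x + b*y + c
b, e = M[1]   # Y = d*x + e*y + f
c, f = M[2]
print(a, b, c, d, e, f)

With more than three point pairs the solve averages out digitising error, which helps when the SVG coordinates are only approximate.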
If you only want to display it at the correct place, scale and rotation, treat it as an SVG icon:
1. create a point layer and put a single point at the georeferenced centre of the SVG you will load.
2. edit the symbology and load the SVG as an icon
3. set the size units to map units
4. supply the appropriate dimensions
5. rotate as necessary
The redraw is very slow and painful, but if you use Project > Import/Export > Export Map as Image you can make a georeferenced raster.