Azure Cognitive Services RecognizePrintedText vs. RecognizeText

I've started working with the Azure Cognitive Services Computer Vision APIs. I'm not sure what the difference is between RecognizePrintedText vs. RecognizeText. It seems that RecognizeText returns printed text even when I send in the argument 'Handwritten'. The output is similar between the two endpoints, so I'm not sure what the different use cases are. Any ideas?
Cheers,

Are you saying that mode=Handwritten/Printed is not working? If so, the following post talks about the same issue, but there is no answer to it so far.
How to get only handwritten text as results from Microsoft handwriting recognition API?

I think the difference is that the printed mode is optimized for printed text and the handwritten mode is optimized for handwritten text.
With that said, it doesn't mean that the handwritten mode will only return handwritten text, but it will likely pick up more of the handwritten text than the printed mode does (in general; of course there will be cases where this does not hold).
I have been using both endpoints on scanned printed text to draw a comparison, and both retrieve the text, although with different levels of quality.
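For reference, here is a minimal sketch of how I call the two endpoints against the v2.0 REST API from Python (region, key and image URL are placeholders, and your API version may differ). As far as I can tell, RecognizePrintedText corresponds to the synchronous /ocr endpoint, while RecognizeText is asynchronous: you POST the image with mode=Printed or mode=Handwritten and then poll the Operation-Location header it returns.

import time
import requests

# Placeholders - use your own region, key and image URL.
REGION = "westus"
KEY = "YOUR-SUBSCRIPTION-KEY"
BASE = f"https://{REGION}.api.cognitive.microsoft.com/vision/v2.0"
HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
IMAGE = {"url": "https://example.com/sample.png"}

# RecognizePrintedText: synchronous OCR call, returns regions/lines/words directly.
ocr = requests.post(f"{BASE}/ocr", headers=HEADERS, json=IMAGE,
                    params={"language": "unk", "detectOrientation": "true"})
print(ocr.json())

# RecognizeText: asynchronous; mode is "Printed" or "Handwritten".
job = requests.post(f"{BASE}/recognizeText", headers=HEADERS, json=IMAGE,
                    params={"mode": "Handwritten"})
operation_url = job.headers["Operation-Location"]

# Poll until the operation finishes, then read recognitionResult.lines.
while True:
    result = requests.get(operation_url, headers=HEADERS).json()
    if result.get("status") in ("Succeeded", "Failed"):
        break
    time.sleep(1)
print(result)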

Related

Is there a way in Windows 10 to convert a hexadecimal code to its symbol regardless of the program?

I've read many pages that point out that many office applications allow for this by typing the code followed by Alt + X, but frequently, I want to insert a symbol when I'm not in one of those applications. Is there a universal way to achieve this?
The character map is useless, unless you have time to manually search through all the characters available.
I posted the question at Super User, and basically, the response I got there was to use Alt codes for the symbols. However, I discovered that, on the whole, these only work for the first 256 Alt codes. So basically, the answer to my question is "No, there's not a good way."

Google Cloud Vision API - OCR does not return symbols entries

We are working with scientific PDF documents created in Asian languages like Japanese and Chinese. We are using the DOCUMENT_TEXT_DETECTION feature type of the Cloud Vision API to get text from these documents, as suggested in the documentation. We need to highlight blocks, words and characters (symbols) in our web application and let the user further process the highlighted areas on a PDF preview. We cannot always display highlights for symbols because the boundingBox property is missing from the response for some symbols.
Based on the API documentation this property should always be present in the response; there is no mention of it being optional.
The questions are:
When is boundingBox available for symbols?
What can we do to enforce this property in the response?
Which factors determine whether this property appears in the response?
I think your issue is related to a known issue about not receiving bounding boxes for symbols.
This issue has already been reported to Google on the Issue Tracker and can be found here.
You can track the status of the issue there and provide your own example, which can help it get analyzed and fixed faster.
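If it helps to narrow this down, here is roughly how you can walk the DOCUMENT_TEXT_DETECTION response down to the symbol level with the Python client and check whether a bounding box came back (the file name and language hint are placeholders, and class names may differ slightly between client library versions).

from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("page.png", "rb") as f:  # placeholder file name
    image = vision.Image(content=f.read())

# Language hints for Japanese/Chinese documents.
response = client.document_text_detection(
    image=image, image_context={"language_hints": ["ja"]})

# Walk pages -> blocks -> paragraphs -> words -> symbols.
for page in response.full_text_annotation.pages:
    for block in page.blocks:
        for paragraph in block.paragraphs:
            for word in paragraph.words:
                for symbol in word.symbols:
                    if not symbol.bounding_box.vertices:
                        # The case described above: a symbol with no usable boundingBox.
                        print("missing boundingBox for symbol:", symbol.text)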

speed of different ways of a full-text search in XQuery (BaseX)

I am using BaseX 9.3.1 to access a dictionary of about 45,000 entries with about 1 million examples etc., having a full-text index as well as indexes for all text nodes and attributes. So far, so bad. Accessing the nodes for grammar or the meaning is quite fast. Accessing the examples, however, is very slow - but only in some queries, not in all. I tried to find the reason for the following in the BaseX documentation and various XQuery pages, but simply did not succeed. Maybe I just don't see the forest for the trees. Here's the issue:
ft:search("myDatabase", "mySearchString")
is very fast, responding in less than a second, whereas
ft:contains(db:open("myDatabase")/path/to/target/node, "mySearchString")
as well as
db:open("myDatabase")/path/to/target/node contains text "mySearchString"
and
db:open("myDatabase")/path/to/target/node[contains(./text(), "mySearchString")]
are all so slow (>30 s) and heavy that they sometimes even break the server.
I tried to find out the differences by changing options for stemming or the collation, but it did not make a noticeable difference regarding speed. ft:contains() is - according to the docs - an extended version of contains text from standard XQuery. ft:search() seems to be a BaseX function for full-text search (see the BaseX docs).
ft:search() works quite well with some additional code to get the desired content, but that's actually more like a workaround. I need to understand why directly accessing the path is so slow even though there is an index for it as well. Using text() instead of any variant of contains or matches is faster too, but not an option for the application, since the search must match "parts of" and not "exactly" the text content.
Any suggestion/idea/help is appreciated!
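In case it is relevant: this is roughly how I currently run the fast ft:search() variant from Python over BaseX's REST interface and then climb up to the surrounding entry element (server URL, credentials and the entry element name are specific to my setup and only illustrative).

import requests

# Assumes a local basexhttp server with default settings; adjust as needed.
BASEX_REST = "http://localhost:8984/rest/myDatabase"
AUTH = ("admin", "admin")  # placeholder credentials

# Fast variant: let the full-text index do the work, then navigate to the
# surrounding element ("entry" is just an illustrative element name).
query = """
for $hit in ft:search("myDatabase", "mySearchString")
return $hit/ancestor::entry
"""

response = requests.get(BASEX_REST, params={"query": query}, auth=AUTH)
print(response.text)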

Google Translate API Pricing and Language auto-detect Efficiency

I have the following three questions.
I want to use Google's API to translate text. I know that Google charges separately for translation and detection. Google Translate also supports two ways to translate:
i) By specifying both source and target, as in
https://www.googleapis.com/language/translate/v2?key=INSERT-YOUR-KEY&source=en&target=de&q=Hello%20world&q=My%20name%20is%20Jeff
ii) By specifying just the target, where the source is auto-detected,
like this https://www.googleapis.com/language/translate/v2?key=INSERT-YOUR-KEY&target=de&q=Hello%20world
My question is, if I call the API as in the second example, will I be charged for both detection and translation, or just translation?
Is it more efficient to specify both source and target than just the target, or are there any downsides to using the second way above?
How many words should be sent to Google Translate API to detect a language reliably?
Thanks
I pretty much translate using the second approach most of the time (not telling Google the source language), and they only charge for the translation, not for the detection.
However, you must be aware that, in case your source text is in the same language as your target language, Google will attempt to translate it anyway, and sometimes that leads to confused results, or at least a translation which was not necessary, since you already had the text in the desired language.
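A minimal sketch of the two call styles with Python's requests (the key is a placeholder; double-check the response layout against the current documentation):

import requests

URL = "https://www.googleapis.com/language/translate/v2"
KEY = "INSERT-YOUR-KEY"  # placeholder

# i) Source and target both given - no detection involved.
r1 = requests.get(URL, params={"key": KEY, "source": "en", "target": "de",
                               "q": "Hello world"})
print(r1.json()["data"]["translations"][0]["translatedText"])

# ii) Target only - the source language is auto-detected and reported back.
r2 = requests.get(URL, params={"key": KEY, "target": "de", "q": "Hello world"})
t = r2.json()["data"]["translations"][0]
print(t["translatedText"], t.get("detectedSourceLanguage"))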

LaTeX equivalent to Google Chart API

I'm currently looking at different solutions for getting two-dimensional mathematical formulas into web pages. I think that the Wikipedia solution (generating PNG images from LaTeX source code) is good enough until we get support for MathML in web browsers.
I suddenly realized that it might be possible to create a Google Charts API equivalent for math formulas. Has this already been done? Is it even possible due to the strange characters involved in LaTeX code?
I would like to hit a URL like latex2png.org/api/?eq="E = mc^2" and get back a rendered image of the formula as the response.
edit:
Thanks for the answers so far. However, I am already aware of several tools to generate PNG images from LaTeX source code (both online and from my command line), but what I was looking for was a simple way to get the image via an HTTP GET request. Perhaps such a service does not exist.
Update
As @hughes (and others) pointed out, the previous Google Chart API has been deprecated.
The example I wrote still works as of Sept 2015, but a new one should be used now (see the documentation).
Old answer
Google Chart can do it (Documentation):
http://chart.apis.google.com/chart?cht=tx&chl=%5CLaTeX
I'm using this with Google Docs, because it doesn't support math yet.
chart.apis.google with background color changed
https://chart.apis.google.com/chart?cht=tx&chf=bg,s,FFFF00&chl=%0D%0A4x_0%5CDelta%28x%29%2B3%5CDelta%28x%29%2B2%5CDelta%28x%5E2%29%3E0%0D%0A
or chart.apis.google with background color transparent and resized
For better readability, here is the URL with the formula decoded:
https://chart.apis.google.com/chart?cht=tx&chs=428x35&chf=bg,s,FFFFFF00&chl=
4x_0\Delta(x)+3\Delta(x)+2\Delta(x^2)>0
The data structure looks like this:
{
  "cht": "tx",
  "chs": "428x35",
  "chf": "bg,s,FFFFFF00",
  "chl": "4x_0\Delta(x)+3\Delta(x)+2\Delta(x^2)>0"
}
https://chart.apis.google.com/chart?cht=tx&chs=428x35&chf=bg,s,FFFFFF00&chl=%0D%0A4x_0%5CDelta%28x%29%2B3%5CDelta%28x%29%2B2%5CDelta%28x%5E2%29%3E0%0D%0A
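Since hand-encoding the chl parameter is error-prone, the URL can also be built programmatically; here is a small sketch in Python (keeping in mind that the Chart API itself is deprecated, as noted above):

from urllib.parse import urlencode

BASE = "https://chart.apis.google.com/chart"
params = {
    "cht": "tx",                  # render a TeX formula
    "chs": "428x35",              # output size in pixels
    "chf": "bg,s,FFFFFF00",       # transparent background
    "chl": r"4x_0\Delta(x)+3\Delta(x)+2\Delta(x^2)>0",
}
# urlencode percent-escapes the backslashes, '+' and '>' for us.
print(BASE + "?" + urlencode(params))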
You could try the Online image generator for mathematical formulas for a start.
mathurl is a mathematical version of TinyURL.com. It allows you to reference LaTeXed mathematical expressions using a short url. For example, http://mathurl.com/?5v4pjw will show [LaTeX output Image] which you can then edit. More details on mathurl’s help page
I just ran across MathJax on Ajaxian [via Wayback Machine]:
MathJax seems to have a chance at being a practical solution that offers a high quality display of LaTeX and MathML math notation in HTML pages.
The output is remarkably beautiful, and it's all pure HTML and CSS, which makes it scalable and selectable. Performance is currently a bit sluggish, but this is recognized.
As everyone has said, there are many services that do this already. Here is another easy one that I've used a number of times (and you can install it locally on your server if necessary):
http://www.codecogs.com/components/equationeditor/equationeditor.php
I'd take a good look at how the MediaWiki LaTeX support does it and borrow from there.
Please check out this site for a way to create TeX documents without any software installed. You can then capture the resulting image with any screen-capture method and embed it into any website.
Go to http://sharelatex.com
The software is free to use, but you need to register to create documents.
