I've encountered the following issues with the text detection feature of Google Cloud Vision:
1) I submit the same image, using the same Python code, to Google Cloud Vision from two different machines (the Windows-based development machine and the Linux-based "production" machine), but I get two different outputs. Same image, same code, same libraries, but the extracted text is different.
2) I get two differently detected locales for the two differently detected texts. My original text is Italian text mixed with digits. On the development machine, the detected locale is "zh" (Chinese). On the "production" machine, the detected locale is "fil". There isn't any "fil" code in https://cloud.google.com/translate/v2/using_rest#language-params so I don't know what it is (Filipino?). In any case, I get a better result on the development machine when the detected locale is "zh". So... same image, same code, but differently detected locale and differently detected text.
3) Therefore I try to force the "it" or "zh" locale using the ImageContext languageHints annotation https://cloud.google.com/vision/reference/rest/v1/images/annotate#AnnotateImageRequest and now there's the funny thing: if I set languageHints to ['it'] on the development machine, I get almost nothing as output from Google Cloud Vision. If I set it to ['ja'] (Japanese), Google Cloud Vision says that the text locale is "it" (!!) and I get some decent results (!!!). But if I set ['ja'] on the "production" machine, Google Cloud Vision says that the text locale is "oc" (?). So... same image, same code, but differently detected locale and differently detected text. Moreover, the detected locale and text don't follow what I set with languageHints, but when I change the languageHints parameter, the detected locale (and text) also changes in an unpredictable way.
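For reference, here is roughly how the hint is being passed (a minimal sketch using the current google-cloud-vision Python client; sample.jpg stands in for the real image):

from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("sample.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# language_hints maps to the languageHints field of ImageContext
response = client.text_detection(
    image=image,
    image_context={"language_hints": ["it"]},
)
texts = response.text_annotations
if texts:
    print(texts[0].locale)       # detected locale
    print(texts[0].description)  # full extracted text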
Do you have any hint? Thanks.
Some specific IFC files load correctly in web-ifc-viewer, but don't load in web-ifc-three.
When the file doesn't load, the browser tab freezes. The performance monitor shows 100% of the CPU is used.
After some time, the tab stops responding, and it takes about 10 seconds to close it.
Here is the IFC file
It works on viewer-demo
but doesn't work on helloworld.
In the console, I can see:
"web-ifc: 0.0.34 threading: 0"
"No basis found for brep!"
"unexpected style type"
Any suggestions would be appreciated.
IFC.js has two different boolean engines right now: FAST and SLOW. Boolean operations are among the most complex features in IFC geometry generation, and sometimes the SLOW procedure is unable to compute the geometry of certain meshes. This will eventually be solved in both procedures.
You can use the fast boolean procedure (which is somewhat less precise on certain meshes but should never crash) by changing the configuration of IFC.js like this:
ifcLoader.IfcManager.applyWebIfcConfig({ USE_FAST_BOOLS: true });
This is unrelated to the messages you see in the console. They are warnings (not errors) about elements that have not yet been implemented in web-ifc. They will also disappear as the library progresses, but you should still be able to see the models and work with them.
I am sending commands over a serial port from a Linux embedded device to some serial-enabled firmware. For easy debugging and simplicity we are using ASCII human-readable commands terminated by newlines. Is it appropriate to use canonical mode here, or is canonical mode usually reserved for interactive terminals? The examples I find online all use raw mode.
In particular, in canonical mode, how do I check, without blocking, whether an entire line is available for reading?
According to the Linux Serial Programming documentation:
This is the normal processing mode for terminals, but it can also be useful for communicating with other devices. Input is processed in units of lines, which means that a read will only return a full line of input. A line is by default terminated by a NL (ASCII LF), an end of file, or an end of line character. A CR (the DOS/Windows default end-of-line) will not terminate a line with the default settings.
Canonical input processing can also handle the erase, delete word, and reprint characters, translate CR to NL, etc.
First
Using canonical mode for serial communications is a good option, because the Linux kernel handles the line buffering of the received data for you, which makes reading newline-terminated text much easier.
Second
If you want to use canonical mode, make sure the device that sends the data uses the right end-of-line character; otherwise you cannot use the canonical feature.
Is it appropriate to use canonical mode here, or is canonical mode usually reserved for interactive terminals?
Yes, you can use canonical mode, but you will need to configure the termios interface for your situation.
The default termios configuration is for an interactive terminal, so features such as echoing the input should be disabled.
Since your device is unlikely to send the backspace and delete characters, such features can be ignored.
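As a minimal sketch of that configuration (Python's termios module wraps the same C interface; /dev/ttyS0 stands in for the real device):

import os
import termios

fd = os.open("/dev/ttyS0", os.O_RDWR | os.O_NOCTTY)
attrs = termios.tcgetattr(fd)
attrs[3] |= termios.ICANON                                  # line-buffered reads
attrs[3] &= ~(termios.ECHO | termios.ECHOE | termios.ISIG)  # no interactive features
termios.tcsetattr(fd, termios.TCSANOW, attrs)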
The examples I find online all use raw mode.
It seems there are some "experts" who are not aware that canonical mode for terminals exists.
See the comments to reading from serial port in c breaks up lines.
For an example of (blocking) canonical mode, see this answer (note that there's another "expert comment" telling the OP that he cannot read lines).
In particular, in canonical mode, how do I check without blocking if an entire line is available for reading.
You could use select().
The man page implies that a canonical read of a terminal device is supported:
The file descriptors listed in readfds will be watched to see if characters become available for reading (more precisely, to see if a read will not block ...)
When both fields of the timeval structure are zero, select() will not block and returns immediately.
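Continuing the sketch above, a zero timeout turns select() into a non-blocking "is a full line ready?" test, because in canonical mode the descriptor only becomes readable once a complete line is buffered:

import os
import select

readable, _, _ = select.select([fd], [], [], 0)  # timeout 0: poll, never block
if readable:
    line = os.read(fd, 4096)  # canonical mode: returns at most one full line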
In R, I like to use reverse search (ctrl+r) to redo infrequent but complex commands without a script. Frequently, I will do so many other commands in between that the command history discards the old command. How can I change the default length of the command history?
This is platform and console specific. From the help for ?savehistory:
There are several history mechanisms available for the different R
consoles, which work in similar but not identical ways...
...
The history mechanism is controlled by two environment variables:
R_HISTSIZE controls the number of lines that are saved (default 512),
and R_HISTFILE sets the filename used for the loading/saving of
history if requested at the beginning/end of a session (but not the
default for these functions). There is no limit on the number of lines
of history retained during a session, so setting R_HISTSIZE to a large
value has no penalty unless a large file is actually generated.
So, in theory, you can read and set R_HISTSIZE with:
Sys.getenv("R_HISTSIZE")
Sys.setenv(R_HISTSIZE = new_number)
But, in practice, this may or may not have any effect.
See also ?Sys.setenv and ?EnvVar
Take a look at the help page for history(). This is apparently set by the environment variable R_HISTSIZE, so you can set it for the session with Sys.setenv(R_HISTSIZE = XXX). I'm still digging to find where you change this default behavior for all R sessions, but presumably it will be related to ?Startup (e.g. a line in your .Renviron file) or your R profile.
?history
"There are several history mechanisms available for the different R
consoles, which work in similar but not identical ways. "
Furthermore, there may even be two history mechanisms on the same machine. I have .Rhistory files saved from the console, and the Mac R GUI has its own separate system. You can increase the number of GUI-managed history entries in the Preferences panel.
There is an incremental history package:
http://finzi.psych.upenn.edu/R/library/track/html/track.history.html
In ESS you can set comint-input-ring-size:
(add-hook 'inferior-ess-mode-hook
          (lambda ()
            (setq comint-input-ring-size 9999999)))
When I am using the terminal, I will often press, for example, Alt plus an arrow key, and instead of doing what I expect (usually moving the cursor) I get a "control character" like [[A; or something like that. What letters I get seems to be very context sensitive: it's different if there is a program running in the foreground vs. actually at a prompt, it's different during the boot sequence, and it seems to be different depending on which UNIX I'm using. I know what a few of them are for (^C, ^D, ^Z), but I have no idea what the rest are used for, where they came from, or why they haven't been gotten rid of yet.
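If you want to see exactly which bytes a given key sends, a small sketch like this (Python, on a POSIX terminal) prints them without letting the terminal interpret them:

import os
import sys
import termios
import tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)
tty.setraw(fd)                 # raw mode: bytes arrive uninterpreted
try:
    seq = os.read(fd, 16)      # press one key, e.g. Alt+Up
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)
print(repr(seq))               # an arrow key typically shows as b'\x1b[A'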
I'm building a secure payment portal.
We currently have two applications that will be using this. One is a web application, the other a desktop app. Both of these require users to log in and authenticate; the same credentials can be used for either application.
I want to build an automatic login mechanism that will fill in all the various login/order details and be able to call this from either app mentioned above. I've been thinking that the best way to do this is to pass this information, encrypted, through the URL, e.g. https://mysite.com/TakePayment.aspx?id=GT2jkjh3....
Since we don't want to integrate the payment processing too tightly into the desktop app (to reduce our PCI scope), we decided to have it open a central, secured payment page through a simple shell execute with the full URL, which causes the default browser to open that page.
Originally we were using AES for the encryption, but this is currently being re-examined, as we would prefer not to have to give out the key to the end user (AES is symmetric; symmetric encryption means both parties need the same key, so why bother encrypting at all when we're going to be distributing the app?). So I'm looking at switching over to public key encryption with the built-in RSA routines in .NET.
After coding up the RSA portion I noticed most examples on the net used 1024 bits for the key length. I went with this and now have our portal working with public key encryption; however, the URLs generated are much longer than when I was using AES, so I started researching the maximum limits for URLs. http://www.boutell.com/newfaq/misc/urllength.html says that IE is the limiting browser, at about 2048 characters in the path portion. My initial tests with RSA encryption show my URLs will be around 1400 characters long.
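For illustration, the same idea sketched in Python rather than .NET (the cryptography package, the token contents, and the URL are stand-ins; only the public key ships with the app, while the private key stays on the server):

import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=1024)
public_key = private_key.public_key()   # this half is distributed with the app

token = b"order=1234&amount=10.00"
ciphertext = public_key.encrypt(
    token,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA1()),
                 algorithm=hashes.SHA1(), label=None),
)
url = ("https://mysite.com/TakePayment.aspx?id="
       + base64.urlsafe_b64encode(ciphertext).decode())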
My questions boil down to this:
1) Is there a better way for passing information from a desktop app to a website that I'm not thinking of? I'd prefer it be just as easy to use from another web page as it is from the desktop, hence my current solution.
2) Are 1024-bit RSA keys necessary, or overkill for something like this? A shorter key would mean shorter encrypted text, right?
3) Are there any other unforeseen problems with URLs in the 1200-1400 character range? Proxies? Firewalls? Web-Accelerators?
Thanks
Update 12/11/2011:
Come to find out, the method we ended up going with here ended up biting us in the ass recently (or rather, we found out about it today; the problem was a very sporadic and difficult one to track down...).
The plain text token that we encrypted was originally rather small, only a hundred bytes or so. This is what resulted in my test URLs being approximately 1400 characters long. Through feature creep we've been required to add more data to the token, and the average URL length jumped to 1700-1800 characters.
Once the length of our plain text hits 173 characters and above, however, the URL length jumps again, this time to 2080+ or so, which now causes problems for IE. After some investigation into how RSA encryption works, this should have been totally expected, but it was an oversight on my part originally.
We're using 1024-bit RSA encryption with OAEP padding, which means the maximum data block that can be encrypted at once is 1024/8 - 2*20 - 2 = 86 bytes (128 bytes per block, minus 42 bytes of padding overhead). Every 86 bytes needs to be "chopped up" and encrypted separately, so at 86 * 2 = 172 bytes we're encrypting only two blocks; above that we're encrypting three, four, five, etc. Past 172, our cipher text grew so long that the URLs are now too long. I'm probably messing up the explanation a little here, but that's the general gist of it.
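A quick sketch of that size arithmetic (assuming SHA-1 OAEP padding and base64url encoding of the ciphertext into the URL):

import base64
import math

KEY_BYTES = 1024 // 8               # each encrypted block is 128 bytes
MAX_PLAIN = KEY_BYTES - 2 * 20 - 2  # 86 plaintext bytes per block (SHA-1 OAEP)

def token_chars(plain_len):
    blocks = math.ceil(plain_len / MAX_PLAIN)             # blocks to encrypt
    cipher = blocks * KEY_BYTES                           # raw ciphertext size
    return len(base64.urlsafe_b64encode(b"\0" * cipher))  # chars in the URL

for n in (100, 172, 173):
    print(n, "->", token_chars(n))  # 100 -> 344, 172 -> 344, 173 -> 512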
It seems we'll be looking at designing a better way for this to work, as it can be expected that they'll want "more features" added in the future, and thus our token will grow ever larger...
Assuming this is all logged in a database, can you not pass the data back and forth using SSL web services? Then, to go quickly from the desktop app to the web app, make an RPC call to the website to generate a random key, pass that to the user, and call a web page using that key. Make the key valid for, say, 10 seconds, meaning that should a key be captured and broken, it will have already become invalid.
I have little experience with this kind of thing so I'm expecting many holes to be poked in the idea.
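A minimal sketch of that short-lived token idea (an in-memory dict stands in for the real database, and the URL is hypothetical):

import secrets
import time

TOKENS = {}   # token -> (order_id, expiry timestamp)
TTL = 10      # seconds of validity

def issue_token(order_id):
    token = secrets.token_urlsafe(32)  # unguessable random key
    TOKENS[token] = (order_id, time.time() + TTL)
    return "https://mysite.com/TakePayment.aspx?id=" + token

def redeem_token(token):
    order_id, expires = TOKENS.pop(token, (None, 0.0))  # single use
    return order_id if time.time() < expires else None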