The October 2021/January 2022 Vision API release introduces significant changes compared to the previous version that ran up to September 2021.
The October 2021/January 2022 version's returned JSON contains less information than the previous September 2021 version. The following JSON file statistics (file size in KB, number of JSON lines) show the differences. The same PNG image file was run against both versions:
September 2021 (more data)
File 1: 1,666 KB / 114,856 lines
File 2: 675 KB / 45,687 lines
Using the Features parameter with and without "model": "builtin/legacy" produces the same result.
October 2021/January 2022 (less data)
File 1: 1,435 KB / 102,275 lines
File 2: 584 KB / 41,778 lines
Release notes link: https://cloud.google.com/vision/docs/release-notes
Google Vision is a core function of my application. The upgrade has "broken" my application.
# Python application calls Vision API
Python version: 3.8.5
google-api-core==1.22.4
google-api-python-client==1.12.3
google-auth==1.28.1
google-auth-httplib2==0.0.4
google-cloud-core==1.6.0
google-cloud-documentai==0.4.0
google-cloud-storage==1.37.1
google-cloud-vision==2.0.0
google-crc32c==1.1.2
google-resumable-media==1.2.0
googleapis-common-protos==1.52.0
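For reference, a minimal sketch of how the application's Vision call looks with the google-cloud-vision 2.0.0 client listed above (the file name, feature type and output handling are illustrative assumptions, not the exact production code):

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# "page.png" is an illustrative file name
with open("page.png", "rb") as f:
    image = vision.Image(content=f.read())

# The Features parameter mentioned above; adding model="builtin/legacy"
# made no difference in the results
feature = vision.Feature(
    type_=vision.Feature.Type.DOCUMENT_TEXT_DETECTION,
    model="builtin/legacy",
)

request = vision.AnnotateImageRequest(image=image, features=[feature])
response = client.batch_annotate_images(requests=[request]).responses[0]

# Serialize the response to JSON for the file size / line count comparison above
with open("response.json", "w") as out:
    out.write(vision.AnnotateImageResponse.to_json(response))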
How can I access the previous version?
We're currently using Firebase Functions with the Node 14 public preview.
We need to increase the memory of a function beyond 1 GB.
The Google Cloud Functions documentation specifies that max_old_space_size must be set for the newer runtimes, and shows:
gcloud functions deploy envVarMemory \
--runtime nodejs12 \
--set-env-vars NODE_OPTIONS="--max_old_space_size=8Gi" \
--memory 8Gi \
--trigger-http
However, the --set-env-vars option does not exist in firebase deploy.
Using
firebase deploy --only functions:myFunction --set-env-vars NODE_OPTIONS="--max_old_space_size=4Gi"
yields the error: unknown option '--set-env-vars'.
While deploying a heavy function, I predictably get a heap out-of-memory error:
[1:0x29c51e07b7a0] 120101 ms: Mark-sweep (reduce) 1017.1 (1028.5) -> 1016.2 (1028.7) MB, 928.7 / 0.1 ms (average mu = 0.207, current mu = 0.209) allocation failure scavenge might not succeed
[1:0x29c51e07b7a0] 119169 ms: Scavenge (reduce) 1016.9 (1025.2) -> 1016.2 (1026.5) MB, 3.6 / 0.0 ms (average mu = 0.205, current mu = 0.191) allocation failure
And we can see the function only has about 1028 MB of RAM, not 4 GB.
We did ask it to deploy with 4 GB:
functions.runWith({ memory: '4GB', timeoutSeconds: 300 })
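For context, the function is declared roughly like this (a minimal sketch; the export name and the HTTPS trigger are illustrative, not our actual handler):

const functions = require('firebase-functions');

exports.myFunction = functions
  .runWith({ memory: '4GB', timeoutSeconds: 300 })
  .https.onRequest((req, res) => {
    // memory-heavy work happens here
    res.status(200).send('done');
  });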
What is the key here?
We had exactly the same issue. It seems to happen when deploying functions with Node 12 or later.
Here is how we solved it:
Find your function in the GCP web console
Click on "Edit"
Scroll down and find "Runtime environment variables"
Add the key NODE_OPTIONS with the value --max_old_space_size=4096
This is really annoying, and I did not find any way to set this while deploying from the command line.
For the last two days, we have been getting "unknown blob" errors when pulling from JFrog. Here is a sample log:
Command ['ssh', '-o', 'StrictHostKeyChecking=no', '-o', 'LogLevel=ERROR', 'localhost', 'docker', 'pull', '<redacted>.jfrog.io/<redacted>:latest'] failed with exit code 1 and output 'latest: Pulling from <redacted>
f5d23c7fed46: Pulling fs layer
3f4aa1d1dde5: Pulling fs layer
52c4bf0b6229: Pulling fs layer
fe61f8f5a308: Pulling fs layer
ebeed9e8b27e: Pulling fs layer
89831686aa31: Pulling fs layer
2e2c5baec652: Pulling fs layer
b6fa760c79e4: Pulling fs layer
2e2c5baec652: Waiting
ebeed9e8b27e: Waiting
b6fa760c79e4: Waiting
fe61f8f5a308: Waiting
3f4aa1d1dde5: Verifying Checksum
3f4aa1d1dde5: Download complete
f5d23c7fed46: Verifying Checksum
f5d23c7fed46: Download complete
fe61f8f5a308: Download complete
ebeed9e8b27e: Download complete
89831686aa31: Download complete
f5d23c7fed46: Pull complete
3f4aa1d1dde5: Pull complete
2e2c5baec652: Verifying Checksum
2e2c5baec652: Download complete
b6fa760c79e4: Downloading
unknown blob
This seems to have started during the Kinesis outage; we first noticed it while trying to deploy a workaround during the outage, but the problem still persists.
The image pulls fine from Docker Hub, so it's not corrupted. This is currently breaking our automated deploy/provisioning process, as we have to manually pull failed images from Docker Hub.
Thanks,
-Caius
Following @John's suggestion, I zapped the cache on the JFrog side, and that removed the issue.
It seems to have been a stale/invalid cache issue.
Also, while looking at the JFrog logs, I did find this, which might be relevant:
2020-11-28T18:55:24.493Z [jfrt ] [ERROR] [b66d3ae308977fb1] [o.a.r.RemoteRepoBase:858 ] [ttp-nio-8081-exec-17] - IO error while trying to download resource '<redacted>: org.artifactory.request.RemoteRequestException: Error fetching <redacted>/blobs/sha256:9c11dabbdc3a450cd1d9e15b016d455250606d78eecb33c92eebfa657549787f (remote response: 429: Too Many Requests)
TL;DR: zapping the cache fixed the problem.
As outlined in http://wiki.bitplan.com/index.php/Apache_Jena#Script_to_start_Fuseki_server
I have been avoiding the complexity of Fuseki configuration files and have started the server from a script for my use cases, in which I only need one dataset/endpoint. For multiple datasets/endpoints I simply used multiple servers.
Descriptions like:
https://jena.apache.org/documentation/fuseki2/fuseki-config-endpoint.html
and questions like:
fuseki Multiple services found exception
have been intimidating me, since there seem to be so many options and no straightforward way to simply say: please use these datasets from the following directories, the way the command line version can do for one dataset.
Just look at:
https://users.jena.apache.narkive.com/MNZHLT25/multiple-datasets-on-fuseki
where the user expectation:
java -jar fuseki-0.1.0-server.jar --update --loc=data /dataset
--loc=data2 /dataset2
was unfortunately not fulfilled. Instead:
http://jena.apache.org/documentation/serving_data/index.html#fuseki-configuration-file
was the answer at the time, which is now an outdated link.
So obviously there are people out there getting Fuseki to work with multiple datasets. But how do they do it?
I know how to load a TDB store from a triple file via the command line. I know that I could use the web GUI to set up datasets and load data, but that won't work for my multi-million (and partly multi-billion) triple files.
What is a (hopefully simple) example of loading multiple triple files, making the results available as different datasets on the same Fuseki server, and having the SPARQL endpoints running (partly read-only)?
https://jena.apache.org/documentation/fuseki2/fuseki-layout.html gives a hint on the layout of files.
Using the script to start Fuseki, I inspected the run directory, which in my case was found at:
apache-jena-fuseki-3.16.0/run
There are two subdirectories which are initially empty and stay so if you run things from the command line:
configuration
databases
By adding a dataset via the web GUI at http://localhost:3030,
a directory with the name of the dataset, in this case:
databases/cr
and a configuration file
configuration/cr.ttl
are created.
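The generated cr.ttl is a Jena assembler description of the service. It looks roughly like the following sketch (the exact prefixes and dataset type vary with the Fuseki version and the storage type chosen in the GUI, so treat this as an illustration rather than the literal generated file):

@prefix fuseki: <http://jena.apache.org/fuseki#> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix tdb:    <http://jena.hpl.hp.com/2008/tdb#> .

# SPARQL service published at http://localhost:3030/cr
<#service> rdf:type fuseki:Service ;
    fuseki:name "cr" ;
    fuseki:serviceQuery "sparql" ;
    # omit the update endpoint for a read-only dataset
    fuseki:serviceUpdate "update" ;
    fuseki:dataset <#dataset> .

# TDB dataset stored under the run directory
<#dataset> rdf:type tdb:DatasetTDB ;
    tdb:location "databases/cr" .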
For smaller datasets, data can now be added via the web GUI. For bigger datasets, a copy or symlink of the originally loaded TDB data in the databases directory is necessary.
Example symlinks:
zeus:databases wf$ ls -l
total 48
drwxr-xr-x 4 wf admin 136 Sep 14 07:43 cr
lrwxr-xr-x 1 wf admin 27 Sep 15 11:53 dblp -> /Volumes/Torterra/dblp/data
lrwxr-xr-x 1 wf admin 26 Sep 14 08:10 gnd -> /Volumes/Torterra/gnd/data
lrwxr-xr-x 1 wf admin 42 Sep 14 07:55 wikidata -> /Volumes/Torterra/wikidata2020-08-15/data/
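The TDB directories behind such symlinks can be bulk loaded beforehand with the command line loader from the apache-jena distribution, for example (the input file name is illustrative; the target directory matches the dblp symlink above):

apache-jena-3.16.0/bin/tdbloader --loc=/Volumes/Torterra/dblp/data dblp.nt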
By restarting the server without a --loc
nohup java -jar fuseki-server.jar &
the configurations are automatically picked up.
The good news is that you do not have to bother with the details of the config files this way as long as you do not have any special needs.
I have an application URL. I need to run a login test using JMeter. I recorded the login steps using the BlazeMeter extension for Chrome, but when I run it I get the error below. I know there have been questions like this; I have tried a few of the suggestions, but it seems my case is different.
I have tried:
Added these two lines in jmeter.bat:
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_65
set PATH=%JAVA_HOME%\bin;%PATH%
Ran JMeter using "Run as Administrator"
Downloaded the certificate from https://gist.github.com/borisguery/9ef114c53b83e553b635 and installed it as shown here:
https://www.youtube.com/watch?v=2k581jcWk9M
Restarted JMeter and tried again, but no luck.
When I expand the error in JMeter's View Results Tree listener, I get an error on this particular CSS file: https://abcurl.xyzsample.com/assets/loginpage/css/okta-sign-in.min.7c7cfd15fa939095d61912dd8000a2a8.css
Error:
Thread Name: Thread Group 1-1
Load time: 268
Connect Time: 0
Latency: 0
Size in bytes: 2256
Headers size in bytes: 0
Body size in bytes: 2256
Sample Count: 1
Error Count: 1
Response code: Non HTTP response code: javax.net.ssl.SSLHandshakeException
Response message: Non HTTP response message: Received fatal alert: handshake_failure
Response headers:
HTTPSampleResult fields:
ContentType:
DataEncoding: null
If you are getting an error for only one .css file and it does not belong to the application under test (i.e. it is an external stylesheet), the best thing you could do is simply exclude it from the load test via the "URLs must match" section, which lives under the "Advanced" tab of the HTTP Request Defaults configuration element.
If you absolutely need to load this .css, you could also try the following approaches:
Play with the https.default.protocol and https.socket.protocols properties (look for these lines in the jmeter.properties file); see the sketch after this list
Install the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files into the /jre/lib/security folder of your JRE or JDK home (replace the existing files with the downloaded ones)
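For the first option, the relevant entries in jmeter.properties look like this (the protocol values below are only an example; force whatever your server actually supports, and restart JMeter after editing):

https.default.protocol=TLSv1.2
https.socket.protocols=TLSv1 TLSv1.1 TLSv1.2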
If your URL needs a client certificate, copy your cert to the /bin folder; then, from the JMeter console, go to Options -> SSL Manager and select your cert. It will prompt you for the certificate password, and if you run your tests again, that should work.
Additionally, you can also do a Keystore Configuration (http://jmeter.apache.org/usermanual/component_reference.html#Keystore_Configuration), if you haven't already.
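For the client certificate / keystore route, JMeter also honors the standard Java SSL system properties, which you can put into bin/system.properties (the keystore file name and password below are placeholders):

# placeholder keystore copied into the /bin folder
javax.net.ssl.keyStore=mycert.p12
javax.net.ssl.keyStoreType=pkcs12
javax.net.ssl.keyStorePassword=changeit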
Please note that my JMeter version is 4.0. Hope this helps.
I have a corrupted Lucene index. If I run "CheckIndex -fix" the problem is resolved, but as soon as I restart Tomcat it becomes corrupted again.
The index directory is shared between two application servers running Liferay on Tomcat. I am fixing the index on one server and restarting it whilst the other is running. This is a production environment, so I cannot bring them both down.
Any suggestions please?
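For reference, the CheckIndex run is along these lines (the lucene-core jar name here is a placeholder for whatever Lucene version Liferay ships; the index path is the one shown in the output below):

java -cp lucene-core.jar org.apache.lucene.index.CheckIndex /usr/local/tomcat/liferay/lucene/0 -fix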
Before fix, CheckIndex says:
Opening index # /usr/local/tomcat/liferay/lucene/0
Segments file=segments_5yk numSegments=1 version=FORMAT_SINGLE_NORM_FILE [Lucene 2.2]
1 of 1: name=_2vg docCount=31
compound=false
hasProx=true
numFiles=8
size (MB)=0.016
no deletions
test: open reader.........FAILED
WARNING: fixIndex() would remove reference to this segment; full exception:
java.io.IOException: read past EOF
at org.apache.lucene.store.BufferedIndexInput.refill(BufferedIndexInput.java:151)
at org.apache.lucene.store.BufferedIndexInput.readByte(BufferedIndexInput.java:38)
at org.apache.lucene.store.IndexInput.readVInt(IndexInput.java:78)
at org.apache.lucene.index.FieldInfos.read(FieldInfos.java:335)
at org.apache.lucene.index.FieldInfos.<init>(FieldInfos.java:71)
at org.apache.lucene.index.SegmentReader$CoreReaders.<init>(SegmentReader.java:119)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:652)
at org.apache.lucene.index.SegmentReader.get(SegmentReader.java:605)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:491)
at org.apache.lucene.index.CheckIndex.main(CheckIndex.java:903)
WARNING: 1 broken segments (containing 31 documents) detected
WARNING: would write new segments file, and 31 documents would be lost, if -fix were specified
If you access your search index from more than one application server, I would suggest integrating a Solr server, so you don't have the problem of two app servers trying to write to the same files. This is error-prone, as you have already found out.
To get Solr up and running, follow these steps:
Install a Solr server on any machine you like; a machine running only Solr is preferable.
Install the Solr search portlet in Liferay
Adjust the config files according to the setup document of the Solr Search portlet.
Here are some additional links:
http://www.liferay.com/de/marketplace/-/mp/application/15193648
http://www.liferay.com/de/community/wiki/-/wiki/Main/Pluggable+Enterprise+Search+with+Solr