Parse response from Google Cloud Vision API Python Client - google-cloud-vision

I am using the Python client for the Google Cloud Vision API, with basically the same code as in the documentation: http://google-cloud-python.readthedocs.io/en/latest/vision/
>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> response = client.annotate_image({
... 'image': {'source': {'image_uri': 'gs://my-test-bucket/image.jpg'}},
... 'features': [{'type': vision.enums.Feature.Type.FACE_DETECTION}],
... })
The problem is that the response doesn't have a field "annotations" (as the documentation describes), but instead has a field for each "type". So when I try to get response.face_annotations I get output I can't work with, and basically I don't know how to extract the result from the Vision API response (AnnotateImageResponse) into something like JSON/dictionary-like data.
The version of google-cloud-vision is 0.25.1, and it was installed as part of the full google-cloud library (pip install google-cloud).
I think today is not my day
I appreciate any clarification / help

Hm. It is a bit tricky, but the API is pretty great overall. You can actually call the face detection interface directly, and it'll give you back exactly what you want: the face annotations with all the info.
from google.cloud import vision
from google.cloud.vision import types
img = 'YOUR_IMAGE_URL'
client = vision.ImageAnnotatorClient()
image = vision.types.Image()
image.source.image_uri = img
faces = client.face_detection(image=image).face_annotations
print(faces)
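If you want individual fields rather than printing the whole thing, note that face_annotations is a sequence of FaceAnnotation messages, so you can iterate over it (a small sketch, with field names taken from the FaceAnnotation message):
for face in faces:
    # detection_confidence and joy_likelihood are fields of FaceAnnotation
    print(face.detection_confidence, face.joy_likelihood)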

The above answers may not help, because in practice the library behaves differently from what the documentation suggests.
The Vision response is not a JSON type; it is a custom class type designed for Vision calls.
So after much research, I came up with this solution and it works.
Here is the solution: convert the response's underlying protobuf message to JSON, and then extraction is simple.
import json

from google.protobuf.json_format import MessageToJson


def json_to_hash_dump(vision_response):
    """
    Convert the response from the Vision API to a plain dict
    by serializing its underlying protobuf message to JSON.
    Args:
        vision_response: an AnnotateImageResponse (or similar) object
    Returns:
        dict parsed from the JSON representation
    """
    json_obj = MessageToJson(vision_response._pb)
    # parse the JSON string into dict items
    r = json.loads(json_obj)
    return r
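For example (a hypothetical usage, reusing the face_detection call from the earlier answer; note that MessageToJson produces camelCase keys such as faceAnnotations):
response = client.face_detection(image=image)
result = json_to_hash_dump(response)
# 'faceAnnotations' is the camelCase key produced by MessageToJson
print(result.get('faceAnnotations', []))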

Well, an alternative is to use the Google API Python client; an example is here: https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/vision/api/label/label.py
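A rough sketch of that approach (assuming the google-api-python-client discovery interface, which returns plain dicts rather than protobuf objects; the bucket URI is the one from the question):
from googleapiclient import discovery

service = discovery.build('vision', 'v1')
request = service.images().annotate(body={
    'requests': [{
        'image': {'source': {'imageUri': 'gs://my-test-bucket/image.jpg'}},
        'features': [{'type': 'FACE_DETECTION'}],
    }]
})
# the response here is already a plain dict parsed from JSON
response = request.execute()
print(response['responses'][0].get('faceAnnotations', []))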

Related

How to test HTTP endpoints in nameko 2.14.1

I am a beginner with the Nameko framework. I have a very simple service with one GET and one POST endpoint, which works as expected when I run it locally.
I am trying to create a test case for my Nameko service and I can't seem to find documentation which clearly explains how to go about it.
import json

from nameko.web.handlers import http


class SampleService:
    name = "sample_service"

    @http("GET", "/health")
    def health(self, request):
        return 200, json.dumps({'status': "healthy"})

    @http("POST", "/create")
    def create_user(self, request):
        data = request.get_json(force=True)
        print(data)
        return 200, json.dumps({'status': 'created'})
The best reference I found for testing this was https://github.com/nameko/nameko-examples/tree/master/gateway/test/interface, and I am not entirely sure whether that code is up to date and can be easily replicated.
Any help on this would be much appreciated.
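Not a definitive answer, but one possible sketch, assuming the pytest fixtures shipped with nameko's testing plugin (container_factory, web_config and web_session):
import json

def test_health(container_factory, web_config, web_session):
    # run the service in a throwaway container bound to a local test web server
    container = container_factory(SampleService, web_config)
    container.start()

    response = web_session.get('/health')
    assert response.status_code == 200
    assert json.loads(response.text) == {'status': 'healthy'}

def test_create_user(container_factory, web_config, web_session):
    container = container_factory(SampleService, web_config)
    container.start()

    response = web_session.post('/create', json={'name': 'alice'})
    assert response.status_code == 200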

Scrape BSCScan Token Holdings Page

I'm trying to get data from this page
https://bscscan.com/tokenholdings?a=0xFAe2dac0686f0e543704345aEBBe0AEcab4EDA3d
But the website owner doesn't provide API endpoints for this purpose. So I tried to achieve it in different ways:
- Using dryscrape, but the library seems to be abandoned;
- Using requests, but the data is provided dynamically by JavaScript;
- Using requests-html, but even in this case the data doesn't seem to be loaded.
I would like to avoid Selenium because it's slow, but I don't know how to solve this issue. Does anyone have a solution that could work? The data I need is the table containing the tokens of the wallet. Thank you in advance and have a nice day.
You can do it with requests-html; for example, let's grab the symbol of the first row:
from requests_html import HTMLSession
session = HTMLSession()
url='https://bscscan.com/tokenholdings'
token={'a': '0xFAe2dac0686f0e543704345aEBBe0AEcab4EDA3d'}
r = session.get(url, params=token)
r.html.render(sleep=2)
binance_row = r.html.find('tbody tr', first=True)
symbol = binance_row.find('td')[2].text
print(symbol)
Output:
BNB
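If you need the whole table rather than one cell, a possible extension (an untested sketch; the column indices are an assumption based on the same page layout):
rows = r.html.find('tbody tr')
holdings = []
for row in rows:
    cells = [td.text for td in row.find('td')]
    if len(cells) >= 4:
        # assumed layout: cells[2] = symbol, cells[3] = quantity
        holdings.append({'symbol': cells[2], 'quantity': cells[3]})
print(holdings)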

Python passlib generate one time secret code

What is the easiest way to generate a one-time password (an SMS secret code N symbols long) with passlib?
How I'm creating it now:
from secrets import randbelow as secrets_randbelow

def create_secret_code() -> str:  # TODO use OTP
    secret_code = "".join([str(secrets_randbelow(exclusive_upper_bound=10)) for _ in range(config.SECRET_CODE_LEN)])
    print_on_stage(secret_code=secret_code)
    return secret_code
Obviously, it also needs to be checked that the generated code is not already in use (for example, by tracking it via Redis).
I also already have a passlib object in my code for hashing and verifying passwords:
from passlib.context import CryptContext
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
I found this class, but I can't figure out how to just generate an SMS secret code N symbols long.
P.S. I added a fastapi tag because I'm using FastAPI, and passlib is the standard cryptography tool recommended in its docs.
You can initialize the TOTP class with the number of digits you want for the token, like this:
TOTP(digits=10)
Here's a complete example, using your config.SECRET_CODE_LEN:
from passlib.totp import TOTP
otp = TOTP('s3jdvb7qd2r7jpxx', digits=config.SECRET_CODE_LEN)
token = otp.generate()
print(token.token)
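If you also need to check the code the user sends back, here is a rough sketch using the same otp object (match() raises a passlib TokenError when the code is wrong or expired):
from passlib import exc

def verify_code(submitted_token: str) -> bool:
    try:
        otp.match(submitted_token)
        return True
    except exc.TokenError:
        # wrong, malformed or expired token
        return False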

context.job_queue.run_once not working in Python Telegram BOT API

I'm trying to set up a bot which:
- Receives the keywords in a /search_msgs userkey command from a TG group
- Searches the DB for that userkey and sends the appropriate text back
I'm getting two errors:
- NoneType object has no attribute 'args', in callback_search_msgs(context), see code snippet
- AttributeError: 'int' object has no attribute 'job_queue', in search_msgs(update, context), see code snippet
Telegram's official documentation is way too difficult for me to read and understand. I couldn't find even one place where Updater, update, CommandHandler and context are all explained together with examples.
How to fix this code?
import telegram
from telegram.ext import Updater, CommandHandler, JobQueue

token = "Token"
bot = telegram.Bot(token=token)

# Search specific msgs on user request
def search_msgs(update, context):
    context.job_queue.run_once(callback_search_msgs, context=update.message.chat_id)

def callback_search_msgs(context):
    print('In TG, args', context.args)
    chat_id = context.job.context
    search_msgs(context, chat_id)

def main():
    updater = Updater(token, use_context=True)
    dp = updater.dispatcher
    dp.add_handler(CommandHandler("search_msgs", search_msgs, pass_job_queue=True,
                                  pass_user_data=True))
    updater.start_polling()
    updater.idle()

if __name__ == '__main__':
    main()
Let me first try & clear something up:
Telegram's official documentation is way too difficult for me to read and understand. I couldn't find even one place where Updater, update, CommandHandler and context are all explained together with examples.
I'm guessing that by "Telegram's official documentation" you mean the docs at https://core.telegram.org/bots/api. However, Updater, CommandHandler and context are concepts of python-telegram-bot, which is one (of many) Python libraries that provides a wrapper for the Bot API. python-telegram-bot provides a tutorial, examples, a wiki where lots of the features are explained, and documentation.
Now to your code:
In context.job_queue.run_once(callback_search_msgs, context=update.message.chat_id) you're not telling job_queue when to run the job. You must pass an integer or a datetime.(date)time object as the second argument.
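For example, a minimal sketch (assuming python-telegram-bot v13, where the second argument of run_once is the when parameter):
def search_msgs(update, context):
    # schedule callback_search_msgs to run 5 seconds from now,
    # passing the chat id along via the job's context
    context.job_queue.run_once(
        callback_search_msgs,
        5,  # when: seconds from now (an int/float or a datetime object)
        context=update.message.chat_id,
    )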
Then, in
def callback_search_msgs(context):
    print('In TG, args', context.args)
    chat_id = context.job.context
    search_msgs(context, chat_id)
you are passing context and chat_id to search_msgs. However, that function treats context as an instance of telegram.ext.CallbackContext, while you pass an integer instead. Also, even if that worked, this would just schedule another job in an infinite loop.
Finally, I don't understand what scheduling jobs has to do with looking up a key in a database. All you have to do for that is something like
def search_msgs(update, context):
    userkey = context.args[0]
    result = look_up_key_in_db(userkey)
    # this assumes that result is a string:
    update.effective_message.reply_text(result)
To understand context.args better, have a look at this wiki page.
Disclaimer: I'm currently the maintainer of python-telegram-bot.

LocustIO: How to do a batch request

I started to use LocustIO for load testing a 3rd party API which provides a way to do batch requests (http://docs.oasis-open.org/odata/odata/v4.01/odata-v4.01-part1-protocol.html#sec_BatchRequests).
How can this be done using LocustIO?
I tried with the following:
def batch(self):
    response = self.client.request(
        method="POST",
        url="/$batch",
        auth=("ABC", "DEF"),
        headers={"ContentType": "multipart/mixed; boundary=batch_36522ad7-fc75-4b56-8c71-56071383e77b"},
        data="Content-Type: application/http\nContent-Transfer-Encoding: binary\n\nGET putyoururlhere HTTP/1.1\nAccept: application/json\n\n\n",
    )
Auth is something I need for authentication to the API, but that's not the point of the question, and "putyoururlhere" should be replaced with the actual URL. Either way, it gives errors when executing the test, so I must be doing something wrong.
Does anyone have experience with how to do this?
Kind regards!
The data parameter should be your POST body (only); you can't put additional headers in it the way you did. You probably just want to add them as additional entries in the dict you pass as headers.
See the documentation for the Python requests library for more details: https://requests.readthedocs.io/en/master/
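As a rough, unverified sketch (the boundary, auth values and "putyoururlhere" placeholder come from the question, and the multipart layout follows the OData batch format linked above), the request could look like this:
def batch(self):
    boundary = "batch_36522ad7-fc75-4b56-8c71-56071383e77b"
    # each part of the multipart body carries its own mini HTTP request
    body = (
        "--" + boundary + "\r\n"
        "Content-Type: application/http\r\n"
        "Content-Transfer-Encoding: binary\r\n"
        "\r\n"
        "GET putyoururlhere HTTP/1.1\r\n"
        "Accept: application/json\r\n"
        "\r\n"
        "--" + boundary + "--\r\n"
    )
    response = self.client.request(
        method="POST",
        url="/$batch",
        auth=("ABC", "DEF"),
        headers={"Content-Type": "multipart/mixed; boundary=" + boundary},
        data=body,
    )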
