Python passlib generate one time secret code - fastapi

What is the easiest way to generate a one-time password (an SMS secret code of N digits) with passlib?
How I'm creating it now:
from secrets import randbelow as secrets_randbelow
def create_secret_code() -> str:  # TODO use OTP
    secret_code = "".join(
        str(secrets_randbelow(exclusive_upper_bound=10))
        for _ in range(config.SECRET_CODE_LEN)
    )
    print_on_stage(secret_code=secret_code)
    return secret_code
Obviously, it also needs a check that the generated code is not already in use (for example, via Redis).
I also already have a passlib context in my code for hashing and verifying passwords:
from passlib.context import CryptContext
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
I found this class, but I can't figure out how to just generate an SMS secret code of N digits with it.
P.S. I added the fastapi tag because I'm using FastAPI, and passlib is the standard cryptography tool recommended in its docs.

You can initialize the TOTP class with the number of digits you want for the token, like this:
TOTP(digits=10)
Here's a complete example, using your config.SECRET_CODE_LEN:
from passlib.totp import TOTP
otp = TOTP('s3jdvb7qd2r7jpxx', digits=config.SECRET_CODE_LEN)
token = otp.generate()
print(token.token)
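If you only need a random numeric code and not time-based verification, the standard library's secrets module is enough on its own; a minimal sketch (the function name and default length are just illustrations):

```python
import secrets

def create_secret_code(length: int = 6) -> str:
    # Zero-padded so the code always has exactly `length` digits
    return f"{secrets.randbelow(10 ** length):0{length}d}"
```

Uniqueness still has to be enforced externally (e.g. in Redis), exactly as noted in the question.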

Related

Can I share an object between telegram commands of a bot?

I want to create an object when the user presses /start in a Telegram bot, and then share this object among all the commands of the bot. Is this possible? As far as I understand, there's only one thread of the bot running on the server. However, I see that there is a context in the command functions. Can I pass this object as a kind of context? For example:
'''
This is a class object that I created to store data from the user and configure the texts I'll display depending on
the user language but maybe I fill it also with info about something it will buy in the bot
'''
import configuration
from telegram import Update, ForceReply
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, CallbackContext
# Commands of the bot
def start(update: Update, context: CallbackContext) -> None:
    """Send a message when the command /start is issued."""
    s = configuration.conf(update)  # Create the object I'm talking about
    update.message.reply_markdown_v2(
        s.text[s.lang_active],
        reply_markup=ForceReply(selective=True),
    )

def check(update: Update, context: CallbackContext) -> None:
    """Send a message when the command /check is issued."""
    s = configuration.conf(update)  # I want to avoid this!
    update.message.reply_markdown_v2(
        s.text[s.lang_active],
        reply_markup=ForceReply(selective=True),
    )

... REST OF THE BOT
python-telegram-bot already comes with a built-in mechanism for storing data. You can do something like
try:
    s = context.user_data['config']
except KeyError:
    s = configuration.conf(update)
    context.user_data['config'] = s
This doesn't have to be repeated in every callback - you can e.g.

- use a TypeHandler in a low group to create the config if needed; then in all handlers in higher groups you don't need to worry about it
- use a custom implementation of CallbackContext that adds a property context.user_config
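A minimal sketch of the first option, assuming python-telegram-bot v13 and the question's configuration module; dispatcher is assumed to be the bot's Dispatcher, and the handler name and group number are illustrative:

```python
from telegram import Update
from telegram.ext import TypeHandler, CallbackContext

import configuration  # the question's own module

def ensure_config(update: Update, context: CallbackContext) -> None:
    # Runs before the command handlers because it is registered in a lower group
    if 'config' not in context.user_data:
        context.user_data['config'] = configuration.conf(update)

# group=-1 runs before the default group 0 where start/check are registered
dispatcher.add_handler(TypeHandler(Update, ensure_config), group=-1)
```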
Disclaimer: I'm currently the maintainer of python-telegram-bot.

context.job_queue.run_once not working in Python Telegram BOT API

I'm trying to setup a bot which:
Receives the keywords in /search_msgs userkey command from a TG group
Search in DB for that userkey and send back appropriate text back
I'm getting two errors:
- 'NoneType' object has no attribute 'args', in callback_search_msgs(context), see code snippet
- AttributeError: 'int' object has no attribute 'job_queue', in search_msgs(update, context), see code snippet
Telegram's official documentation is way too difficult for me to read and understand. I couldn't find even one place where Updater, update, CommandHandler, and context are all explained together with examples.
How to fix this code?
import telegram
from telegram.ext import Updater, CommandHandler, JobQueue

token = "Token"
bot = telegram.Bot(token=token)

# Search specific msgs on user request
def search_msgs(update, context):
    context.job_queue.run_once(callback_search_msgs, context=update.message.chat_id)

def callback_search_msgs(context):
    print('In TG, args', context.args)
    chat_id = context.job.context
    search_msgs(context, chat_id)

def main():
    updater = Updater(token, use_context=True)
    dp = updater.dispatcher
    dp.add_handler(CommandHandler("search_msgs", search_msgs, pass_job_queue=True,
                                  pass_user_data=True))
    updater.start_polling()
    updater.idle()

if __name__ == '__main__':
    main()
Let me first try & clear something up:
Telegram's official documentation is way too difficult for me to read and understand. I couldn't find even one place where Updater, update, CommandHandler, and context are all explained together with examples.
I'm guessing that by "Telegram's official documentation" you mean the docs at https://core.telegram.org/bots/api. However, Updater, CommandHandler and context are concepts of python-telegram-bot, which is one (of many) Python libraries that provide a wrapper for the Bot API. python-telegram-bot provides a tutorial, examples, a wiki where lots of the features are explained, and documentation.
Now to your code:
In context.job_queue.run_once(callback_search_msgs, context=update.message.chat_id) you're not telling job_queue when to run the job. You must pass an integer or a datetime.(date)time object as the second argument.
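For example, the corrected call might look like this fragment (the 5-second delay is just an illustration):

```python
# Inside search_msgs: schedule the job to run 5 seconds from now;
# the second positional argument is the `when` parameter
context.job_queue.run_once(callback_search_msgs, 5, context=update.message.chat_id)
```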
in
def callback_search_msgs(context):
    print('In TG, args', context.args)
    chat_id = context.job.context
    search_msgs(context, chat_id)
you are passing context and chat_id to search_msgs. However, that function treats context as an instance of telegram.ext.CallbackContext, while you pass an integer instead. Also, even if that worked, this would just schedule another job in an infinite loop.
Finally, I don't understand what scheduling jobs has to do with looking up a key in a database. All you have to do for that is something like
def search_msgs(update, context):
    userkey = context.args[0]
    result = look_up_key_in_db(userkey)
    # this assumes that result is a string:
    update.effective_message.reply_text(result)
To understand context.args better, have a look at this wiki page.
Disclaimer: I'm currently the maintainer of python-telegram-bot.

How to make an Airflow DAG read from a google spread sheet using a stored connection

I'm trying to build up Airflow DAGs that read data from (or write data to) some Google spread sheets.
Among the connections in Airflow I've saved a connection of type "Google Cloud Platform", which includes project_id, scopes and, under "Keyfile JSON", a dictionary with
"type","project_id","private_key_id","private_key","client_email","client_id",
"auth_uri","token_uri","auth_provider_x509_cert_url","client_x509_cert_url"
I can connect to the Google Spread Sheet using
cred_dict = ... same as what I saved in Keyfile JSON ...
creds = ServiceAccountCredentials.from_json_keyfile_dict(cred_dict,scope)
client = gspread.authorize(creds)
sheet = client.open(myfile).worksheet(mysheet) # works!
But I would prefer to not write explicitly the key in the code and, instead, import it from Airflow connections.
I'd like to know if there is a solution of the like of
from airflow.hooks.some_hook import get_the_keyfile
conn_id = my_saved_gcp_connection
cred_dict = get_the_keyfile(gcp_conn_id=conn_id)
creds = ServiceAccountCredentials.from_json_keyfile_dict(cred_dict,scope)
client = gspread.authorize(creds)
sheet = client.open(myfile).worksheet(mysheet)
I see there are several hooks for GCP connections (https://airflow.apache.org/howto/connection/gcp.html), but my limited knowledge leaves me unsure which one to use and which function (if any) extracts the keyfile from the saved connection.
Any suggestion would be greatly welcomed :)
Below is the code I'm using to connect to gspread sheets from Airflow using a stored connection.
import json
import gspread
from oauth2client.service_account import ServiceAccountCredentials
from airflow.contrib.hooks.gcp_api_base_hook import GoogleCloudBaseHook

# Standard gspread scopes (not shown in the original snippet)
scope = ['https://spreadsheets.google.com/feeds',
         'https://www.googleapis.com/auth/drive']

def get_cred_dict(conn_id='my_google_connection'):
    gcp_hook = GoogleCloudBaseHook(gcp_conn_id=conn_id)
    return json.loads(gcp_hook._get_field('keyfile_dict'))

def get_client(conn_id='my_google_connection'):
    cred_dict = get_cred_dict(conn_id)
    creds = ServiceAccountCredentials.from_json_keyfile_dict(cred_dict, scope)
    client = gspread.authorize(creds)
    return client

def get_sheet(doc_name, sheet_name):
    client = get_client()
    sheet = client.open(doc_name).worksheet(sheet_name)
    return sheet
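Hypothetical usage of get_sheet, assuming the worksheet's first row is a header (document and sheet names are illustrative):

```python
sheet = get_sheet('my_doc_name', 'my_sheet_name')
rows = sheet.get_all_records()  # list of dicts keyed by the header row
for row in rows:
    print(row)
```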
With Airflow 2.5.1 (in 2023) the following code works too.
from airflow.providers.google.common.hooks.base_google import GoogleBaseHook
import gspread

# Create a hook object
# When using the google_cloud_default connection we can use
# hook = GoogleBaseHook()
# Or, for a delegate, use: GoogleBaseHook(delegate_to='foo@bar.com')
hook = GoogleBaseHook(gcp_conn_id='my_google_cloud_conn_id')

# Get the credentials
credentials = hook.get_credentials()

# Optional: set the delegate email if needed later.
# You need a domain-wide delegation service account to use this.
credentials = credentials.with_subject('foo@bar.com')

# Use the credentials to authenticate the gspread client
gc = gspread.Client(auth=credentials)

# Create a spreadsheet
gc.create('Yabadabadoooooooo')  # Optionally pass folder_id=
gc.list_spreadsheet_files()
Resources:
gspread Client documentation
GoogleBaseHook documentation

How to increment Firestore field value in python?

shard_ref.update("count", firebase.firestore.FieldValue.increment(1));
I am looking for a way to increment and update a counter field in Python; I'm not even able to update it with a predefined value. The docs don't include any Python samples.
I am using the firebase_admin SDK like this:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
For more, check the docs: https://firebase.google.com/docs/firestore/solutions/counters
Unfortunately (to me it just doesn't feel right), adding support for transforms in your codebase means you have to use google-cloud-firestore alongside firebase_admin.
You can then use Transforms such as Increment, ArrayRemove, etc.
Sample code:
from google.cloud.firestore_v1 import Increment
# assuming shard_ref is a firestore document reference
shard_ref.update({'count': Increment(1)})
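A more complete sketch, assuming firebase_admin has not been initialized yet; the key-file path and the collection/document names are hypothetical:

```python
import firebase_admin
from firebase_admin import credentials, firestore
from google.cloud.firestore_v1 import Increment

cred = credentials.Certificate('serviceAccount.json')  # hypothetical key file
firebase_admin.initialize_app(cred)
db = firestore.client()

# Atomically add 1 to `count` without a read-modify-write round trip
db.collection('counters').document('shard_0').update({'count': Increment(1)})
```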
Another option is to use the Firebase Realtime Database URL as a REST endpoint (all we need to do is append .json to the end of the URL and send a request from our favorite HTTPS client).
For your case, use a conditional request; an explanation and example are described at https://firebase.google.com/docs/database/rest/save-data#section-conditional-requests

Parse response from Google Cloud Vision API Python Client

I am using the Python Client for the Google Cloud Vision API, basically the same code as in the documentation: http://google-cloud-python.readthedocs.io/en/latest/vision/
>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> response = client.annotate_image({
... 'image': {'source': {'image_uri': 'gs://my-test-bucket/image.jpg'}},
... 'features': [{'type': vision.enums.Feature.Type.FACE_DETECTION}],
... })
The problem is that the response doesn't have a field "annotations" (as in the documentation) but, based on the documentation, has a field for each "type"; when I try to get response.face_annotations I don't get what I expect, and basically I don't know how to extract the Vision API result from the response (AnnotateImageResponse) as JSON/dictionary-like data.
The version of google-cloud-vision is 0.25.1, and it was installed as part of the full google-cloud library (pip install google-cloud).
I think today is not my day
I appreciate any clarification / help
Hm. It is a bit tricky, but the API is pretty great overall. You can actually directly call the face detection interface, and it'll spit back exactly what you want - a dictionary with all the info.
from google.cloud import vision
from google.cloud.vision import types

img = 'YOUR_IMAGE_URL'

client = vision.ImageAnnotatorClient()
image = vision.types.Image()
image.source.image_uri = img

faces = client.face_detection(image=image).face_annotations
print(faces)
The above answers may not help, because the library keeps evolving and reality has drifted from what they describe.
The Vision response is not JSON; it is a custom class type tailored to Vision calls.
After much research I put together the following solution, and it works: convert the response via its protobuf representation to JSON, and then extraction is simple.
import json

from google.protobuf.json_format import MessageToJson

def json_to_hash_dump(vision_response):
    """
    Convert the response from the Vision API to a dict
    via the protobuf JSON transformation.
    Args:
        vision_response: an AnnotateImageResponse
    Returns:
        dict
    """
    json_obj = MessageToJson(vision_response._pb)
    # to dict items
    r = json.loads(json_obj)
    return r
An alternative is to use the Google API Python client; an example is here: https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/vision/api/label/label.py
