How to increment Firestore field value in python? - firebase

The docs show the JavaScript syntax:
shard_ref.update("count", firebase.firestore.FieldValue.increment(1));
I am looking for a way to increment and update in Python; I am not able to even update the field with a predefined value in Python, and the docs don't include any Python samples.
I am using the firebase_admin SDK like this:
import firebase_admin
from firebase_admin import credentials
from firebase_admin import firestore
For more, check the docs: https://firebase.google.com/docs/firestore/solutions/counters

Unfortunately (to me it just doesn't feel right), adding support for transforms in your codebase means you have to use google-cloud-firestore alongside firebase_admin.
You can then use Transforms such as Increment, ArrayRemove, etc.
Sample code:
from google.cloud.firestore_v1 import Increment
# assuming shard_ref is a firestore document reference
shard_ref.update({'count': Increment(1)})
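For context, here is a minimal sketch of how shard_ref might be obtained end to end through firebase_admin; the service-account path and the collection/document names are placeholders, not anything from the question:
import firebase_admin
from firebase_admin import credentials, firestore
from google.cloud.firestore_v1 import Increment

# Initialize the app with a service account (path is a placeholder)
cred = credentials.Certificate('path/to/serviceAccountKey.json')
firebase_admin.initialize_app(cred)

db = firestore.client()
# 'counters' and 'shard_0' are hypothetical names for this example
shard_ref = db.collection('counters').document('shard_0')

# Atomically increment the 'count' field on the server
shard_ref.update({'count': Increment(1)})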

Another option is to use the Firebase Realtime Database URL as a REST endpoint (all we need to do is append .json to the end of the URL and send a request from our favorite HTTPS client).
For your case, use a conditional request; an explanation and example are described at https://firebase.google.com/docs/database/rest/save-data#section-conditional-requests
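A rough sketch of that conditional read-modify-write with the requests library, assuming a hypothetical database URL and counter path (the ETag/if-match mechanics are from the page linked above):
import requests

# Hypothetical database URL and counter path for this example
url = 'https://my-project.firebaseio.com/counters/count.json'

# Ask the server to include an ETag for the current value
resp = requests.get(url, headers={'X-Firebase-ETag': 'true'})
etag = resp.headers['ETag']
current = resp.json() or 0

# Conditional write: succeeds only if the value hasn't changed in the meantime
write = requests.put(url, json=current + 1, headers={'if-match': etag})
if write.status_code == 412:  # Precondition Failed: someone else wrote first
    pass  # re-read and retry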

Related

Python passlib generate one time secret code

What is the easiest way to generate a one-time password (an SMS secret code N symbols long) with passlib?
How I'm creating it now:
from secrets import randbelow as secrets_randbelow
def create_secret_code() -> str:  # TODO use OTP
    secret_code = "".join([str(secrets_randbelow(exclusive_upper_bound=10)) for _ in range(config.SECRET_CODE_LEN)])
    print_on_stage(secret_code=secret_code)
    return secret_code
Obviously, it also needs to check that a generated code is not already in use (for example, via Redis).
I also already have a passlib object in my code for hashing and verifying passwords:
from passlib.context import CryptContext
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
I found the TOTP class, but can't figure out how to just generate an SMS secret code N symbols long.
P.S. I added a fastapi tag because I'm using FastAPI, and passlib is its standard cryptography tool (see the docs).
You can initialize the TOTP class with the number of digits you want for the token, like this:
TOTP(digits=10)
Here's a complete example, using your config.SECRET_CODE_LEN:
from passlib.totp import TOTP

# 's3jdvb7qd2r7jpxx' is an example base32 secret; digits sets the token length
otp = TOTP('s3jdvb7qd2r7jpxx', digits=config.SECRET_CODE_LEN)
token = otp.generate()
print(token.token)
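If you don't already have a secret, passlib can generate one for you as well; a small sketch (note that, per the passlib docs, digits has a small allowed range that tops out at 10, so config.SECRET_CODE_LEN has to fit there):
from passlib.totp import TOTP

# new=True asks passlib to generate a fresh random key
otp = TOTP(new=True, digits=8)   # 8 is an example length
print(otp.base32_key)            # store this secret server-side
print(otp.generate().token)      # the one-time code to send by SMS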

How to make an Airflow DAG read from a google spread sheet using a stored connection

I'm trying to build Airflow DAGs that read data from (or write data to) some Google spreadsheets.
Among the connections in Airflow I've saved a connection of type "Google Cloud Platform", which includes project_id, scopes, and, under "Keyfile JSON", a dictionary with the keys
"type", "project_id", "private_key_id", "private_key", "client_email", "client_id",
"auth_uri", "token_uri", "auth_provider_x509_cert_url", "client_x509_cert_url"
I can connect to the Google spreadsheet using
cred_dict = ... same as what I saved in Keyfile JSON ...
creds = ServiceAccountCredentials.from_json_keyfile_dict(cred_dict, scope)
client = gspread.authorize(creds)
sheet = client.open(myfile).worksheet(mysheet) # works!
But I would prefer not to write the key explicitly in the code and, instead, import it from the Airflow connections.
I'd like to know if there is a solution along the lines of
from airflow.hooks.some_hook import get_the_keyfile
conn_id = my_saved_gcp_connection
cred_dict = get_the_keyfile(gcp_conn_id=conn_id)
creds = ServiceAccountCredentials.from_json_keyfile_dict(cred_dict, scope)
client = gspread.authorize(creds)
sheet = client.open(myfile).worksheet(mysheet)
I see there are several hooks for GCP connections (https://airflow.apache.org/howto/connection/gcp.html), but with my limited knowledge I can't work out which one to use, or which function (if any) extracts the keyfile from the saved connection.
Any suggestion would be greatly welcomed :)
Below is the code I'm using to connect to gspread sheets from Airflow using a stored connection.
import json
import gspread
from oauth2client.service_account import ServiceAccountCredentials
from airflow.contrib.hooks.gcp_api_base_hook import GoogleCloudBaseHook

# Scopes assumed for gspread access; adjust to your case
scope = ['https://spreadsheets.google.com/feeds',
         'https://www.googleapis.com/auth/drive']

def get_cred_dict(conn_id='my_google_connection'):
    # Read the "Keyfile JSON" field from the stored Airflow connection
    gcp_hook = GoogleCloudBaseHook(gcp_conn_id=conn_id)
    return json.loads(gcp_hook._get_field('keyfile_dict'))

def get_client(conn_id='my_google_connection'):
    cred_dict = get_cred_dict(conn_id)
    creds = ServiceAccountCredentials.from_json_keyfile_dict(cred_dict, scope)
    client = gspread.authorize(creds)
    return client

def get_sheet(doc_name, sheet_name):
    client = get_client()
    sheet = client.open(doc_name).worksheet(sheet_name)
    return sheet
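For instance, a usage sketch with hypothetical document and worksheet names:
sheet = get_sheet('my_doc', 'my_sheet')
print(sheet.get_all_records())  # list of dicts, one per row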
With Airflow 2.5.1 (in 2023), the following code works too.
from airflow.providers.google.common.hooks.base_google import GoogleBaseHook
import gspread

# Create a hook object.
# When using the google_cloud_default connection you can simply use
# hook = GoogleBaseHook()
# Or, for a delegate, use: GoogleBaseHook(delegate_to='foo@bar.com')
hook = GoogleBaseHook(gcp_conn_id='my_google_cloud_conn_id')
# Get the credentials
credentials = hook.get_credentials()
# Optionally, set the delegate email if needed later.
# You need a domain-wide delegation service account to use this.
credentials = credentials.with_subject('foo@bar.com')
# Use the credentials to authenticate the gspread client
gc = gspread.Client(auth=credentials)
# Create Spreadsheet
gc.create('Yabadabadoooooooo')  # Optionally pass folder_id=
gc.list_spreadsheet_files()
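And a short follow-up for reading an existing document with that client (the spreadsheet title is hypothetical):
sh = gc.open('my_doc')
ws = sh.sheet1              # first worksheet
print(ws.get_all_values())  # list of rows, each a list of cell strings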
Resources:
gspread Client documentation
GoogleBaseHook documentation

Airflow Custom Metrics and/or Result Object with custom fields

While running pySpark SQL pipelines via Airflow I am interested in getting out some business stats like:
source read count
target write count
sizes of DFs during processing
error records count
One idea is to push them directly to the metrics, so they get consumed automatically by monitoring tools like Prometheus. Another idea is to obtain these values via some DAG result object, but I wasn't able to find anything about that in the docs.
Please post at least some pseudocode if you have a solution.
I would look to reuse Airflow's statistics and monitoring support in the airflow.stats.Stats class. Maybe something like this:
import logging

from airflow.stats import Stats

PYSPARK_LOG_PREFIX = "airflow_pyspark"

def your_python_operator(**context):
    [...]
    try:
        Stats.incr(f"{PYSPARK_LOG_PREFIX}_read_count", src_read_count)
        Stats.incr(f"{PYSPARK_LOG_PREFIX}_write_count", tgt_write_count)
        # So on and so forth
    except Exception:
        logging.exception("Caught exception during statistics logging")
    [...]
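Stats also has gauge counterparts, which may fit the DataFrame-size numbers better than counters; a hedged sketch (df_size_bytes and error_count are hypothetical variables, and the metrics only leave the process if StatsD support is enabled in the Airflow configuration):
from airflow.stats import Stats

# Hypothetical values computed earlier in the pipeline
df_size_bytes = 123_456
error_count = 7

Stats.gauge("airflow_pyspark_df_size_bytes", df_size_bytes)
Stats.incr("airflow_pyspark_error_records", error_count)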

Cannot produce Angular 6 timestamp, missing imports

I am trying to produce a timestamp on click of a button, and another timestamp on click of another button. Essentially a punch clock. I understand how to create the buttons, but have no clue which packages to import to produce the timestamp... and yes, I've searched the entire web for this.
Ultimately I'd like to register it in the Firebase DB for each registered user, but we'll get to that later.
import * as firebase from 'firebase'
user.createdAt = firebase.firestore.FieldValue.serverTimestamp()
user.updatedAt = firebase.firestore.FieldValue.serverTimestamp()
From the docs: https://firebase.google.com/docs/reference/js/firebase.firestore.FieldValue#.serverTimestamp

Parse response from Google Cloud Vision API Python Client

I am using the Python client for the Google Cloud Vision API, with basically the same code as in the documentation: http://google-cloud-python.readthedocs.io/en/latest/vision/
>>> from google.cloud import vision
>>> client = vision.ImageAnnotatorClient()
>>> response = client.annotate_image({
... 'image': {'source': {'image_uri': 'gs://my-test-bucket/image.jpg'}},
... 'features': [{'type': vision.enums.Feature.Type.FACE_DETECTION}],
... })
The problem is that the response doesn't have a field "annotations" (as the documentation suggests) but instead has a field for each feature "type". When I try to get response.face_annotations, I don't know how to extract the result from the response (an AnnotateImageResponse) into something like JSON/dictionary-like data.
The version of google-cloud-vision is 0.25.1, installed as the full google-cloud library (pip install google-cloud).
I think today is not my day.
I appreciate any clarification / help.
Hm. It is a bit tricky, but the API is pretty great overall. You can actually directly call the face detection interface, and it'll spit back exactly what you want - a dictionary with all the info.
from google.cloud import vision
from google.cloud.vision import types

img = 'YOUR_IMAGE_URL'

client = vision.ImageAnnotatorClient()
image = types.Image()
image.source.image_uri = img

faces = client.face_detection(image=image).face_annotations
print(faces)
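To pull individual values out of those results, each item is a FaceAnnotation message, so something like this sketch should work:
for face in faces:
    # Field names per the FaceAnnotation message
    print(face.detection_confidence)
    print(face.joy_likelihood)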
The answers above won't help, because the library keeps evolving and reality has drifted from what the documentation describes.
The Vision response is not a JSON type; it is a custom class type tailored to Vision calls.
So after much research, I conjured this solution, and it works: convert the response to its protobuf form and then to JSON, and the extraction becomes simple.
import json

from google.protobuf.json_format import MessageToJson

def json_to_hash_dump(vision_response):
    """
    Convert a response from the Vision API into a plain dictionary
    by going through its protobuf representation.
    Args:
        vision_response: an AnnotateImageResponse
    Returns:
        dict
    """
    json_obj = MessageToJson(vision_response._pb)
    # parse the JSON string into dict items
    return json.loads(json_obj)
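If you'd rather skip the intermediate JSON string, protobuf also ships MessageToDict, which collapses this to one call (response here being the AnnotateImageResponse from the question):
from google.protobuf.json_format import MessageToDict

# Same idea as above, minus the json.loads round-trip
response_dict = MessageToDict(response._pb)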
Well, an alternative is to use the Google API Python client; an example is here: https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/vision/api/label/label.py
