telethon python mark messages as read - telegram

I have the following code to get messages from a group:
getmessage = client.get_messages(dialog, limit=1000)
for message in getmessage:
    try:
        if message.media is None:
            print("message")
            continue
        else:
            print("Media**********")
            client.download_media(message)
    except Exception:
        # skip messages that fail to download
        continue
I'm running this code every minute, and I receive the same messages on every run. How can I get only the new messages (since the last run)?
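One way to do this (a sketch, not from the original thread, assuming the same client and dialog objects as above): remember the highest message ID you have processed and pass it as min_id on the next poll, so Telegram only returns newer messages. Telethon's send_read_acknowledge can also mark the fetched messages as read, which matches the question title:

import time

last_id = 0  # highest message ID processed so far
while True:
    # min_id skips everything we have already seen
    messages = client.get_messages(dialog, limit=1000, min_id=last_id)
    for message in reversed(messages):  # get_messages returns newest first
        if message.media is None:
            print("message")
        else:
            print("Media**********")
            client.download_media(message)
        last_id = max(last_id, message.id)
    if last_id:
        # optionally mark everything up to last_id as read
        client.send_read_acknowledge(dialog, max_id=last_id)
    time.sleep(60)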

Related

Got Future <Future pending> attached to a different loop when using eventhub aio in Fastapi python

I am using Python 3.9, azure-eventhub 5.10.1, and azure-eventhub-checkpointstoreblob-aio, and I have the following code that throws the exception regularly (we also have lots of successful cases that send the message with no error), but we also get the runtime errors below in the logs. Wondering what I did wrong here. Thanks.
async def send_to_eventhub(self, producer, event_list, timestamp_event_received):
    try:
        async with producer:
            event_data_batch = await producer.create_batch()
            for (occupancy_status, hardware_id) in event_list:
                # set message properties for space report
                message_body = {
                    ...
                }
                message = EventData(json.dumps(message_body))
                message.properties = {
                    ...
                }
                # Send message to the eventhub
                logger.info("Sending message %s, %s", message, message.properties)
                event_data_batch.add(message)
            await producer.send_batch(event_data_batch)
            logger.info(
                "Message successfully sent %s, %s", message, message.properties
            )
    except (
        EventDataError,
        EventDataSendError,
        OperationTimeoutError,
        OwnershipLostError,
        RuntimeError,
    ) as event_ex:
        logger.error(
            "eventhub Sending Error: Error ocurred "
            "sending message for hardware id %s %s %s",
            hardware_id,
            event_ex,
            traceback.format_exc(),
        )
And this function is called from the following FastAPI handler:
@app.post(...)
async def handle_report(
    ...
):
    ...
    try:
        if len(incoming_data) > 0:
            event_list = []
            for sensor_data in incoming_data:
                data = sensor_data["data"]
                occupancy_status = json.loads(data)["value"]
                hardware_id = sensor_data["properties"]["propertyList"][0]["value"]
                event_list.append((occupancy_status, hardware_id))
            await eventhub_helper.send_to_eventhub(
                producer, event_list, received_timestamp
            )
    ...
And the exception says:
eventhub Sending Error: Error ocurred sending message for hardware id TSPR04ESH11000268 Task <Task pending name='Task-544711411' coro=<RequestResponseCycle.run_asgi() running at /opt/pysetup/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py:375> cb=[set.discard()]> got Future <Future pending> attached to a different loop Traceback (most recent call last):
File "/app/eventhub_helper.py", line 94, in send_to_eventhub
logger.info(
File "/opt/pysetup/.venv/lib/python3.9/site-packages/azure/eventhub/aio/_producer_client_async.py", line 218, in __aexit__
await self.close()
File "/opt/pysetup/.venv/lib/python3.9/site-packages/azure/eventhub/aio/_producer_client_async.py", line 811, in close
async with self._lock:
File "/usr/local/lib/python3.9/asyncio/locks.py", line 14, in __aenter__
await self.acquire()
File "/usr/local/lib/python3.9/asyncio/locks.py", line 120, in acquire
await fut
RuntimeError: Task <Task pending name='Task-544711411' coro=<RequestResponseCycle.run_asgi() running at /opt/pysetup/.venv/lib/python3.9/site-packages/uvicorn/protocols/http/httptools_impl.py:375> cb=[set.discard()]> got Future <Future pending> attached to a different loop
I tried to reproduce this error, but it was hard because the code usually went through with no error. Wondering if I did not consider concurrency enough. I did notice that event_data_batch.add(message) can raise an error if the batch is full, but I don't think that could cause a runtime error, and I know the messages we send are small.
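A common cause of "got Future ... attached to a different loop" (a hypothesis to check, not something confirmed in this thread) is that the producer client, together with the asyncio lock it holds internally, was created on a different event loop than the one uvicorn serves requests on, for example at module import time. A minimal sketch of one way to rule this out, with CONN_STR and EVENTHUB_NAME as hypothetical placeholders for your settings:

from azure.eventhub.aio import EventHubProducerClient
from fastapi import FastAPI

CONN_STR = "<your-event-hubs-connection-string>"  # placeholder
EVENTHUB_NAME = "<your-event-hub-name>"           # placeholder

app = FastAPI()

@app.on_event("startup")
async def create_producer():
    # runs on the same event loop that will serve requests, so the
    # producer's internal locks are bound to that loop
    app.state.producer = EventHubProducerClient.from_connection_string(
        CONN_STR, eventhub_name=EVENTHUB_NAME
    )

@app.on_event("shutdown")
async def close_producer():
    await app.state.producer.close()

Note that with a long-lived producer like this, the async with producer: block in send_to_eventhub would close the client after the first call; you would drop the context manager and call create_batch/send_batch directly.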

Airflow Error callback "on_failure_callback" is not executing all the lines in the function

I have a problem with the usage of on_failure_callback. I have defined my error callback function to perform two HTTP POST requests, with a logging.error() message between the two. I notice that only one is getting executed. Is there any delay or something that I am missing here?
Please help.
def custom_failure_function(context):
    logging.error("These task instances ahhh")
    to_json = json.loads(t_teams)
    var1 = json.dumps(to_json)
    print(var1)
    r = requests.post('https://myteamschannel/teams', data=var1, verify=False)
    logging.error("hello")
    runID = 'OPERATION_CONTEXT .OCV8.TEST2 alarm_object 193'
    headers = {'Content-Type': 'text/xml'}
    alarmRequest = '<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:oper=\"http://172.19.146.147:7180/TeMIP_WS/services/OPERATION_CONTEXT-alarm_object\"><soapenv:Header xmlns:wsa=\"http://www.w3.org/2005/08/addressing\"><wsu:Timestamp xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\"><wsu:Created>2014-05-22T11:57:38.267Z</wsu:Created><wsu:Expires>2014-05-22T12:02:38.000Z</wsu:Expires></wsu:Timestamp><wsse:Security xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" soapenv:mustUnderstand=\"1\"><wsse:UsernameToken xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\"><wsse:Username>girws</wsse:Username><wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">Temip</wsse:Password></wsse:UsernameToken></wsse:Security></soapenv:Header> <soapenv:Body> <oper:Set_Request xmlns:oper=\"http://172.19.146.147:7180/TeMIP_WS/services/OPERATION_CONTEXT-alarm_object\"><EntitySpec><Natural> '+ runID + '</Natural></EntitySpec><Arguments> <Attribute_Values><Filtering_Type>' + 'AUTOFAIL' + '</Filtering_Type></Attribute_Values></Arguments></oper:Set_Request> </soapenv:Body> </soapenv:Envelope>'
    r = requests.post('http://myerrorappli:7180/TeMIP_WS/services/OPERATION_CONTEXT-alarm_object', header=headers, data=alarmRequest, verify=True)
    logging.error("FAILED TASK")
    logging.error("============================================")
The logs of my Airflow run are below. It stops at the "hello" message and never prints "FAILED TASK".
*** Reading local file: /data/airflow//logs/MOE_TEST_DAG/TeamsTest/2021-10-02T08:24:14.821970+00:00/3.log
[2021-10-02 10:24:36,535] {MOE_TEST.py:132} ERROR - These task instances ahhh
[2021-10-02 10:24:36,987] {MOE_TEST.py:138} ERROR - hello
From your description, it's more likely that there is an issue with requests.post(). Try adding a timeout to the request:
def custom_failure_function(context):
    ...
    try:
        # note: the keyword is headers=, not header=
        r = requests.post('http://myerrorappli:7180/TeMIP_WS/services/OPERATION_CONTEXT-alarm_object',
                          headers=headers, data=alarmRequest, verify=True, timeout=5)
    except requests.Timeout:
        logging.error("request timeout")
    except requests.ConnectionError:
        logging.error("request connection error")
    logging.error("FAILED TASK")
    logging.error("============================================")

Emitting dronekit.io vehicle's attribute changes using flask-socket.io

I'm trying to send data from my dronekit.io vehicle using flask-socket.io. Unfortunately, I got this log:
Starting copter simulator (SITL)
SITL already Downloaded and Extracted.
Ready to boot.
Connecting to vehicle on: tcp:127.0.0.1:5760
>>> APM:Copter V3.3 (d6053245)
>>> Frame: QUAD
>>> Calibrating barometer
>>> Initialising APM...
>>> barometer calibration complete
>>> GROUND START
* Restarting with stat
latitude -35.363261
>>> Exception in attribute handler for location.global_relative_frame
>>> Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
longitude 149.1652299
>>> Exception in attribute handler for location.global_relative_frame
>>> Working outside of request context.
This typically means that you attempted to use functionality that needed
an active HTTP request. Consult the documentation on testing for
information about how to avoid this problem.
Here is my code:
sample.py
from dronekit import connect, VehicleMode
from flask import Flask
from flask_socketio import SocketIO, emit
import dronekit_sitl
import time

sitl = dronekit_sitl.start_default()
connection_string = sitl.connection_string()

print("Connecting to vehicle on: %s" % (connection_string,))
vehicle = connect(connection_string, wait_ready=True)

def arm_and_takeoff(aTargetAltitude):
    print("Basic pre-arm checks")
    while not vehicle.is_armable:
        print(" Waiting for vehicle to initialise...")
        time.sleep(1)
    print("Arming motors")
    vehicle.mode = VehicleMode("GUIDED")
    vehicle.armed = True
    while not vehicle.armed:
        print(" Waiting for arming...")
        time.sleep(1)
    print("Taking off!")
    vehicle.simple_takeoff(aTargetAltitude)
    while True:
        if vehicle.location.global_relative_frame.alt >= aTargetAltitude * 0.95:
            print("Reached target altitude")
            break
        time.sleep(1)

last_latitude = 0.0
last_longitude = 0.0
last_altitude = 0.0

@vehicle.on_attribute('location.global_relative_frame')
def location_callback(self, attr_name, value):
    global last_latitude
    global last_longitude
    global last_altitude
    if round(value.lat, 6) != round(last_latitude, 6):
        last_latitude = value.lat
        print("latitude ", value.lat, "\n")
        emit("latitude", value.lat)
    if round(value.lon, 6) != round(last_longitude, 6):
        last_longitude = value.lon
        print("longitude ", value.lon, "\n")
        emit("longitude", value.lon)
    if round(value.alt) != round(last_altitude):
        last_altitude = value.alt
        print("altitude ", value.alt, "\n")
        emit("altitude", value.alt)

app = Flask(__name__)
socketio = SocketIO(app)

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000, debug=True)
    arm_and_takeoff(20)
I know from the logs that I should not do anything that needs an active HTTP request context inside the vehicle.on_attribute callback, and that I should look for information on how to solve this problem, but I didn't find any info about this error.
Hope you can help me.
Thank you very much,
Raniel
The emit() function by default returns an event back to the active client. If you call this function outside of a request context there is no concept of active client, so you get this error.
You have a couple of options:
1. Indicate the recipient of the event and the namespace that you are using, so that there is no need to look them up in the context. You can do this by adding room and namespace arguments. Use '/' for the namespace if you are using the default namespace.
2. Emit to all clients by adding broadcast=True as an argument, plus the namespace as indicated in #1.
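As a sketch, option 2 applied to the callback from the question (assuming the app, socketio, and vehicle objects defined there):

from flask_socketio import emit

def location_callback(self, attr_name, value):
    global last_latitude
    if round(value.lat, 6) != round(last_latitude, 6):
        last_latitude = value.lat
        # broadcast=True plus an explicit namespace lets emit() work
        # outside of a request context
        emit("latitude", value.lat, namespace="/", broadcast=True)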

How can I access the last message which is sent to a processor in MPI?

I am using a master-worker structure with the Message Passing Interface (MPI), but whenever I call the receive function, instead of receiving the messages in the order of the sending sequence, I need to receive only the last message sent from the master to each processor and ignore the previous ones.
My question is: is there any way to access each processor's buffer and pick the last message in the queue?
No, you can't just peer into the queue; but you can test to see if more messages are present with MPI_Probe or MPI_Iprobe, and while there are more messages present, keep receiving and discarding the old data:
#!/usr/bin/env python
from mpi4py import MPI
import time

def waiter(comm, sendTask):
    # wait for a message to be present
    while not comm.Iprobe(source=sendTask, tag=1):
        time.sleep(1)
    # read messages while more are available, keeping only the newest
    lastMsg = None
    while comm.Iprobe(source=sendTask, tag=1):
        lastMsg = comm.recv(source=sendTask, tag=1)
    if lastMsg is None:
        print("No messages pending")
    else:
        print("Last message was ", lastMsg)
    comm.Barrier()

def sender(comm, waitTask):
    for msgno in range(5):
        print("sending: ", msgno)
        comm.send(msgno, dest=waitTask, tag=1)
    print("sending: ", -1)
    comm.send(-1, dest=waitTask, tag=1)
    comm.Barrier()

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    sendTask = 1
    waitTask = 0
    if comm.rank == waitTask:
        waiter(comm, sendTask)
    elif comm.rank == sendTask:
        sender(comm, waitTask)
    else:
        comm.Barrier()
Running gives
$ mpirun -np 2 ./readall.py
sending: 0
sending: 1
sending: 2
sending: 3
sending: 4
sending: -1
Last message was -1

Python asynchronous processing in existing loop

I'm creating a module for OpenERP in which I have to launch an ongoing process.
OpenERP runs in a continuous loop. My process has to be launched when I click on a button, and it has to keep running without holding up OpenERP's execution.
To simplify it, I have this code:
#!/usr/bin/python
import multiprocessing
import time

def f(name):
    while True:
        try:
            print('hello', name)
            time.sleep(1)
        except KeyboardInterrupt:
            return

if __name__ == "__main__":
    count = 0
    while True:
        count += 1
        print("Pass %d" % count)
        pool = multiprocessing.Pool(1)
        result = pool.apply_async(f, args=['bob'])
        try:
            result.get()
        except KeyboardInterrupt:
            #pass
            print('Interrupted')
        time.sleep(1)
When executed, Pass 1 is printed once, and then an endless series of hello bob is printed until CTRL+C is pressed. Then Pass 2 appears, and so on, as shown below:
Pass 1
hello bob
hello bob
hello bob
^CInterrupted
Pass 2
hello bob
hello bob
hello bob
hello bob
I would like the passes to keep increasing in parallel with the hello bob's.
How do I do that?
What you can do is create a multi-threaded implementation inside the server process, which will run independently of the server's execution thread.
The trick is to fork one thread from the server on the button click and hand the new thread its own copies of the server variables, so that the thread executes independently; at the end of the process you have to commit the transaction yourself, since this code does not run in the main server process. Here is a small example of how you can do it:
import pprint
import pooler
import datetime
import logging
import StringIO    # needed by run() below (Python 2, as used by OpenERP 6.1)
import traceback   # needed by run() below
from threading import Thread

pp = pprint.PrettyPrinter(indent=4)

class myThread(Thread):
    """
    """
    def __init__(self, obj, cr, uid, context=None):
        Thread.__init__(self)
        self.external_id_field = 'id'
        self.obj = obj
        self.cr = cr
        self.uid = uid
        self.context = context or {}
        # module_name is a placeholder for your module's name
        self.logger = logging.getLogger(module_name)
        self.initialize()

    """
    Abstract method to be implemented in the real instance
    """
    def initialize(self):
        """
        init before import,
        usually for the login
        """
        pass

    def init_run(self):
        """
        called after initialize, runs in the thread, not in the main process;
        to use for long initialization operations
        """
        pass

    def run(self):
        """
        this is the entry point to launch the process (thread)
        """
        try:
            self.init_run()
            # Your Code Goes Here
            # TODO Add Business Logic
            self.cr.commit()
        except Exception as err:
            sh = StringIO.StringIO()
            traceback.print_exc(file=sh)
            error = sh.getvalue()
            print(error)
        self.cr.close()
You can add code like this in some module (e.g. the import_base module in 6.1 or trunk).
Next, you can make an extended implementation of this and then create an instance of the service, or you can directly start forking the threads like the following code:
service = myService(self, cr, uid, context)   # myService extends myThread above
service.start()
This starts a background service that runs independently and gives you the freedom to keep using the UI.
Hope this will help you.
Thank you.
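Back to the simplified example in the question: the blocking call there is result.get(), which waits for the worker to finish. A minimal sketch of the non-blocking variant (an addition for illustration, not part of the original answer):

#!/usr/bin/python
import multiprocessing
import time

def f(name):
    while True:
        print('hello', name)
        time.sleep(1)

if __name__ == "__main__":
    # start the worker once and do not call result.get(), so the main
    # loop below keeps running in parallel with the worker's output
    pool = multiprocessing.Pool(1)
    pool.apply_async(f, args=['bob'])
    count = 0
    while True:
        count += 1
        print("Pass %d" % count)
        time.sleep(1)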
