Blocking get aio_pika - asynchronous

I expected
await queue.get()
to be blocking in aio_pika, but even when I don't set the timeout parameter I instantly get an error:
aio_pika.exceptions.QueueEmpty
Any way to get a blocking get in aio_pika?
EDIT:
This is the best I could come up with so far.
while True:
    msg = await q.get(fail=False)
    if msg:
        break
    await asyncio.sleep(1)

The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
You should use a very long timeout, according to the documentation for that method. There does not appear to be another way to do it.
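For reference, the polling workaround from the question's edit can be wrapped into a small helper. This is only a minimal sketch, assuming an aio_pika queue like q above; the helper name and poll interval are arbitrary:
import asyncio

async def blocking_get(queue, poll_interval=1.0):
    # fail=False makes queue.get() return None instead of raising QueueEmpty,
    # so we can keep polling until a message shows up.
    while True:
        message = await queue.get(fail=False)
        if message is not None:
            return message
        await asyncio.sleep(poll_interval)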

Related

msal - InvalidAuthenticationToken error appears arbitrarily

I have been following Stack Overflow for quite some time now. In most cases, the problems I encountered were already mentioned and addressed by people before me. Now I have an issue I have not yet found an applicable solution to. It may result from my humble understanding of the issue and not knowing what I am actually looking for, so I hope you can help me to at least better understand what happens. If additional info is required to make sense of it, please do not hesitate to ask.
Synopsis: One user of a program I built often (not always, interestingly) gets an InvalidAuthenticationToken error from the requests Python package when requesting calendar events with a token generated by the msal package, while none of the other users have any issues at all.
The situation is as follows:
I built a program for a small company which has to read out the events of some of its employees. I wrote it in Python and used the msal and requests packages for the interaction with MS Outlook:
import msal
import requests

class OutlookClient():
    def __init__(self, client_id, authority):
        # client_id and authority are the respective
        # aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee style ids of the app I registered at Azure.
        self.app = msal.PublicClientApplication(
            client_id=client_id,
            client_credential=None,
            authority=msal.authority.AuthorityBuilder(msal.authority.AZURE_PUBLIC, authority)
        )

    def getToken(self, username, pw):
        # credentials of some dummy employee being authenticated to access
        # the employees' calendars
        self.auth = self.app.acquire_token_by_username_password(
            username, pw,
            scopes=["Calendars.Read", "Calendars.Read.Shared", "People.Read"]
        )
        return

    def getCalendar(self, agentCal, startDate, endDate):
        # agentCal is the id of the employee in question, obtained somewhere else.
        graph_data = None
        if 'access_token' in self.auth:
            req = "https://graph.microsoft.com/v1.0/users/" + agentCal + "/calendar/calendarView" + \
                  "?startDateTime=" + startDate.strftime("%Y-%m-%dT02:00") + \
                  "&endDateTime=" + endDate.strftime("%Y-%m-%dT23:00")
            graph_data = requests.get(
                req,
                headers={'Authorization': 'Bearer ' + self.auth['access_token'], 'content-type': 'application/json'}
            ).json()
        try:
            return graph_data['value']
        except KeyError:
            return []
Currently, three employees are testing the program in the field. One of them faces a recurring error which neither of the other users nor I can reproduce. When getCalendar gets called, the request is answered with
graph_data =
{'error':
    {'code': 'InvalidAuthenticationToken',
     'message': 'Access token has expired or is not yet valid.',
     'innerError':
        {'date': '2022-10-27T05:56:39',
         'request-id': 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx',
         'client-request-id': 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'
        }
    }
}
whereas all other users, and sometimes even this specific user, get a list of events as expected. The token, however, looked fine to me:
self.auth =
{'token_type': 'Bearer',
 'scope': 'Calendars.Read Calendars.Read.Shared Calendars.ReadWrite Mail.ReadWrite Mail.Send openid People.Read profile User.Read email',
 'expires_in': 4581,
 'ext_expires_in': 4581,
 'access_token': 'eyJ0eXAiOiJKV1Q...',
 'refresh_token': '0.AREA...',
 'id_token': 'eyJ0eXAiOiJKV1Q...',
 'client_info': 'eyJ1aWQ...',
 'id_token_claims': {...}
}
I have limited opportunity to identify the issue at the user's computer, unfortunately, as they are currently overwhelmed with work and therefore not very responsive. So, before I bother them and myself with many trial and error approaches I hoped you could share some ideas.
The problem persists, as I was told, even when the program is closed and restarted.
I let the program create a log file which stores the relevant variables, such as the token, to see if any pattern arises, but a token is generated every time, independent of whether the request for the calendars is answered correctly or incorrectly.
I thought that maybe the program gets started and the token expires after some time, but in the log file it still seems to be valid.
Sorry, it was as expected; I initially just did not check the correct things.
Indeed, the token had expired and I did not see it. One solution is to check whether a request gets answered properly and, if not, obtain a new token via the refresh token
if 'error' in graph_data:
    self.auth = self.app.acquire_token_by_refresh_token(
        self.auth['refresh_token'],
        scopes  # the same scope list passed to acquire_token_by_username_password above
    )
and request again.
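For illustration, here is a minimal sketch of that retry inside the OutlookClient above. The wrapper name is hypothetical, and the scope list is assumed to match the one requested in getToken:
def getCalendarRetry(self, agentCal, startDate, endDate):
    # Hypothetical wrapper: if the first call yields nothing (e.g. because the
    # token expired), refresh the token and try once more.
    events = self.getCalendar(agentCal, startDate, endDate)
    if not events and 'refresh_token' in self.auth:
        self.auth = self.app.acquire_token_by_refresh_token(
            self.auth['refresh_token'],
            scopes=["Calendars.Read", "Calendars.Read.Shared", "People.Read"]
        )
        events = self.getCalendar(agentCal, startDate, endDate)
    return events
Note that a genuinely empty calendar also triggers the retry here; checking for 'error' in the raw Graph response, as in the snippet above, is more precise.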

Firebase & Flutter (on Android) - MessagingToken is null

I'm running into trouble with some of my app's subscribers. We recently introduced the ability to send out notifications to our users using FirebaseMessaging. I should also mention that the app is on Android only.
Here is a brief section of code that we are running
updateUser(Userx user) async {
  var messagingToken = await FirebaseMessaging.instance.getToken();
  var packageInfo = await PackageInfo.fromPlatform();
  user.messagingToken = messagingToken!;
  user.appVersion = packageInfo.version;
  await Users.updateUser(user);
}
It turns out that the FirebaseMessaging.instance.getToken() sometimes returns null and I can't figure out why that would be the case. The documentation also doesn't say much.
Could this be device specific? Maybe a user-setting to not allow any in-app messages?
A potential workaround is of course to null-check the token and simply accept it being null but I would like to understand the reasons behind that.
I found one more method that could be helpful, but I'm unsure about it.
Call FirebaseMessaging.instance.isSupported() first and act based on the result.

How to get discord bot to handle separate processes/ link to another bot

I am trying to create something of an application bot. I need the bot to be triggered in a generic channel and then continue the application process in a private DM channel with the applicant.
My issue is this: the bot can have only one on_message function defined. I find it extremely complicated (and inefficient) to check every time whether on_message was triggered by a message from a DM channel or from the generic channel. It also makes it difficult to keep track of an applicant's answers. I want to check if the following is possible: have the bot respond to messages from the generic channel as usual, and if it receives an application prompt, start a new subprocess (or bot?) that handles the DMs with the applicant separately.
Is the above possible? If not, is there a better way to handle this?
@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if message.channel.type == discord.ChannelType.private:
        await dm_channel.send("Whats your age?")  ## Question 2
    elif message.channel.type == discord.ChannelType.text:
        if message.content.startswith('$h'):
            member = message.author
            if "apply" in message.content:
                await startApply(member)
            else:
                await message.channel.send('Hello!')
                # await message.reply('Hello!', mention_author=True)

async def startApply(member):
    dm_channel = await member.create_dm()
    await dm_channel.send("Whats your name?")  ## Question 1
I have the above code as of now. I want the startApply function to trigger a new bot/subprocess to handle the DMs with an applicant.
Option 1
Comparatively speaking, a single if check like that is not too much overhead, but there are a few different solutions. First, you could try your hand at slash commands. There is a library built as an extension of the discord.py library for slash commands. You could make a command that only works in DMs and then have it run from there with continuous slash commands; a rough sketch follows below.
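For illustration only: this answer refers to a slash-command extension library, but the sketch below uses discord.py 2.x's built-in app_commands instead, which is a newer alternative to that extension. Command name, messages, and token are assumptions:
import discord
from discord import app_commands

intents = discord.Intents.default()
client = discord.Client(intents=intents)
tree = app_commands.CommandTree(client)

@client.event
async def on_ready():
    # Push the slash command definitions to Discord once the client is ready.
    await tree.sync()

@tree.command(name="apply", description="Start an application in your DMs")
async def apply(interaction: discord.Interaction):
    # Acknowledge in the channel, then continue the conversation in a DM.
    await interaction.response.send_message("Check your DMs!", ephemeral=True)
    dm_channel = await interaction.user.create_dm()
    await dm_channel.send("Whats your name?")  ## Question 1

client.run("YOUR_BOT_TOKEN")  # placeholder token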
Option 2
Use a webhook to start up a new bot. This is most likely more complicated, as you'll have to get a domain or find some sort of free service to catch webhooks. You could use a webhook like this, though, to 'wake up' a bot and have it chat with the user in DMs.
Option 3 (Recommended)
Create functions that handle the text depending on the channel, and keep that if - elif in there. As I said, one if isn't that bad. If you had functions called from your code that handled everything, it should actually be fairly easy to deal with:
@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if message.channel.type == discord.ChannelType.private:
        await respondToPrivate(message)
    elif message.channel.type == discord.ChannelType.text:
        await respondToText(message)
In terms of keeping track of the data, if this is a smaller personal project, MySQL is great and easy to learn. You can have each function store whatever data it needs in the database, so the answers can be looked at later, are safe in case of a bot crash, and are kept out of memory. A sketch of the handler functions is shown below.
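As a rough sketch of how those handler functions might track an applicant's answers, assuming the client and startApply from the question; the answer suggests a database such as MySQL, but an in-memory dict is used here for brevity:
applications = {}  # member id -> list of answers collected so far

QUESTIONS = ["Whats your name?", "Whats your age?"]

async def respondToPrivate(message):
    # Record the answer and ask the next question, or finish the application.
    answers = applications.setdefault(message.author.id, [])
    answers.append(message.content)
    if len(answers) < len(QUESTIONS):
        await message.channel.send(QUESTIONS[len(answers)])
    else:
        await message.channel.send("Thanks, your application is complete!")
        # This is where the answers would be persisted, e.g. to MySQL.

async def respondToText(message):
    if message.content.startswith('$h'):
        if "apply" in message.content:
            await startApply(message.author)  # startApply as defined in the question
        else:
            await message.channel.send('Hello!')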

Why is Puppeteer failing simple tests with: "waiting for function failed: timeout 500ms exceeded"?

While trying to set up some simple end-to-end tests with Jest and Puppeteer, I've found that any test I write will inexplicably fail with a timeout.
Here's a simple example test file, which deviates only slightly from Puppeteer's own example:
import puppeteer from 'puppeteer';

describe('Load Google Puppeteer Test', () => {
    test('Load Google', async () => {
        const browser = await puppeteer.launch({
            headless: false
        });
        const page = await browser.newPage();
        await page.goto('https://google.co.uk');
        await expect(page).toMatch("I'm Feeling Lucky");
        await browser.close();
    });
});
And the response it produces:
TimeoutError: Text not found "I'm Feeling Lucky"
waiting for function failed: timeout 500ms exceeded
I have tried adding in custom timeouts to the goto line, the test clause, amongst other things, all with no effect. Any ideas on what might be causing this? Thanks.
What I would say is happening here is that using toMatch expects text to be displayed. However, in your case, the text you want to verify is text associated with a button.
You should try something like this:
await expect(page).toMatchElement('input[value="I\'m Feeling Lucky"]');
Update 1:
Another possibility (and it's one you've raised yourself) is that the verification is timing out before the page has a chance to load. This is a common issue, from my experience, with executing code in headless mode. It's very fast. Sometimes too fast. Statements can be executed before everything in the UI is ready.
In this case you're better off adding some waitForSelector statements throughout your code as follows:
await page.waitForSelector('input[value="I\'m Feeling Lucky"]');
This will ensure that the selector you want is displayed before carrying on with the next step in your code. By doing this you will make your scripts much more robust while maintaining efficiency - these waits won't slow down your code. They'll simply pause until puppeteer registers the selector you want to interact with / verify as being displayed. Most of the time you won't even notice the pause as it will be so short (I'm talking milliseconds).
But this will make your scripts rock solid while also ensuring that things won't break if the web page is slower to respond for any reason during test execution.
You're probably using the 'expect-puppeteer' package, which provides the toMatch expectation. This is not a small deviation. The weird thing is that your default timeout isn't 30 seconds, which is that package's default; check that.
However, to fix your issue:
await expect(page).toMatch("I'm Feeling Lucky", { timeout: 6000 });
Or set the default timeout explicitly using:
page.setDefaultTimeout(timeout)
See here.

Memory leak while sending response from rebus handler

I saw some very strange behavior in my Rebus handler, which is self-hosted in an exe. Right after sending a response using the bus.send method, the memory consumed by the process goes up. I looked at the object graph using a memory profiler and found that Rebus is holding the response message in serialized format somewhere.
The object graph showed the following hierarchy down to the root.
System.Message --> CachedBodyMessage --> stream
Give me some pointers if anybody is aware of this thing.
I understand that a memory leak is a grave concern, but my belief is that it is unlikely that Rebus should contain a memory leak.
This belief is rooted in the fact that I have been running Windows Service-hosted Rebus endpoints in production for 1.5 years now, and several of them (e.g. the timeout managers) have sometimes been running for several months without being restarted.
I'd like to be absolutely bulletproof sure though, so I'm willing to investigate the issue you're reporting.
You're mentioning "CachedBodyMessage" - judging by the names of fields inside System.Messaging.Message, it sounds like it's something within MSMQ. To try to reproduce your issue, I coded the following test:
[Test, Ignore("Only works in RELEASE mode because otherwise object references are held on to for the duration of the method")]
public void DoesNotLeakMessages()
{
// arrange
const string inputQueueName = "test.leak.input";
var queue = new MsmqMessageQueue(inputQueueName);
disposables.Add(queue);
var body = Encoding.UTF8.GetBytes(new string('*', 32768));
var message = new TransportMessageToSend
{
Headers = new Dictionary<string, object> { { Headers.MessageId, "msg-1" } },
Body = body
};
var weakMessageRef = new WeakReference(message);
var weakBodyRef = new WeakReference(body);
// act
queue.Send(inputQueueName, message, new NoTransaction());
message = null;
body = null;
GC.Collect();
GC.WaitForPendingFinalizers();
// assert
Assert.That(weakMessageRef.IsAlive, Is.False, "Expected the message to have been collected");
Assert.That(weakBodyRef.IsAlive, Is.False, "Expected the body bytes to have been collected");
}
which verifies that the sent transport message is collected as it should (will only do this in RELEASE mode though, because of the way DEBUG mode holds on to object references within scope)
I'll try and run the TimePrinter sample now and leave it running for a while to see if I can reproduce the issue. If you stumble upon more information about e.g. exactly which objects are leaking, it would be very helpful.
Thanks again for taking the time to report your worries to me :)
Followup:
I've modified the TimePrinter sample so that it sends 50 msg/s and includes a 64 KB random string payload with each message, and I've tracked the memory usage for almost four hours now. As you can see, it does not look like memory is being leaked.
I'll leave it running the rest of the day, just to be sure.
Maybe you can tell me some more about why you suspected there was a memory leak in the first place?
Update:
As you can see from the trace, it has now been running for 7 hours, and thus more than 1,200,000 messages containing more than 70 GB of data have been sent and consumed by the same process. If cached message bodies were leaking, I am pretty sure we would have been able to see something rising on the graph.
