Not able to create async connection using SelectConnection from RabbitMQ - asynchronous

Hello everyone,
I am using the code block below to create an async connection and then open a channel on it. I am working with pika version 1.2.1.
URL = "http://guest:guest#0.0.0.0:15672"
connection = pika.SelectConnection(
    pika.URLParameters(URL)
)
nonBlockingChan = connection.channel()
When I call channel(), I get the error below:
pika.exceptions.ConnectionWrongStateError: Channel allocation requires an open connection: - <SelectConnection INIT transport=None params=>
The server is running correctly on my local machine and I can open the RabbitMQ web UI. Can someone please help me resolve this issue?

Please refer to the complete examples here:
https://github.com/pika/pika/tree/main/examples
https://github.com/pika/pika/blob/main/examples/asynchronous_consumer_example.py
https://github.com/pika/pika/blob/main/examples/asynchronous_publisher_example.py
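As the linked examples show, SelectConnection is callback-driven: the connection only reaches the OPEN state after its I/O loop has started, so the channel has to be opened from the on_open_callback rather than right after constructing the connection. A minimal sketch, assuming a default broker on localhost (note the amqp:// scheme and port 5672; 15672 is only the management UI):

import pika

# Assumed broker URL; 5672 is the AMQP port, 15672 only serves the management UI.
URL = "amqp://guest:guest@localhost:5672/%2F"

def on_channel_open(channel):
    print("Channel opened:", channel)
    # Declare queues and start publishing/consuming from here.

def on_connection_open(connection):
    # Only now is the connection open, so a channel can be allocated.
    connection.channel(on_open_callback=on_channel_open)

connection = pika.SelectConnection(
    pika.URLParameters(URL),
    on_open_callback=on_connection_open,
)

try:
    # The callbacks above fire from inside this I/O loop.
    connection.ioloop.start()
except KeyboardInterrupt:
    connection.close()
    connection.ioloop.start()  # run the loop until the close completes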
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Related

Missing videoTrack in a multitrack stream in Ant media server 2.4.1

We have a multitrack web conference implementation using AMS version 2.4.1. It's working great for our use case, except in one scenario: when there are N (< 3) users and they turn on their cameras simultaneously, a few remote users are not rendered because we don't receive the video tracks for those users in newStreamAvailable. We only receive the audio track for those users. We are able to reproduce this quite frequently.
As a backup, I am trying to poll AMS using getTrackList with the main track Id to get all available streams, but I am not getting any trackList message back.
var jsCmd = {
    command: "getTrackList",
    streamId: streamId, // this is roomId or main track id
    token: token
};
Any insight would be helpful.
Thanks,
We were able to resolve the issue; posting here to help anyone who might be facing a similar problem.
With push notifications from the server, we can run into trouble when a push operation doesn't succeed for some reason. In that case it's better to have a backup plan: pull from the server and sync.
Ant Media Server suggests polling the server periodically for the room info. The server responds with the active streams, and the application should synchronize against that list.
For reference, see the WebRTC WebSocket messaging reference: https://resources.antmedia.io/docs/webrtc-websocket-messaging-reference
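As a rough illustration of that pull-and-sync fallback, here is a minimal sketch in Python using the websockets library. The WebSocket URL is a placeholder for your AMS application endpoint, the getTrackList payload mirrors the jsCmd from the question, and the response handling is an assumption to be adapted to the messages documented in the reference above.

import asyncio
import json
import websockets  # pip install websockets

WS_URL = "wss://your-ams-host:5443/WebRTCAppEE/websocket"  # placeholder endpoint
POLL_INTERVAL = 5  # seconds between polls

async def poll_track_list(stream_id, token):
    async with websockets.connect(WS_URL) as ws:
        while True:
            # Same command and fields as the jsCmd in the question.
            await ws.send(json.dumps({
                "command": "getTrackList",
                "streamId": stream_id,  # roomId / main track id
                "token": token,
            }))
            reply = json.loads(await ws.recv())
            # Assumption: the server answers with a trackList message;
            # reconcile the tracks you are rendering against it here.
            print("server reply:", reply)
            await asyncio.sleep(POLL_INTERVAL)

# asyncio.run(poll_track_list("room1", "token"))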

EmrCreateJobFlowOperator returns "ERROR - The conn_id `emr_default` isn't defined"

I am using Airflow on EKS for a project, with the EmrCreateJobFlowOperator to create a new EMR cluster. When the DAG runs, the step fails and I get this error:
{taskinstance.py:1150} ERROR - The conn_id emr_default isn't defined
Here is my step in the code:
job_flow_creator = EmrCreateJobFlowOperator(
    task_id='create_job_flow',
    job_flow_overrides=JOB_FLOW_OVERRIDES,
    aws_conn_id='aws_default',
    emr_conn_id='emr_default',
    dag=dag
)
I am using Airflow 1.10.11, and I know this is probably not an issue in later versions, but it's difficult for me to upgrade at the moment. I found other threads that advise going into Airflow Connections and adding a new connection, but I don't know how to set that connection up and I cannot find resources on the subject.
Any help is appreciated, thanks!
Yes. You should create a connection named emr_default of the right type for the operator (you have to pick the right one from the list).
Here is detailed documentation on what to do. This is the Airflow 1.10.11 documentation; if you need any other Airflow resources and docs you can always go there and use the "Search" functionality. I got to that page by choosing the Airflow 1.10.11 version and searching for "connection":
https://airflow.apache.org/docs/apache-airflow/1.10.11/howto/connection/index.html?highlight=connections
Try the following emr_default connection, which works for me:
Connection Id: emr_default
Connection Type: Amazon Elastic MapReduce
Login: access_key
Password: secret_key
Extra: {"region_name": "eu-west-3"}
Replace "eu-west-3" with your region

Could not create internal topics - Stream-thread exception

I am trying to execute a simple WordCount stream application, but I get the error "Could not create internal topics - Stream-thread exception".
I have seen a similar thread, but that seems to be more of a network issue.
There is no security enabled on the Kafka broker.
Only one broker is configured, and the issue still occurs.
Can someone let me know how to fix this?
Clean up your temporary Kafka internal topics.
Run the --list command against Kafka to see all topics starting with your application name and ending with -changelog or -repartition, then manually delete them.
This one worked for me.
Also, check your delete.topic.enable setting so the deletion actually happens. It was not enabled by default until 1.0.0 - see https://issues.apache.org/jira/browse/KAFKA-5384
I connected to Kafka using Kafka Tool and deleted them manually.
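A minimal sketch of the same cleanup done programmatically, assuming the confluent-kafka Python client and a broker on localhost:9092; the "wordcount" application id below is a hypothetical name, so substitute your own:

from confluent_kafka.admin import AdminClient

BOOTSTRAP = "localhost:9092"   # assumed broker address
APP_ID = "wordcount"           # hypothetical Streams application.id

admin = AdminClient({"bootstrap.servers": BOOTSTRAP})

# Streams-internal topics look like <application.id>-...-changelog / -repartition.
metadata = admin.list_topics(timeout=10)
internal = [
    name for name in metadata.topics
    if name.startswith(APP_ID)
    and (name.endswith("-changelog") or name.endswith("-repartition"))
]

# Deletion only takes effect if delete.topic.enable=true on the broker.
if internal:
    for name, future in admin.delete_topics(internal, operation_timeout=30).items():
        try:
            future.result()
            print("deleted", name)
        except Exception as exc:
            print("failed to delete", name, exc)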

how to use the example of scrapy-redis

I have read the example of scrapy-redis but still don't quite understand how to use it.
I have run the spider named dmoz and it works well. But when I start another spider named mycrawler_redis, it just gets nothing.
Besides, I'm quite confused about how the request queue is set up. I didn't find any piece of code in the example project which illustrates the request queue setting.
And if the spiders on different machines want to share the same request queue, how can I get that done? It seems that I should first make the slave machine connect to the master machine's Redis, but I'm not sure where to put the relevant code: in spider.py, or should I just type it on the command line?
I'm quite new to scrapy-redis and any help would be appreciated!
If the example spider is working and your custom one isn't, there must be something that you have done wrong. Update your question with the code, including all relevant parts, so we can see what went wrong.
Besides I'm quite confused about how the request queue is set. I didn't find any piece of code in the example-project which illustrate the request queue setting.
As far as your spider is concerned, this is done by appropriate project settings, for example if you want FIFO:
# Enables scheduling storing requests queue in redis.
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Don't cleanup redis queues, allows to pause/resume crawls.
SCHEDULER_PERSIST = True
# Schedule requests using a queue (FIFO).
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.SpiderQueue'
As far as the implementation goes, queuing is done via RedisSpider, which your spider must inherit from. You can find the code for enqueuing requests here: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/scheduler.py#L73
As for the connection, you don't need to manually connect to the redis machine; you just specify the host and port information in the settings:
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
And the connection is configured in connection.py: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/connection.py
An example of usage can be found here: https://github.com/darkrho/scrapy-redis/blob/a295b1854e3c3d1fddcd02ffd89ff30a6bea776f/scrapy_redis/pipelines.py#L17
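To make the RedisSpider point concrete, here is a minimal sketch of a Redis-fed spider; the class name and redis_key are hypothetical, and the import path is the one used by recent scrapy-redis releases, so check it against the version you actually run.

from scrapy_redis.spiders import RedisSpider

class MyCrawler(RedisSpider):
    """Reads its start URLs from a shared Redis list instead of start_urls."""
    name = "mycrawler_redis"
    redis_key = "mycrawler:start_urls"  # every machine pops requests from this list

    def parse(self, response):
        # Trivial example item; replace with your own extraction logic.
        yield {"url": response.url, "title": response.css("title::text").get()}

With the SCHEDULER, SCHEDULER_QUEUE_CLASS, REDIS_HOST and REDIS_PORT settings above in each machine's settings.py, every crawler process talks to the same Redis instance, so pushing a URL onto mycrawler:start_urls (for example with redis-cli lpush) feeds one shared queue for all of them.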

TB.Socket error with OpenTok WebRTC on Meteor

Got a tough one here.
So, we're trying to upgrade an OpenTok video chat application from Flash to WebRTC, and are running into socket errors as we try to implement the 'helloworld' WebRTC sample. The errors occur when we try to do a session.connect() call, not when we request a sessionId or a token. And the error basically looks like this (session_id and partner_id anonymized):
SessionInfo Response:
#document
<sessions>
  <Session>
    <session_id>asfgdagbasdfovnwoinvcwoinvoiandfvoinvoidnofgfdfgfgivniodfnv-sdfgdfgdfg-</session_id>
    <partner_id>1234567890</partner_id>
    <create_dt>Sun Sep 01 12:00:45 PDT 2013</create_dt>
    <session_status>INFLIGHT</session_status>
    <media_server_url>…</media_server_url>
    <p2p_server_url>rtmfp://p2p101-oak.tokbox.com:1945/multicast</p2p_server_url>
    <media_server_hostname>oms409-oak.tokbox.com</media_server_hostname>
    <messaging_server_url>oms409-oak.tokbox.com</messaging_server_url>
  </Session>
</sessions>
connectToMessenger
WebSocket error: undefined
TB.Socket Error :: The socket to oms409-oak.tokbox.com received an error: Unknown Error
TB.exception :: title: Connect Failed (1006) msg: TB.Socket Error :: The socket to oms409-oak.tokbox.com received an error: Unknown Error
Any ideas on what might be causing this? We're testing on the latest version of Chrome 29, and it happens both on localhost and on our production servers, so it doesn't seem to be a firewall issue. The one thing I can think of is that we're running on a Meteor/Node.js framework, which has websockets enabled by default. The code is pretty much the boilerplate helloworld sample from the following:
http://tokbox.com/opentok/tutorials/hello-world/js/demo.html
We get the sessionId and token successfully, it's just that the session.connect() doesn't ever happen (and, thus, we can't ever get our connection object or subscribe to the event listeners).
Any ideas on how we might go about debugging this issue?
Thanks in advance for any help!
abigail
In typical fashion, after I spent two days on a bug and got so frustrated that I posted a question to StackOverflow, I figured it out an hour later.
Long story short, the OpenTok account had an 'enable WebRTC' option that wasn't set; it was an account administrator issue. The moral: make sure devs have access to the accounts the business folks have!
I think you might be using the Flash JS library instead of the WebRTC library. If you joined your session using Flash, it will not work with WebRTC.
Here's the webrtc library:
<script src='https://swww.tokbox.com/webrtc/v2.0/js/TB.min.js'></script>
Here is the flash library:
<script src='https://swww.tokbox.com/v1.1/js/TB.min.js'></script>
Think of webrtc and flash as two separate products, they do not interoperate.
