Has anyone actually used the official Python Elasticsearch client in the context of flask-socketio (with either eventlet or gevent plus the monkey patches)?
The official Python ES client is thread-safe, but that is not much help in a single-threaded async environment like flask-socketio (+ eventlet/gevent).
Would setting up a pool like eventlet has for DB connections (eventlet/db_pool) be the way to go here?
Thanks -
Current Model: One jar file has an embedded Jetty server which runs on one port. I use continuations to suspend HTTP threads and perform some actions on timeout.
Jetty Version: 9.2.14
Objective Model: Multiple instances of the jar must run on the same port (different contexts). For this, I am planning to use Jetty as the web container. I still need to be able to use continuations internally for suspending requests. But since the Jetty server starting the jars is not inside the jar, can I still use continuations?
I think I am going fundamentally wrong somewhere but am not able to grasp where. I am open to upgrading Jetty. I read about Jetty 9.3 having an inherent suspend ability in each I/O request, but I did not find enough material in the documentation to understand it. Can someone help, either with an answer or by pointing me in the right direction?
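For what it's worth, Jetty 9 implements the standard Servlet 3.x async API (AsyncContext), which supersedes Jetty's proprietary Continuation class and behaves the same whether Jetty is embedded or used as an external web container. A minimal sketch of the suspend-and-timeout pattern with that API (the servlet name, URL pattern, and 30-second timeout are illustrative, not from the question):

    import java.io.IOException;
    import javax.servlet.AsyncContext;
    import javax.servlet.AsyncEvent;
    import javax.servlet.AsyncListener;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Suspends the request and performs an action on timeout, roughly what
    // Jetty Continuations were used for, but portable across containers.
    @WebServlet(urlPatterns = "/suspend", asyncSupported = true)
    public class SuspendingServlet extends HttpServlet {
        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext ctx = req.startAsync(); // releases the container thread
            ctx.setTimeout(30_000);              // 30s; illustrative value
            ctx.addListener(new AsyncListener() {
                @Override public void onTimeout(AsyncEvent e) throws IOException {
                    // timeout actions go here, then finish the response
                    e.getAsyncContext().getResponse().getWriter().println("timed out");
                    e.getAsyncContext().complete();
                }
                @Override public void onComplete(AsyncEvent e) {}
                @Override public void onError(AsyncEvent e) {}
                @Override public void onStartAsync(AsyncEvent e) {}
            });
            // Some other thread can later call ctx.complete() (or ctx.dispatch())
            // to resume the request before the timeout fires.
        }
    }

Since this is part of the servlet contract rather than Jetty's embedded API, each war deployed to a shared Jetty instance can still suspend its own requests this way.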
This is my scenario:
1) A REST-based web service (say X) takes in requests and puts them into an ActiveMQ queue.
2) There is a listener on the other side of the queue that reads and processes the messages. This is async.
I decided to go with ActiveMQ.
But I am trying to find a solution where the queue and the queue listeners are scalable.
1) I have many instances of X running, hence there are multiple producers to the queue.
2) Ordering is important to me.
3) Since my REST service is sessionless, I don't have a way to tag a bunch of requests with the same message ID.
4) Now if I use a single queue, it works fine.
But I want to scale it up and use multiple queues and multiple queue consumers without compromising on ordering.
Can someone suggest a solution to this problem?
Thanks much,
To achieve ordering of messages, there are two mechanisms defined in ActiveMQ:
1) Message Groups, based on JMSXGroupID
2) Exclusive Consumer
Message Groups are more useful than the Exclusive Consumer: the Exclusive Consumer uses only one consumer at a time (connecting to another consumer only if that one fails), whereas Message Groups preserve ordering within each group while still allowing different groups to be consumed in parallel.
You can read the ActiveMQ documentation on this here:
http://activemq.apache.org/how-do-i-preserve-order-of-messages.html
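A rough sketch of the producer side (the broker URL, queue name, and group id below are placeholders): setting the standard JMSXGroupID property is all that is needed, and the broker then routes every message of a group, in order, to a single consumer:

    import javax.jms.Connection;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class GroupedProducer {
        public static void main(String[] args) throws Exception {
            ActiveMQConnectionFactory factory =
                    new ActiveMQConnectionFactory("tcp://localhost:61616"); // placeholder broker
            Connection connection = factory.createConnection();
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("ORDERS.IN"); // placeholder queue name
            MessageProducer producer = session.createProducer(queue);

            TextMessage message = session.createTextMessage("payload");
            // Everything with the same group id goes, in order, to one consumer.
            message.setStringProperty("JMSXGroupID", "customer-42");
            producer.send(message);

            connection.close();
        }
    }

The consumers need no special code; if the consumer owning a group dies, the broker reassigns that group to another consumer.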
Hope this helps!
Good luck!
We can use the following kind of connection URL to utilize ActiveMQ failover and load distribution:
failover://(tcp://192.nnn.nn.nn:61616,tcp://192.nnn.nn.nn:61616)?randomize=false
randomize=true makes clients shuffle between the two ActiveMQ brokers in active/active mode (spreading load), rather than connecting to them in strict failover order.
A complete reference for this can be found at the following Apache site:
http://activemq.apache.org/failover-transport-reference.html
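For example (the broker addresses below are placeholders), the failover URL is handed straight to the connection factory, and the client library handles broker selection and reconnection transparently:

    import javax.jms.Connection;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class FailoverClient {
        public static void main(String[] args) throws Exception {
            // randomize=false: always try the first broker, fail over to the second;
            // randomize=true: pick a broker at random, spreading client load.
            ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                    "failover:(tcp://192.0.2.10:61616,tcp://192.0.2.11:61616)?randomize=false");
            Connection connection = factory.createConnection();
            connection.start();
            // ... create sessions, producers, and consumers as usual ...
            connection.close();
        }
    }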
Still, a high-availability (i.e., cluster) configuration makes things more stable for your app, although Apache needs to advance ActiveMQ's high availability further before things work really smoothly.
Because of a KahaDB restriction, load-balancing/fault-tolerant configuration is limited. The present Apache ActiveMQ high-availability configurations are described at the following link:
http://activemq.apache.org/clustering.html
Since KahaDB imposes a file-lock restriction, the following alternative ways of configuration can be used:
1) Shared File System Master Slave: a shared file system such as a SAN
http://activemq.apache.org/shared-file-system-master-slave.html
2) JDBC Master Slave: a shared database
http://activemq.apache.org/jdbc-master-slave.html
3) Replicated LevelDB Store: a ZooKeeper server
http://activemq.apache.org/replicated-leveldb-store.html
Over and above this, with JCA connectors, ActiveMQ can be plugged into application servers such as JBoss, WebLogic, WebSphere, Geronimo, and GlassFish as a kind of resource adapter. And with products like Apache Camel (Karaf) or JBoss Fuse ESB, HA and clustering of ActiveMQ can also be achieved.
When Nginx is used as a reverse proxy, so that the client connects to Nginx and Nginx load-balances or otherwise redirects the request to a backend worker (via CGI etc.), what is it called, and how is it implemented, when the worker responds directly to the client, bypassing Nginx?
My question comes from two places: a) erlangonxen uses Nginx and a "spawner" app to launch a huge volume of instant-on workers; however, the response still passes through the spawner (an expensive step); b) I recently scanned an article that described this solution, but I can no longer find it.
You got your jargon mixed up, I believe, so I'm going to ignore the proxy bit and assume this is about CGI. In that case you should be looking at FastCGI solutions. Nginx has support for FastCGI built in.
This spawner, as you call it, is meant to provide concurrency so that multiple CGI requests can be handled in parallel, without having to spawn an interpreter for each request. Instead, the workers get spawned once and ideally live forever.
If the selection of an available worker really is a performance bottleneck, then the implementation of this FastCGI daemon is severely lacking and you should look for a better solution. Worker selection should take a fraction of the time of the worker's actual job.
I'm not sure if it's a jargon thing. The good news (for me, anyway) is that I had read the articles and seen the diagrams; I just could not remember where. So, the reverse proxy notwithstanding, I was looking for "direct server return" (DSR) and the spawner from the erlangonxen project.
I'm not certain whether or not these two technologies are going to work together. DSR seems to have fallen out of favor, and I'll probably not use it at all, although in the given architecture it would seem to make sense to try: a) it limits the total number of trips and sockets; b) it really allows some functions, like gzip, to be distributed nicely.
Anyway, "found it".
I have a project which needs to make a TCP connection to an external source. Each worker thread will be sending messages to this external service.
I'm wondering how I can do this without having a connection brought up and torn down for every request. I'm pretty sure the pymongo module does something similar, but I can't find any documentation on it. Would it be possible to set up some kind of thread-safe queue and have a separate thread consume that queue? I understand I could probably use Gearman for this, but I'd like to avoid having another moving part in the system.
uWSGI has a thread-safe, process-shared queueing system (http://projects.unbit.it/uwsgi/wiki/QueueFramework), but are you sure the simple Python threading.Queue class is not enough?
I normally work in ASP.NET. But recently I was testing Google App Engine and I found Task Queues: very interesting and powerful. Does anyone know of a similar service for ASP.NET?
I know MSMQ, but it's not what I need. I need something like the GAE Task Queue: I put a URL in a queue and the URL is triggered (based on the queue config).
TyphoonAE uses RabbitMQ to simulate the Task Queue, and RabbitMQ provides a .NET client.
http://www.rabbitmq.com
You could try Quartz.NET maybe - http://quartznet.sourceforge.net/
Apache ActiveMQ from version 5.4 has a persistent scheduler built into the message broker.
http://activemq.apache.org/delay-and-schedule-message-delivery.html
ActiveMQ supports a variety of cross-language clients and protocols: Java, C, C++, C#, Ruby, Perl, Python, and PHP.
You can set a message to wait with an initial delay and then repeat delivery 10 times, waiting 10 seconds between each redelivery.
You can also use a CRON expression to schedule a message.
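A small sketch of both styles, assuming an already-open Session and MessageProducer (the delay value is arbitrary; the property-name constants come from org.apache.activemq.ScheduledMessage):

    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import org.apache.activemq.ScheduledMessage;

    public class ScheduledSendSketch {
        // Initial delay, then 10 repeat deliveries spaced 10 seconds apart.
        static void sendDelayedRepeating(Session session, MessageProducer producer) throws Exception {
            TextMessage message = session.createTextMessage("payload");
            message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 30_000);  // wait 30s before first delivery
            message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_PERIOD, 10_000); // 10s between redeliveries
            message.setIntProperty(ScheduledMessage.AMQ_SCHEDULED_REPEAT, 10);      // redeliver 10 more times
            producer.send(message);
        }

        // CRON-style scheduling: deliver at the top of every hour.
        static void sendCron(Session session, MessageProducer producer) throws Exception {
            TextMessage message = session.createTextMessage("payload");
            message.setStringProperty(ScheduledMessage.AMQ_SCHEDULED_CRON, "0 * * * *");
            producer.send(message);
        }
    }

Both variants require the broker to be started with schedulerSupport="true" on the broker element for the scheduler store to be active.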