I am new to learning QRemoteObjects. I understand the usage of a direct connection with a dynamic replica, but I don't understand connections to remote nodes using the registry mechanism. I'm confused about the relationship between QRemoteObjectRegistryHost, QRemoteObjectHost, QRemoteObjectNode and QRemoteObjectReplica. Can anyone give me a simple explanation?
In the registry method, the server uses code like this:
regNode = QRemoteObjectRegistryHost(QUrl('local:registry'))
srcNode = QRemoteObjectHost(QUrl('local:replica'), QUrl('local:registry'))
# will this create two local socket servers?
and the client uses:
repNode = QRemoteObjectNode(QUrl('local:registry'))
What's the difference between QUrl('local:registry') and QUrl('local:replica')? And I think QRemoteObjectHost(QUrl('local:replica'), QUrl('local:registry')) is redundant in this method.
In the example you provided, the advantage is not visible, and that is why it looks redundant.
In some applications there is a need for several sources, and it would be cumbersome for every replica to connect to each source individually. The task of QRemoteObjectRegistryHost is therefore to provide a single connection point that several sources register with, and through which the replicas connect.
For example, the following scheme shows its use:
       ┌-------------------┐   ┌-------------------┐
       | QRemoteObjectHost |   | QRemoteObjectHost |
       └---------┬---------┘   └---------┬---------┘
                 |                       |
                 |                       |
     ┌-----------┴-----------------------┴-----------┐
     |           QRemoteObjectRegistryHost           |
     └---┬-------------------┬-------------------┬---┘
         |                   |                   |
         |                   |                   |
┌--------┴--------┐ ┌--------┴--------┐ ┌--------┴--------┐
|QRemoteObjectNode| |QRemoteObjectNode| |QRemoteObjectNode|
└-----------------┘ └-----------------┘ └-----------------┘
Multiple sources can be exposed through a QRemoteObjectHost, and each QRemoteObjectHost registers itself with the QRemoteObjectRegistryHost, so that any QRemoteObjectNode can obtain replicas of those sources simply by connecting to the registry.
To illustrate the functionality I created the following example:
├── register.py
├── replica.py
└── source.py
register.py
from PyQt5 import QtCore, QtRemoteObjects

if __name__ == "__main__":
    import sys

    app = QtCore.QCoreApplication(sys.argv)

    regNode = QtRemoteObjects.QRemoteObjectRegistryHost(
        QtCore.QUrl("tcp://127.0.0.1:5557")
    )

    sys.exit(app.exec_())
replica.py
from functools import partial
import sys

from PyQt5 import QtCore, QtRemoteObjects

if __name__ == "__main__":
    app = QtCore.QCoreApplication(sys.argv)

    node = QtRemoteObjects.QRemoteObjectNode(QtCore.QUrl("tcp://127.0.0.1:5557"))

    replicas = []

    def on_remoteObjectAdded(info):
        name, url = info
        print("object added", name, url)
        replica = node.acquireDynamic(name)
        wrapper = partial(on_initialized, replica, name)
        replica.initialized.connect(wrapper)
        replicas.append(replica)

    node.registry().remoteObjectAdded.connect(on_remoteObjectAdded)

    def on_initialized(replica, name):
        wrapper = partial(print, name)
        replica.dataChanged.connect(wrapper)

    sys.exit(app.exec_())
source.py
import sys

from PyQt5 import QtCore, QtRemoteObjects

class Node(QtCore.QObject):
    dataChanged = QtCore.pyqtSignal(str)

if __name__ == "__main__":
    app = QtCore.QCoreApplication(sys.argv)

    parser = QtCore.QCommandLineParser()
    parser.addPositionalArgument("url", "Host URL different to tcp://127.0.0.1:5557")
    parser.addPositionalArgument("name", "Name of node")
    parser.process(app)

    args = parser.positionalArguments()
    if len(args) != 2:
        print("only url and name is required")
        sys.exit(-1)

    url, name = args

    if QtCore.QUrl("tcp://127.0.0.1:5557") == QtCore.QUrl(url):
        print("url different tcp://127.0.0.1:5557")
        sys.exit(-1)

    node = Node()

    srcNode = QtRemoteObjects.QRemoteObjectHost(
        QtCore.QUrl(url), QtCore.QUrl("tcp://127.0.0.1:5557")
    )
    srcNode.enableRemoting(node, name)

    def on_timeout():
        data = QtCore.QDateTime.currentDateTime().toString()
        node.dataChanged.emit(data)

    timer = QtCore.QTimer(interval=1000, timeout=on_timeout)
    timer.start()

    sys.exit(app.exec_())
Then run the following commands in separate CMD windows/terminals:
python register.py
python replica.py
python source.py tcp://127.0.0.1:5558 node1
python source.py tcp://127.0.0.1:5559 node2
And in the CMD/terminal console of replica.py you will see the following:
# ...
node1 Tue Jan 7 22:32:09 2020
node2 Tue Jan 7 22:32:09 2020
node1 Tue Jan 7 22:32:10 2020
node2 Tue Jan 7 22:32:10 2020
# ...
I want to take test case results from Robot Framework runs and import those results into other tools (ElasticSearch, ALM tools, etc.).
To that end, I would like to be able to generate a text file with one line per test. Here is an example line, pipe delimited:
testcase name | time run | duration | status
There are other fields I would add, but those are the basic ones. Any help appreciated. I have been looking at robot.result (http://robot-framework.readthedocs.io/en/3.0.2/autodoc/robot.result.html) but haven't figured it out yet. If/when I do I will post an answer here.
Thanks,
The output.xml file is very easy to parse with normal XML parsing libraries.
Here's a quick example:
from __future__ import print_function
import xml.etree.ElementTree as ET
from datetime import datetime

def get_robot_results(filepath):
    results = []
    with open(filepath, "r") as f:
        xml = ET.parse(f)
        root = xml.getroot()
        if root.tag != "robot":
            raise Exception("expect root tag 'robot', got '%s'" % root.tag)
        for suite_node in root.findall("suite"):
            for test_node in suite_node.findall("test"):
                status_node = test_node.find("status")
                name = test_node.attrib["name"]
                status = status_node.attrib["status"]
                start = status_node.attrib["starttime"]
                end = status_node.attrib["endtime"]
                start_time = datetime.strptime(start, '%Y%m%d %H:%M:%S.%f')
                end_time = datetime.strptime(end, '%Y%m%d %H:%M:%S.%f')
                elapsed = str(end_time - start_time)
                results.append([name, start, elapsed, status])
    return results

if __name__ == "__main__":
    results = get_robot_results("output.xml")
    for row in results:
        print(" | ".join(row))
Bryan is right that it is easy to parse Robot's output.xml using standard XML parsing modules. Alternatively, you can use Robot Framework's own result-parsing modules and the model you get from them:
from robot.api import ExecutionResult, SuiteVisitor

class PrintTestInfo(SuiteVisitor):
    def visit_test(self, test):
        print('{} | {} | {} | {}'.format(test.name, test.starttime,
                                         test.elapsedtime, test.status))

result = ExecutionResult('output.xml')
result.suite.visit(PrintTestInfo())
For more details about the APIs used above see http://robot-framework.readthedocs.io/.
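Since the goal was a pipe-delimited text file rather than console output, the same visitor approach can collect the rows and write them to a file. A minimal sketch (results.txt is just an example file name):

from robot.api import ExecutionResult, SuiteVisitor

class CollectTestInfo(SuiteVisitor):
    """Collects one pipe-delimited row per test; nested suites are visited too."""
    def __init__(self):
        self.rows = []

    def visit_test(self, test):
        self.rows.append('{} | {} | {} | {}'.format(
            test.name, test.starttime, test.elapsedtime, test.status))

collector = CollectTestInfo()
ExecutionResult('output.xml').suite.visit(collector)

# results.txt is a hypothetical output file name
with open('results.txt', 'w') as f:
    f.write('\n'.join(collector.rows) + '\n')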
I have a Flask Telegram bot built with pyTelegramBotAPI and deployed on Heroku. I need the start message to show the current length of a list; the list is refreshed every 5 minutes in gettinglist.py. I can't find my mistake, please help.
bot.py
import os

import config
import gettinglist
from gettinglist import getting_list
import telebot
from flask import Flask, request
from threading import Thread

app = Flask(__name__)

def app_run():
    app.run(host="0.0.0.0", port=os.environ.get('PORT', 80))

msg_start = """
Lenght of list now: %d
""" % config.LIST_LENGHT

application_thread = Thread(target=app_run)
getting_list_thread = Thread(target=getting_list)

bot = telebot.TeleBot("<MY_BOT_TOKEN>")

@bot.message_handler(commands=['start'])
def start(m):
    cid = m.chat.id
    bot.send_message(cid, msg_start, parse_mode='html')

@app.route("/bot", methods=['POST'])
def getMessage():
    bot.process_new_updates([telebot.types.Update.de_json(request.stream.read().decode("utf-8"))])
    return "ok", 200

@app.route("/")
def webhook():
    bot.remove_webhook()
    bot.set_webhook(url="<HEROKU_APP_URL>")
    return "ok", 200

if __name__ == '__main__':
    application_thread.start()
    getting_list_thread.start()
gettinglist.py
import config
from time import sleep

LIST_LENGHT = 0
LIST = []

def getting_list():
    while True:
        global LIST
        global LIST_LENGHT
        LIST = [num for num in range(0, 100)]
        config.LIST_LENGHT = len(LIST)
        return LIST
        sleep(300)
config.py
LIST_LENGHT = 0
I am trying to list all the Keystone projects present on my setup. The snippet I am using displays only a few of them.
CODE-1:
from keystoneclient.auth.identity import v3
from keystoneclient import session
from keystoneclient.v3 import client as ksclient3
auth_url = "http://192.16.66.10:5000/v3"
token = '0112efcb75e9411b965b423edb321827'
auth = v3.Token(auth_url=auth_url, token=token, unscoped=True)
sess = session.Session(auth=auth)
ks = ksclient3.Client(session=sess);
project_list = [t.name for t in ks.projects.list(user=sess.get_user_id())]
print project_list
OUTPUT
['A', 'B', 'C']
CODE-2
from keystoneclient import session
from keystoneclient.v3 import client
from keystoneclient.auth.identity import v3
auth = v3.Password(auth_url='http://127.0.0.1:5000/v3',user_id='idm',password='idm',project_id='2545070293684905b9623095768b019d')
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)
keystone.users.list()
OUTPUT
keystoneclient.exceptions.Unauthorized: The request you have made requires authentication. (HTTP 401)
EXPECTED OUTPUT
openstack project list
+----------------------------------+----------------+
| ID | Name |
+----------------------------------+----------------+
| 3efabc809570458180b2e20ce099ef1a | A |
| 546636e4532246f9a440e44deaad82d6 | B |
| 63494b0b0e164e7e82281c94efc709e4 | C |
| 71dbcec67a3e49979a9a9f519409785d | D |
| 8699a715c6834ac1a42350e593879695 | E |
| af88b7d76ab44e13ba73b80b39d2644b | F |
| b431f905a52448298980a0fe0b7751be | G |
| ba3053eb5c534052914f133aa065865d | H |
+----------------------------------+----------------+
Things I want to understand:
Why does CODE-1 display only a few of the projects?
Why does CODE-2 fail?
How do I get the Keystone project IDs from the keystone client?
Why does CODE-1 display only a few of the projects?
Your code filters the projects. If you want to see the full list, do not filter it:
ks.projects.list()
The filter user=sess.get_user_id() returns only the projects associated with the current user.
Why does CODE-2 fail?
I suspect the error is in the arguments: you pass user_id='idm'. If 'idm' is a user name, the argument should be username='idm'; if you pass user_id, it has to be the actual user ID, e.g. user_id='56d88dd0a3ab4c4c8d1d15534352d7de'.
You can take the ID from Horizon:
http://localhost/horizon/identity/users/
The source code contains an example of client creation:
from keystoneauth1.identity import v3
from keystoneauth1 import session
from keystoneclient.v3 import client

auth = v3.Password(user_domain_name=DOMAIN_NAME,
                   username=USER,
                   password=PASS,
                   project_domain_name=PROJECT_DOMAIN_NAME,
                   project_name=PROJECT_NAME,
                   auth_url=KEYSTONE_URL)
sess = session.Session(auth=auth)
keystone = client.Client(session=sess)

keystone.projects.list()

user = keystone.users.get(USER_ID)
user.delete()
How do I get the Keystone project IDs from the keystone client?
If you want to see all project IDs (assuming admin credentials):
project_list = [proj.id for proj in ks.projects.list(all_tenants=True)]
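If you also want the names next to the IDs, like in the openstack project list output above, a minimal sketch (assuming the admin-scoped keystone client created in the example) would be:

# "keystone" is assumed to be the admin-scoped client created above
for project in keystone.projects.list():
    print("{} | {}".format(project.id, project.name))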
I haven't quite found the answer yet in other threads with similar titles. Let's say I have a logging.conf that looks like this:
[loggers]
keys=root,analyzer
[handlers]
keys=consoleHandler,analyzerFileHandler
[formatters]
keys=consoleFormatter,logFormatter
[logger_root]
level=DEBUG
handlers=consoleHandler,analyzerFileHandler
[logger_analyzer]
level=DEBUG
handlers=consoleHandler,analyzerFileHandler
qualname=analyzer
propagate=0
[handler_consoleHandler]
class=StreamHandler
level=INFO
formatter=consoleFormatter
args=(sys.stdout,)
[handler_analyzerFileHandler]
class=FileHandler
level=DEBUG
formatter=logFormatter
args=('analyzer.log','w')
[formatter_consoleFormatter]
format=%(asctime)s - %(name)s - %(levelname)s | %(message)s
datefmt=%m/%d/%Y %X
[formatter_logFormatter]
format=%(asctime)s - %(levelname)s | %(message)s
datefmt=%m/%d/%Y %X
A logger = logging.getLogger('analyzer') will send text to the log file and the console. How can I make that sys.stdout output go to a QPlainTextEdit() widget instead of the console?
edit
OK, looking at this post I made the code below. The post's code is good, but for some reason it doesn't address the issue: you can comment out all the instances of logger and still end up with print events going to the QWidget. The logger, as written, has no real interaction with the rest of the program; it is just there. I thought I could rectify the problem by writing a class that takes whatever text I want and sends it off to print and to the logger respectively:
import logging
from logging.config import fileConfig
from os import getcwd
import sys

from PyQt4.QtCore import QObject,\
                         pyqtSignal
from PyQt4.QtGui import QDialog, \
                        QVBoxLayout, \
                        QPushButton, \
                        QTextBrowser,\
                        QApplication

class XStream(QObject):
    _stdout = None
    _stderr = None
    messageWritten = pyqtSignal(str)

    def flush( self ):
        pass

    def fileno( self ):
        return -1

    def write( self, msg ):
        if ( not self.signalsBlocked() ):
            self.messageWritten.emit(unicode(msg))

    @staticmethod
    def stdout():
        if ( not XStream._stdout ):
            XStream._stdout = XStream()
            sys.stdout = XStream._stdout
        return XStream._stdout

    @staticmethod
    def stderr():
        if ( not XStream._stderr ):
            XStream._stderr = XStream()
            sys.stderr = XStream._stderr
        return XStream._stderr

class XLogger():
    def __init__(self, name):
        self.logger = logging.getLogger(name)

    def debug(self, text):
        print text
        self.logger.debug(text)

    def info(self, text):
        print text
        self.logger.info(text)

    def warning(self, text):
        print text
        self.logger.warning(text)

    def error(self, text):
        print text
        self.logger.error(text)

class MyDialog(QDialog):
    def __init__( self, parent = None ):
        super(MyDialog, self).__init__(parent)

        # setup the ui
        self._console = QTextBrowser(self)
        self._button = QPushButton(self)
        self._button.setText('Test Me')

        # create the layout
        layout = QVBoxLayout()
        layout.addWidget(self._console)
        layout.addWidget(self._button)
        self.setLayout(layout)

        # create connections
        XStream.stdout().messageWritten.connect( self._console.insertPlainText )
        XStream.stderr().messageWritten.connect( self._console.insertPlainText )

        self.xlogger = XLogger('analyzer')

        self._button.clicked.connect(self.test)

    def test( self ):
        # log some stuff
        self.xlogger.debug("Testing debug")
        self.xlogger.info('Testing info')
        self.xlogger.warning('Testing warning')
        self.xlogger.error('Testing error')

if ( __name__ == '__main__' ):
    fileConfig(''.join([getcwd(), '/logging.conf']))

    app = None
    if ( not QApplication.instance() ):
        app = QApplication([])

    dlg = MyDialog()
    dlg.show()

    if ( app ):
        app.exec_()
The "cross logger" sends everything to the log and to the Qwidget, however it also sends everything but debug to the aptana console:
02/05/2013 17:38:42 - analyzer - INFO | Testing info
02/05/2013 17:38:42 - analyzer - WARNING | Testing warning
02/05/2013 17:38:42 - analyzer - ERROR | Testing error
While the analyzer.log has:
02/05/2013 17:38:42 - DEBUG | Testing debug
02/05/2013 17:38:42 - INFO | Testing info
02/05/2013 17:38:42 - WARNING | Testing warning
02/05/2013 17:38:42 - ERROR | Testing error
It is odd that debug is the only level that doesn't make it to the Aptana console (the console handler is set to level=INFO, which would explain it). Removing consoleHandler from handlers under [logger_analyzer] in my logging.conf stops the output from going to the Aptana console; it probably has something to do with the args=(sys.stdout,) entry under [handler_consoleHandler]. I suppose that solves my problem without having to code a handler for the Qt text object, which would negate the logging.conf file. If someone has a more elegant solution that keeps a logging.conf file and still redirects its console output to a QWidget of your choice, please feel free to post. Thanks.
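A possibly more elegant route (just a sketch, not tested against this exact setup) would be to keep logging.conf as it is, drop consoleHandler from [logger_analyzer], and after fileConfig() attach a small logging.Handler that re-emits records through the XStream signal above:

import logging

class QtLogHandler(logging.Handler):
    """Forwards formatted log records to the XStream signal, so they reach the QTextBrowser."""
    def emit(self, record):
        XStream.stdout().messageWritten.emit(self.format(record) + '\n')

# attach after fileConfig() so the handlers from logging.conf stay intact;
# the format string mirrors consoleFormatter from logging.conf
handler = QtLogHandler()
handler.setFormatter(logging.Formatter(
    '%(asctime)s - %(name)s - %(levelname)s | %(message)s', '%m/%d/%Y %X'))
logging.getLogger('analyzer').addHandler(handler)

That way log records would still reach the widget without printing them separately through XLogger.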
I'm creating a module for OpenERP in which I have to launch an ongoing process.
OpenERP runs in a continuous loop. My process has to be launched when I click on a button, and it has to keep running without holding up OpenERP's execution.
To simplify it, I have this code:
#!/usr/bin/python
import multiprocessing
import time

def f(name):
    while True:
        try:
            print 'hello', name
            time.sleep(1)
        except KeyboardInterrupt:
            return

if __name__ == "__main__":
    count = 0
    while True:
        count += 1
        print "Pass %d" % count
        pool = multiprocessing.Pool(1)
        result = pool.apply_async(f, args=['bob'])
        try:
            result.get()
        except KeyboardInterrupt:
            #pass
            print 'Interrupted'
        time.sleep(1)
When executed, Pass 1 is printed once, and then an endless series of hello bob lines is printed until CTRL+C is pressed. Then Pass 2 appears, and so on, as shown below:
Pass 1
hello bob
hello bob
hello bob
^CInterrupted
Pass 2
hello bob
hello bob
hello bob
hello bob
I would like the passes to keep increasing in parallel with the hello bob's.
How do I do that?
What you can do here is create a multi-threaded implementation in Python inside the server process, which will run independently of the server's execution thread.
The trick is to fork one thread from the server when the button is clicked and give that new thread its own copies of the server variables (cursor, uid, context) so it can execute independently; at the end of the process you have to commit the transaction yourself, because this work does not happen in the main server process. Here is a small example of how you can do it:
import pprint
import StringIO
import traceback
import datetime
import logging

import pooler
from threading import Thread

pp = pprint.PrettyPrinter(indent=4)

class myThread(Thread):
    """
    Runs a long job in its own thread, with its own cursor.
    """
    def __init__(self, obj, cr, uid, context=None):
        Thread.__init__(self)
        self.external_id_field = 'id'
        self.obj = obj
        self.cr = cr
        self.uid = uid
        self.context = context or {}
        self.logger = logging.getLogger(__name__)
        self.initialize()

    def initialize(self):
        """
        Abstract method to be implemented in the real instance:
        init before the import, usually for the login.
        """
        pass

    def init_run(self):
        """
        Called after initialize; runs in the thread, not in the main process.
        To be used for long initialization operations.
        """
        pass

    def run(self):
        """
        This is the entry point that launches the process (thread).
        """
        try:
            self.init_run()
            # Your code goes here
            # TODO: add business logic
            self.cr.commit()
        except Exception, err:
            sh = StringIO.StringIO()
            traceback.print_exc(file=sh)
            error = sh.getvalue()
            print error
        self.cr.close()
You can add code like this in some module (for example the import_base module in 6.1 or trunk).
Next, you can make an extended implementation of this class and create an instance of the service, or directly start forking the threads like the following code:
service = myService(self, cr, uid, context)
service.start()
This starts a background service that runs on its own and gives you the freedom to keep using the UI.
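As a concrete illustration, here is a minimal sketch of a button method (the method name is hypothetical; it assumes the myThread class above and the OpenERP 6.1 pooler API), which gives the thread its own cursor so that it does not share the request's transaction:

import pooler

def action_start_long_process(self, cr, uid, ids, context=None):
    # hypothetical osv method wired to a form button
    # open a dedicated cursor for the background thread; the thread
    # commits and closes it itself in run()
    new_cr = pooler.get_db(cr.dbname).cursor()
    myThread(self, new_cr, uid, context=context).start()
    # return immediately so the client UI is not blocked
    return True

The button handler returns right away, while the thread keeps working and commits on its own cursor when it is done.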
Hope this will help you
Thank You