I'm trying to run freebase with Python on Ubuntu 12.10 for the first time. Here's what I did:
import freebase

query = {
    "id": "/en/the_beatles",
    "type": "/music/artist",
    "album": [{
        "name": None,
        "release_date": None,
        "track": {
            "return": "count"
        },
        "sort": "release_date"
    }]
}

freebase.mqlread(query)
Here's the error I got:
Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python2.7/dist-packages/freebase-1.0.8-py2.7.egg/freebase/api/session.py", line 597, in mqlread
r = self._httpreq_json(service, 'POST', form=dict(query=qstr))
File "/usr/local/lib/python2.7/dist-packages/freebase-1.0.8-py2.7.egg/freebase/api/session.py", line 420, in _httpreq_json
resp, body = self._httpreq(*args, **kws)
File "/usr/local/lib/python2.7/dist-packages/freebase-1.0.8-py2.7.egg/freebase/api/session.py", line 406, in _httpreq
return self._http_request(url, method, body, headers)
File "/usr/local/lib/python2.7/dist-packages/freebase-1.0.8-py2.7.egg/freebase/api/httpclients.py", line 66, in call
self.log.error('SOCKET FAILURE: %s', e.fp.read())
AttributeError: 'error' object has no attribute 'fp'
Could anyone help me resolve this?
Thanks in advance.
If you're using the old Python client library, it won't work, because Google never migrated it to work with the new APIs. You'll need to use the standard Google APIs Python library and the discovery interface.
https://developers.google.com/api-client-library/python/start/get_started
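For reference, a minimal sketch of that approach using google-api-python-client's discovery interface (the API key is a placeholder, and the mqlread method name is what the Freebase v1 discovery document exposed; the query is the one from the question):

import json
from googleapiclient.discovery import build  # package: google-api-python-client

API_KEY = "your-api-key"  # placeholder

# Build the Freebase service from its discovery document.
freebase = build('freebase', 'v1', developerKey=API_KEY)

query = {
    "id": "/en/the_beatles",
    "type": "/music/artist",
    "album": [{
        "name": None,
        "release_date": None,
        "track": {"return": "count"},
        "sort": "release_date"
    }]
}

# mqlread expects the MQL query as a JSON-encoded string.
response = freebase.mqlread(query=json.dumps(query)).execute()
print(response)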
Currently I am facing a dagster.core.errors.PartitionExecutionError, but the error logs from Dagster are not obvious to me.
dagster.core.errors.PartitionExecutionError: Error occurred during the evaluation of the `run_config_for_partition` function for partition set download_firebase_data_local_partition_set
File "/Users/bryan/miniconda3/envs/dagster-injector/lib/python3.9/site-packages/dagster/grpc/impl.py", line 292, in get_partition_config
return ExternalPartitionConfigData(name=partition.name, run_config=run_config)
File "/Users/bryan/miniconda3/envs/dagster-injector/lib/python3.9/contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "/Users/bryan/miniconda3/envs/dagster-injector/lib/python3.9/site-packages/dagster/core/errors.py", line 192, in user_code_error_boundary
raise error_cls(
The above exception was caused by the following exception:
TypeError: daily_download_config() takes 1 positional argument but 2 were given
File "/Users/bryan/miniconda3/envs/dagster-injector/lib/python3.9/site-packages/dagster/core/errors.py", line 185, in user_code_error_boundary
yield
File "/Users/bryan/miniconda3/envs/dagster-injector/lib/python3.9/site-packages/dagster/grpc/impl.py", line 291, in get_partition_config
run_config = partition_set_def.run_config_for_partition(partition)
File "/Users/bryan/miniconda3/envs/dagster-injector/lib/python3.9/site-packages/dagster/core/definitions/partition.py", line 441, in run_config_for_partition
return copy.deepcopy(self._user_defined_run_config_fn_for_partition(partition))
File "/Users/bryan/miniconda3/envs/dagster-injector/lib/python3.9/site-packages/dagster/core/definitions/time_window_partitions.py", line 192, in <lambda>
run_config_for_partition_fn=lambda partition: fn(
My current setup is:
from datetime import datetime

from dagster import (
    ResourceDefinition,
    daily_partitioned_config,
    graph,
    in_process_executor,
    make_values_resource,
)


@graph
def download():
    """
    Download data from BigQuery then upload to S3
    """
    extract_data_in_date()


@daily_partitioned_config(start_date=datetime(2021, 12, 1))
def daily_download_config(date: datetime):
    return {
        "resources": {
            "date": date.strftime("%Y-%m-%d")
        }
    }


download_local_job = download.to_job(
    name=f'{NAME}_local',
    resource_defs={
        **{
            "date": make_values_resource(date=str),
            "project_name": ResourceDefinition.hardcoded_resource("test-123")
        },
        **RESOURCES_LOCAL,
    },
    config=daily_download_config,
    executor_def=in_process_executor
)
I am not sure where I am wrong. Can you please help?
A function decorated with @daily_partitioned_config needs to be able to accept two arguments, one for the start of the time window and one for the end. daily_download_config doesn't actually make use of the end date value, but it still needs to appear in the signature, because Dagster will try to pass two arguments to this function regardless.
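A minimal sketch of that signature fix, keeping the rest of the config exactly as in the question (the unused end parameter is only there to satisfy Dagster):

from datetime import datetime
from dagster import daily_partitioned_config


@daily_partitioned_config(start_date=datetime(2021, 12, 1))
def daily_download_config(start: datetime, _end: datetime):
    # Dagster passes both the start and the end of the partition's
    # time window; only the start is used here.
    return {
        "resources": {
            "date": start.strftime("%Y-%m-%d")
        }
    }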
I made a web service to let two applications, Odoo 12 and Drupal, communicate. When I try to retrieve a report in Odoo 12 from Drupal, I get this error message:
-Drupal:
The website encountered an unexpected error. Please try again later.
Zend\XmlRpc\Client\Exception\FaultException: Traceback (most recent call last):
File "C:\odoo-12.0\odoo\addons\base\controllers\rpc.py", line 63, in xmlrpc_2
response = self._xmlrpc(service)
File "C:\odoo-12.0\odoo\addons\base\controllers\rpc.py", line 43, in _xmlrpc
result = dispatch_rpc(service, method, params)
File "C:/odoo-12.0\odoo\http.py", line 121, in dispatch_rpc
result = dispatch(method, params)
File "C:/odoo-12.0\odoo\service\model.py", line 34, in dispatch
raise NameError("Method not available %s" % method)
NameError: Method not available report
in Zend\XmlRpc\Client->call() (line 325 of vendor\zendframework\zend-xmlrpc\src\Client.php).
Jsg\Odoo\Odoo->getReport('crm_ong.report_recufiscal', 0, 'qweb-pdf') (Line: 124)
-Odoo:
Traceback (most recent call last):
File "C:/odoo-12.0\odoo\http.py", line 121, in dispatch_rpc
result = dispatch(method, params)
File "C:/odoo-12.0\odoo\service\model.py", line 34, in dispatch
raise NameError("Method not available %s" % method)
NameError: Method not available report
-Drupal code:
public function submitForm(array &$form, FormStateInterface $form_state) {
    global $id_don;
    global $client;
    $id_don = (int) $form_state->getValues()['id_don'];
    $model = "crm.alima.don";
    $ids = [$id_don];
    $report_data = $client->getReport('crm_solthis.report_recufiscal', $id_don, 'qweb-pdf');
    header('Content-Type: application/pdf');
    echo $report_data;
    die();
    header('Content-Type: text/css');
    header("Content-Disposition: attachment; filename=RecuFiscal.pdf");
}
The report service has been removed from Odoo since version 11.0.
Relevant commits: c23ef9a, 3425752.
I just inspected the Odoo client used by Drupal, and it appears the code doesn't take these changes into account:
# from function getReport()
$client = $this->getClient('report');
$reportId = $client->call('report', $params);
To fix your issue, don't use getReport. It should still be possible to grab some data for your model and print a kind of report by tweaking the client's methods.
I suggest switching to the object endpoint to get a generic XmlRpcClient, on which you might be able to call render().
For example, you can use search() to get a report ID in the first place (the report service is gone, but the ir.actions.report model is still available), and then try to read/render it as in this example (this is not Odoo-specific client code, but you get the idea).
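For illustration, here is a minimal Python sketch of the search() part of that idea, talking to Odoo's generic object endpoint over XML-RPC (the URL, database and credentials are placeholders; the report name is the one from the question):

import xmlrpc.client

# Placeholders: adapt to your Odoo instance.
url, db, user, password = "http://localhost:8069", "mydb", "admin", "admin"

common = xmlrpc.client.ServerProxy(url + "/xmlrpc/2/common")
uid = common.authenticate(db, user, password, {})

# Generic "object" endpoint: the removed "report" service is not needed.
models = xmlrpc.client.ServerProxy(url + "/xmlrpc/2/object")

# Look up the report definition on the ir.actions.report model.
report_ids = models.execute_kw(
    db, uid, password,
    "ir.actions.report", "search",
    [[["report_name", "=", "crm_solthis.report_recufiscal"]]],
)
print(report_ids)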
I use pykafka to fetch messages from a Kafka topic, then do some processing and update MongoDB. Since pymongo can only update one item at a time, I start 100 processes. But on startup, some processes raise PartitionOwnedError and ConsumerStoppedException errors. I don't know why.
Thank you.
kafka_cfg = conf['kafka']
kafka_client = KafkaClient(kafka_cfg['broker_list'])
topic = kafka_client.topics[topic_name]

balanced_consumer = topic.get_balanced_consumer(
    consumer_group=group,
    auto_commit_enable=kafka_cfg['auto_commit_enable'],
    zookeeper_connect=kafka_cfg['zookeeper_list'],
    zookeeper_connection_timeout_ms=kafka_cfg['zookeeper_conn_timeout_ms'],
    consumer_timeout_ms=kafka_cfg['consumer_timeout_ms'],
)

while(1):
    for msg in balanced_consumer:
        if msg is not None:
            try:
                value = eval(msg.value)
                id = long(value.pop("id"))
                value["when_update"] = datetime.datetime.now()
                query = {"_id": id}
                result = collection.update_one(query, {"$set": value}, True)
            except Exception, e:
                log.error("Fail to update: %s, msg: %s", e, msg.value)
Traceback (most recent call last):
File "dump_daily_summary.py", line 182, in <module>
dump_daily_summary.run()
File "dump_daily_summary.py", line 133, in run
for msg in self.balanced_consumer:
File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 745, in __iter__
message = self.consume(block=True)
File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 734, in consume
raise ConsumerStoppedException
pykafka.exceptions.ConsumerStoppedException
Traceback (most recent call last):
File "dump_daily_summary.py", line 182, in <module>
dump_daily_summary.run()
File "dump_daily_summary.py", line 133, in run
for msg in self.balanced_consumer:
File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 745, in __iter__
message = self.consume(block=True)
File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 726, in consume
self._raise_worker_exceptions()
File "/data/share/python2.7/lib/python2.7/site-packages/pykafka-2.5.0.dev1-py2.7-linux-x86_64.egg/pykafka/balancedconsumer.py", line 271, in _raise_worker_exceptions
raise ex
pykafka.exceptions.PartitionOwnedError
PartitionOwnedError: check whether some background processes are consuming in the same consumer_group; there may not be enough available partitions to start another consumer.
ConsumerStoppedException: you can try upgrading your pykafka version (https://github.com/Parsely/pykafka/issues/574).
I met the same problem as you, but I was confused by others' solutions, like adding enough partitions for the consumers or updating the pykafka version.
In fact, my setup already satisfied those conditions.
Here are the tool versions:
python 2.7.10
kafka 2.11-0.10.0.0
zookeeper 3.4.8
pykafka 2.5.0
Here is my code:
class KafkaService(object):
    def __init__(self, topic):
        self.client_hosts = get_conf("kafka_conf", "client_host", "string")
        self.topic = topic
        self.con_group = topic
        self.zk_connect = get_conf("kafka_conf", "zk_connect", "string")

    def kafka_consumer(self):
        """kafka-consumer client, using pykafka

        :return: {"id": 1, "url": "www.baidu.com", "sitename": "baidu"}
        """
        from pykafka import KafkaClient

        consumer = ""
        try:
            kafka = KafkaClient(hosts=str(self.client_hosts))
            topic = kafka.topics[self.topic]
            consumer = topic.get_balanced_consumer(
                consumer_group=self.con_group,
                auto_commit_enable=True,
                zookeeper_connect=self.zk_connect,
            )
        except Exception as e:
            logger.error(str(e))

        while True:
            message = consumer.consume(block=False)
            if message:
                print "message:", message.value
                yield message.value
The two exceptions (ConsumerStoppedException and PartitionOwnedError) are raised by the consume(block=True) function of pykafka.balancedconsumer.
Of course, I recommend reading the source code of that function.
There is an argument block=True; after changing it to False, the program no longer runs into these exceptions.
Then the kafka consumers work fine.
This behavior is affected by a longstanding bug that was recently discovered and is currently being fixed. The workaround we've used in production at Parse.ly is to run our consumers in an environment that handles automatically restarting them when they crash with these errors until all partitions are owned.
I've been trying to create an OpenStack image, specifying the kernel ID and ramdisk ID, using the OpenStack Unified SDK (https://github.com/openstack/python-openstacksdk), but without success. I know this is possible, because the OpenStack CLI has these parameters, as shown on this page (http://docs.openstack.org/cli-reference/glance.html#glance-image-create), where the CLI has the "--kernel-id" and "--ramdisk-id" parameters. I've used these parameters in the terminal and confirmed they work, but I need to use them in Python.
I'm trying to use the upload_image method, as described here: http://developer.openstack.org/sdks/python/openstacksdk/users/proxies/image.html#image-api-v2, but I can't get the attrs parameter right. The documentation only says it is supposed to be a dictionary. Here is the code I'm using:
...
atrib = {
    'properties': {
        'kernel_id': 'd84e1f2b-8d8c-4a4a-8858-77a8d5a93cb1',
        'ramdisk_id': 'cfef18e0-006e-477a-a098-593d43435a1e'
    }
}

with open(file) as fimage:
    image = image_service.upload_image(
        name=name,
        data=fimage,
        disk_format='qcow2',
        container_format='bare',
        **atrib)
...
And here is the error I'm getting:
File "builder.py", line 121, in main
**atrib
File "/usr/lib/python2.7/site-packages/openstack/image/v2/_proxy.py", line 51, in upload_image
**attrs)
File "/usr/lib/python2.7/site-packages/openstack/proxy2.py", line 193, in _create
return res.create(self.session)
File "/usr/lib/python2.7/site-packages/openstack/resource2.py", line 570, in create
json=request.body, headers=request.headers)
File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 675, in post
return self.request(url, 'POST', **kwargs)
File "/usr/lib/python2.7/site-packages/openstack/session.py", line 52, in map_exceptions_wrapper
http_status=e.http_status, cause=e)
openstack.exceptions.HttpException: HttpException: Bad Request, 400 Bad Request
Provided object does not match schema 'image': {u'kernel_id': u'd84e1f2b-8d8c-4a4a-8858-77a8d5a93cb1', u'ramdisk_id': u'cfef18e0-006e-477a-a098-593d43435a1e'} is not of type 'string'
Failed validating 'type' in schema['additionalProperties']: {'type': 'string'}
On instance[u'properties']: {u'kernel_id': u'd84e1f2b-8d8c-4a4a-8858-77a8d5a93cb1', u'ramdisk_id': u'cfef18e0-006e-477a-a098-593d43435a1e'}
I already tried the update_image method, but without success; passing kernel_id and ramdisk_id as strings creates the instance, but it does not boot.
Does anyone know how to solve this?
Which version of the Glance API are you using?
I have read the code in openstackclient/image/v1/images.py and openstackclient/v1/shell.py:
## in shell.py
def do_image_create(gc, args):
    ...
    fields = dict(filter(lambda x: x[1] is not None, vars(args).items()))
    raw_properties = fields.pop('property')
    fields['properties'] = {}
    for datum in raw_properties:
        key, value = datum.split('=', 1)
        fields['properties'][key] = value
    ...
    image = gc.images.create(**fields)

## in images.py
def create(self, **kwargs):
    ...
    for field in kwargs:
        if field in CREATE_PARAMS:
            fields[field] = kwargs[field]
        elif field == 'return_req_id':
            continue
        else:
            msg = 'create() got an unexpected keyword argument \'%s\''
            raise TypeError(msg % field)
    hdrs = self._image_meta_to_headers(fields)
    ...
    resp, body = self.client.post('/v1/images',
                                  headers=hdrs,
                                  data=image_data)
    ...
and openstackclient/v2/shell.py, openstackclient/image/v2/images.py (and I have debugged this too):
## in shell.py
def do_image_create(gc, args):
    ...
    raw_properties = fields.pop('property', [])
    for datum in raw_properties:
        key, value = datum.split('=', 1)
        fields[key] = value
    ...
    image = gc.images.create(**fields)

## in images.py
def create(self, **kwargs):
    """Create an image."""
    url = '/v2/images'
    image = self.model()
    for (key, value) in kwargs.items():
        try:
            setattr(image, key, value)
        except warlock.InvalidOperation as e:
            raise TypeError(utils.exception_to_str(e))
    resp, body = self.http_client.post(url, data=image)
    ...
It seems that you can create an image your way with version 1.0 of the API, but with version 2.0 you should pass kernel_id and ramdisk_id as below:
atrib = {
    'kernel_id': 'd84e1f2b-8d8c-4a4a-8858-77a8d5a93cb1',
    'ramdisk_id': 'cfef18e0-006e-477a-a098-593d43435a1e'
}
But the OpenStack SDK apparently can't pass those two arguments through to the URL (because there is no Body definition for them in openstack/image/v2/image.py), so you would have to modify the OpenStack SDK to support this.
BTW, the OpenStack code differs a little between versions, but many things are the same.
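Put back into the question's call, that would look like the sketch below. This is only a sketch of the intended usage; as said above, the stock SDK may still drop these attributes unless it is patched to accept them.

# Top-level attributes instead of a nested 'properties' dict (v2 image API).
atrib = {
    'kernel_id': 'd84e1f2b-8d8c-4a4a-8858-77a8d5a93cb1',
    'ramdisk_id': 'cfef18e0-006e-477a-a098-593d43435a1e',
}

with open(file) as fimage:
    image = image_service.upload_image(
        name=name,
        data=fimage,
        disk_format='qcow2',
        container_format='bare',
        **atrib)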
I'm using the following code in my server program:
class AddLibSong:
    def PUT(self):
        db = MahData.getDBConnection()
        songs = json.loads(web.input().to_add)
        addToLibrary(songs)
        return
But for some reason when I do a PUT with the data:
"to_add=[ { "album" : "Unknonwn", "artist" : "Unknonwn", "host_lib_id" : "1", "is_deleted" :
"false", "server_lib_id" : "-1", "song" : "Moneytalks" } ]"
I get the following error:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/web/application.py", line 237, in process
return self.handle()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/web/application.py", line 228, in handle
return self._delegate(fn, self.fvars, args)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/web/application.py", line 409, in _delegate
return handle_class(cls)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/web/application.py", line 385, in handle_class
return tocall(*args)
File "/Users/kurtis/sandbox/udj/webserver/Library.py", line 114, in PUT
song = json.loads(web.input().to_add)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/web/utils.py", line 76, in __getattr__
raise AttributeError, k
AttributeError: 'to_add'
127.0.0.1:51096 - - [29/Sep/2011 19:02:58] "HTTP/1.1 PUT /add_songs_to_library" - 500 Internal Server Error
Anybody know why this is? I think I saw something about web.py only being able to get input from a POST or GET, but I didn't see anything in the source code that should prevent this.
Anyway, if you want more details on how to use PUT with web.py, I would advise you to read this great link.
To make it work on the latest version of web.py, you should change the "main" code to this:
if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()
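For context, here is a minimal, self-contained sketch tying the question's PUT handler to that main block (the route and the to_add field come from the question; the database helpers are left out, and on recent web.py versions web.input() also reads form-encoded PUT bodies):

import json
import web

# Route taken from the question's error log.
urls = ("/add_songs_to_library", "AddLibSong")

class AddLibSong:
    def PUT(self):
        # On recent web.py versions, web.input() also parses the body of a
        # PUT request, so the 'to_add' field is available here.
        songs = json.loads(web.input().to_add)
        return "added %d songs" % len(songs)

if __name__ == "__main__":
    app = web.application(urls, globals())
    app.run()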