I'm trying to generate a key pair for the Kadena blockchain from a seed phrase I generated using BIP-39. I found the Cardano Crypto fork (https://github.com/kadena-io/cardano-crypto.js/blob/c50fb8c2fcd4e8d396506fb0c07de9d658aa1bae/kadena-crypto.js#L336) that Kadena uses in Chainweaver for this purpose, but there isn't any documentation on the process. Can someone please point me in the right direction on how I can achieve that?
I'm relatively new to the process, so a detailed answer would be highly appreciated.
TIA.
Where can I find existing or ongoing use cases/smart contracts developed on Corda?
I know about their website: https://explore.corda.zone/
But this site has very few use cases, most with no documentation and some with no code/Git links.
Is there any other repo/website where I can find solutions developed on Corda?
Thanks
https://explore.corda.zone is still very much a work in progress and new projects (with code) will be added to the library over the coming months.
In the meantime, you can check out https://www.corda.net/samples/ for sample CorDapps. The following repos also have additional sample CorDapps:
https://github.com/roger3cev
https://github.com/CaisR3
https://github.com/JoelDudleyR3
I couldn't find an example of an "option" contract in Corda.
Could anybody please point me to such an example?
Thanks a lot
A set of sample CorDapps is provided here: https://www.corda.net/samples/.
One of these samples is an options CorDapp. See https://github.com/caisr3/cordapp-optionv1.
Hi everyone.
I've been stuck for some days looking for a way to get the skeleton from point cloud data (e.g. an OBJ file) without using a Kinect. Is that possible?
I found the Point Cloud Library, which does a lot of tasks related to point cloud data, and in their documentation there is a body keypoints detector, but it also works with Kinect grabbers.
In my case, I have point cloud data like in the picture, which was generated by another depth-sensor scanner. Is it possible to find the key points in such data?
I really would appreciate any help. Thanks in advance.
Even though it's not explicitly mentioned in the tutorial you linked, a quick look at the code suggests that you can use different data sources (e.g. PCD files), so you're not stuck with live capture from a Kinect.
All the tutorial code really does is the following (a minimal sketch follows the list):
Setup the GPU for the people parts detection.
Pick the appropriate data source.
Load the tree files for the body part detector.
Run the PeopleDetector on a single frame captured from the live grabber stream/PCD file.
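For reference, a minimal sketch of those four steps in C++, using a PCD file exported from your own scanner as the data source instead of the Kinect grabber. The class and member names used here (pcl::gpu::people::PeopleDetector, RDFBodyPartsDetector, rdf_detector_) and the header paths follow the tutorial's source as far as I recall; the exact signatures, point type and tree-file names may differ between PCL versions, so treat this as an outline rather than copy-paste code:

    #include <pcl/io/pcd_io.h>
    #include <pcl/point_types.h>
    #include <pcl/gpu/containers/initialization.h>
    #include <pcl/gpu/people/people_detector.h>

    #include <string>
    #include <vector>

    int main()
    {
      // 1. Set up the GPU used for the body-part detection.
      pcl::gpu::setDevice(0);

      // 2. Pick the data source: a PCD file from your own depth sensor
      //    (placeholder file name) instead of the live Kinect grabber.
      pcl::PointCloud<pcl::PointXYZRGBA>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
      if (pcl::io::loadPCDFile("scan.pcd", *cloud) < 0)
        return -1;

      // 3. Load the tree files for the body-part detector
      //    (placeholder names; use the files linked from the tutorial).
      std::vector<std::string> tree_files;
      tree_files.push_back("tree_0.txt");
      tree_files.push_back("tree_1.txt");
      tree_files.push_back("tree_2.txt");
      tree_files.push_back("tree_3.txt");
      pcl::gpu::people::RDFBodyPartsDetector::Ptr rdf(
          new pcl::gpu::people::RDFBodyPartsDetector(tree_files));

      // 4. Run the PeopleDetector on the single frame we just loaded;
      //    the body-part labels / skeleton end up inside the detector.
      pcl::gpu::people::PeopleDetector detector;
      detector.rdf_detector_ = rdf;
      detector.process(cloud);

      return 0;
    }

The important point is step 2: anything that ends up as a pcl::PointCloud can feed the detector, so a cloud converted from your own depth-sensor scan works just as well as a live stream.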
I've added a new language (Buryat) in UD and would now like to parse it with SyntaxNet. Could you please tell me how to do that?
You can follow the instructions at
https://github.com/tensorflow/models/blob/master/syntaxnet/README.md#detailed-tutorial-building-an-nlp-pipeline-with-syntaxnet
on how to build an NLP pipeline with SyntaxNet. We might also be able to train and post a model for you.
I am a beginner in robotics, and I want to program a robot arm to draw a picture on arbitrary objects I present to it.
I have an Intel Realsense camera, will receive a dobot.cc robot arm in the next few days, and thought about using ROS as a base, MoveIt for movements, and the PCL library for object detection.
How do I connect all of these together? Are there any particularly interesting tutorials that you would recommend? Anything I should try out up front?
Also, I suppose I will need to write custom code for detecting the target object in the point cloud, calculating how the picture should be placed on the object, and then using MoveIt to follow the target path. Where would this code go? (A rough sketch of such a node follows at the end of this thread.)
Any help would be appreciated.
Thanks,
Gregor
Meanwhile, I found an excellent book on the topic:
http://www.amazon.de/Learning-ROS-Robotics-Programming-Second/dp/B00YSIL6VM/
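Not a full answer, but here is a rough sketch of what the MoveIt side of such a node could look like, so you can see where your own code would go. The planning-group name "arm" and the waypoints are assumptions (the group name comes from the MoveIt config you would generate for the Dobot, and the waypoints would come from your PCL-based detection of the object); depending on your MoveIt/ROS version the class is called MoveGroup or MoveGroupInterface:

    #include <ros/ros.h>
    #include <geometry_msgs/Pose.h>
    #include <moveit/move_group_interface/move_group_interface.h>
    #include <moveit_msgs/RobotTrajectory.h>

    #include <vector>

    int main(int argc, char** argv)
    {
      ros::init(argc, argv, "draw_on_object");
      ros::NodeHandle nh;
      ros::AsyncSpinner spinner(1);   // MoveGroupInterface needs a running spinner
      spinner.start();

      // "arm" is an assumed planning-group name from your MoveIt config.
      moveit::planning_interface::MoveGroupInterface arm("arm");

      // In the real node these waypoints would be computed by your perception
      // code: detect the object in the Realsense cloud with PCL and project
      // the drawing onto its surface. Here: one dummy pose 5 cm below the
      // current end-effector pose, standing in for a "stroke".
      std::vector<geometry_msgs::Pose> waypoints;
      geometry_msgs::Pose p = arm.getCurrentPose().pose;
      p.position.z -= 0.05;
      waypoints.push_back(p);

      // Plan a Cartesian path through the drawing waypoints and execute it
      // if (nearly) the whole path could be planned.
      moveit_msgs::RobotTrajectory trajectory;
      double fraction = arm.computeCartesianPath(waypoints, 0.005, 0.0, trajectory);
      if (fraction > 0.9)
      {
        moveit::planning_interface::MoveGroupInterface::Plan plan;
        plan.trajectory_ = trajectory;
        arm.execute(plan);
      }

      ros::shutdown();
      return 0;
    }

This would live in an ordinary catkin package next to the MoveIt config for the arm; the detection part usually sits in a separate node that subscribes to the Realsense point cloud and publishes the target poses/waypoints for the drawing.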