I'm writing a semantic analysis application in C++ that is internally based on syntactic parses of sentences. SyntaxNet is used to provide the required dependency trees, and it works quite well.
The only problem is that I have to call SyntaxNet as an external application, with the following call, for every sentence that my application handles:
system("./syntaxnet/demo.sh");
I notice considerable time overhead with this way of using SyntaxNet and would like to know whether it is possible to use SyntaxNet as a library with a programming-language API (preferably C++).
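Until such an API is wired in, one way to cut the per-sentence cost is to start the parser once and stream sentences to it, so the model is loaded a single time. This is only a minimal sketch, under the assumption that demo.sh (or a variant of it) reads sentences from stdin one per line; since popen() is one-directional, the parses are redirected to a file here rather than read back over the same pipe:

#include <cstdio>
#include <stdexcept>
#include <string>

// Sketch: keep a single SyntaxNet process alive and feed it sentences
// through a pipe instead of paying the startup cost of system() per call.
class SyntaxNetProcess {
 public:
  SyntaxNetProcess() {
    // Assumes the script accepts one sentence per line on stdin;
    // output is redirected to a file because popen() is one-way.
    pipe_ = popen("./syntaxnet/demo.sh > parses.conll", "w");
    if (!pipe_) throw std::runtime_error("failed to start SyntaxNet");
  }
  ~SyntaxNetProcess() {
    if (pipe_) pclose(pipe_);
  }
  void Parse(const std::string& sentence) {
    fputs(sentence.c_str(), pipe_);
    fputc('\n', pipe_);
    fflush(pipe_);  // push the sentence to the parser immediately
  }
 private:
  FILE* pipe_ = nullptr;
};

int main() {
  SyntaxNetProcess parser;
  parser.Parse("Bob brought the pizza to Alice.");
  parser.Parse("The quick brown fox jumps over the lazy dog.");
  // Dependency trees accumulate in parses.conll while the model stays loaded.
}

Reading the output back synchronously would require a bidirectional channel (pipe()/fork()/exec() or two named pipes) instead of popen(), but the idea is the same: load the model once, parse many times.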
What about using the Serving API?
https://github.com/dmansfield/parsey-mcparseface-api/issues/1
You can modify parsey_api.cc to load the exported model and do the parsing.
For Python users, there is Python client code:
https://github.com/dsindex/syntaxnet/blob/master/README_api.md
I've developed a REST API back end using Endpoints-Proto-Datastore, which wraps the Cloud Endpoints Python API. I'm starting to look at Qt and trying to get an idea of what will be involved in accessing my API from the Qt networking or another library. Might it be nearly as straightforward as making the calls from the command line using the Python client library, which even handles OAuth2 flows? That would be very nice. I might use PyQt if that makes things simpler.
Your Endpoints service can generate an OpenAPI specification file which describes the API. Once you do this, there are many OpenAPI-compatible packages which can generate client code for you.
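Alternatively, if you prefer to make the calls yourself, the Qt networking classes are enough on their own. Below is only a minimal sketch; the Endpoints URL is made up, and the OAuth2 access token is assumed to have been obtained already (token acquisition and refresh are not shown):

#include <QCoreApplication>
#include <QDebug>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QObject>
#include <QUrl>

int main(int argc, char *argv[]) {
  QCoreApplication app(argc, argv);
  QNetworkAccessManager manager;

  // Hypothetical Endpoints URL; substitute your deployed API's path.
  QNetworkRequest request(QUrl(
      "https://my-project.appspot.com/_ah/api/myapi/v1/items"));
  // Endpoints accepts a standard OAuth2 bearer token in this header.
  request.setRawHeader("Authorization", "Bearer <ACCESS_TOKEN>");

  QNetworkReply *reply = manager.get(request);
  QObject::connect(reply, &QNetworkReply::finished, [&]() {
    if (reply->error() == QNetworkReply::NoError)
      qDebug() << reply->readAll();  // JSON body returned by the API
    else
      qDebug() << "Request failed:" << reply->errorString();
    reply->deleteLater();
    app.quit();
  });

  return app.exec();
}

The project needs QT += core network in the .pro file. The reply body is the same JSON the command-line client receives, so converting fields such as dates remains your code's responsibility.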
I located this document which gives a pretty good overview for my purposes:
"The Google APIs Client Library for C++ will automatically take care of many of the tedious details for interpreting and complying with the discovery documents so that you can write simpler and familiar C++ code."
Now it's a matter of building and installing the C++ client and then figuring out how to generate the client library and access it from a Qt application. But that is beyond the scope of this question.
Hi, I am looking for predefined (common) step definitions for Meteor-cucumber/Chimp.
I have used PHP's Behat (a BDD Cucumber framework). There is this extension and this class, which give you common step definitions out of the box. You don't need to write those step definitions yourself.
Below is the list of step definitions you get with Behat.
Short Answer
This sort of step-definition library doesn't exist, and we (the authors of Chimp) won't be adding one, because we have seen that they are very harmful in the long run.
It looks like you want to write test scripts, in which case you would be better off writing them using Chimp with Mocha + custom WebdriverIO commands rather than Cucumber.
Long Answer
Feature files with plain-language scenarios and steps are intended to discover and express the domain of your application. The natural freeform text encourages you to use language that you can share with the entire team - otherwise known as the ubiquitous domain language.
You are about to make one of the most common mistakes with Cucumber, which is to use it as a UI testing tool. Using UI-based steps breaks the ubiquitous-language principle.
Step reuse should be centered on the business domain so that you create a ubiquitous domain language. If you use UI steps instead of specs, you end up creating technical debt without knowing it. Gherkin syntax is not easy to refactor, and if you change your step implementations, you need to update them in multiple places. For domain concerns this is usually not a big issue, but for UI tests it's likely you will reuse steps heavily.
It sounds like you are interested in good code reuse. If you think about it, WebdriverIO already has a great API, and most of the steps you want to use would just be wrappers around that API.
Rather than create this extraneous translation layer, you should just use Mocha to write the tests and access WebdriverIO's API directly. This way, you have the full JavaScript language available for applying sound software-engineering practices, instead of the simplistic Gherkin parser.
WebdriverIO also has a great custom-commands feature that allows you to create all of the methods you have mentioned above. An extension file that adds a collection of these commands would be VERY useful.
We have written a repository with best practices and some do's and don'ts lessons. In particular, you should see:
Lesson #1: Test Scripts !== Executable Specifications
Lesson #2: Say No To Natural Language Test Scripts
You might also want to read:
Aslak's view of BDD
BDD Tool Cucumber is Not a Testing Tool
To test my UI I will use Mocha. I don't need cucumber specs.
As a task runner I will use Chimp (Chimp uses webdriver.io).
Here is a quick Mocha + Chimp how-to.
Does thrift provide a way to inspect struct fields at runtime?
My use case is with C# but the question is regarding the standard Thrift API.
There is no standard Thrift API across languages, so what you can do beyond serialization is highly language-dependent. If you can't accomplish what you want with reflection alone, examine the code generated by the Thrift compiler for the Thrift object you are interested in. I've not seen C# Thrift-generated code, but it may contain additional data that could be useful to you.
I am very familiar with the Java implementation, and I can tell you that when using Thrift with Java there is no need for reflection at all. Every Thrift-generated class contains information that allows the deserializer to reconstruct the class from field ID numbers. The Java Thrift compiler creates static members and methods that contain pretty much everything you would ever want. For Java it is actually better than reflection, because it includes the element types for lists/maps/sets.
There is no guarantee that the format of this data won't change in future versions of Thrift, but given that all of the various protocols depend on it, this 'hidden' API should be fairly stable.
If you have access to the IDL at runtime you could use a parser for the IDL and infer the generated fields that way.
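As a rough illustration of that route (not a real Thrift parser; just a regex sketch that assumes a simple, comment-free struct layout and a hypothetical service.thrift path), field IDs, requiredness, types, and names can be scraped like this:

#include <fstream>
#include <iostream>
#include <regex>
#include <sstream>
#include <string>

// Extremely simplified IDL scan: matches field lines of the form
//   1: optional string name,
// inside struct bodies. Real .thrift files (comments, annotations,
// defaults) need a proper parser.
int main() {
  std::ifstream in("service.thrift");  // hypothetical IDL file
  std::stringstream buffer;
  buffer << in.rdbuf();
  const std::string idl = buffer.str();

  const std::regex field_re(
      R"((\d+)\s*:\s*(optional|required)?\s*([\w<>, .]+)\s+(\w+))");

  for (std::sregex_iterator it(idl.begin(), idl.end(), field_re), end;
       it != end; ++it) {
    std::cout << "field id=" << (*it)[1]
              << " requiredness=" << (*it)[2]
              << " type=" << (*it)[3]
              << " name=" << (*it)[4] << "\n";
  }
}

Anything beyond toy IDL files really calls for one of the proper parsers, though.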
I'm not an expert in C#, but you could perhaps link against the native libparse library used by the Thrift compiler executable (I'm not sure whether that parser library is generic enough to be used like that; I'm just assuming).
Alternatively, you could use the parser from Facebook's Swift (https://github.com/facebook/swift/tree/master/swift-idl-parser, or download the JAR from http://central.maven.org/maven2/com/facebook/swift/swift-idl-parser/0.13.2/swift-idl-parser-0.13.2.jar). This is probably the easier and better option for your case, IMO; even though it is a Java library, I think it should convert just fine to the CLR using IKVM.NET.
A third, stupidly simple and hackish way to do this would be to use the Thrift HTML generator to produce HTML documentation and parse that with regexes, or run it through HTML Tidy and parse it as XML.
Recently I noticed some classes in Qt called the Qt Script module, and according to the documentation they are used to make an application scriptable. Here are my questions:
What does it mean to make an application scriptable?
And when should we use it?
Thanks in advance
What scripting is
~~~~~~~~~~~~~~~~~
Most very large pieces of software come with lots of features, and quite interestingly, many of the new features that are added are combinations of basic existing features. But one can't keep adding new C++ code to create every simple feature; instead, one can write a script that performs the existing operations in sequence and does the job of the new feature.
The best example is Blender (Python scripting). Blender has thousands of features, and most of them are actually scripted features that call the existing features in an orderly fashion.
QtScript
~~~~~~~~
This module of the Qt framework puts a JavaScript interpreter (the Google V8 JS engine) at your disposal. You can call your QObject classes and related methods from JavaScript as if they were native JS functions (only within your application). QScriptable classes expose your C++ QObjects' properties and methods to the JavaScript engine.
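A minimal sketch of what this looks like in practice (the Counter class and its members are made up for illustration; the project is assumed to build with QT += script and to run moc on this file):

#include <QCoreApplication>
#include <QObject>
#include <QScriptEngine>
#include <QScriptValue>
#include <QtDebug>

// A hypothetical application object whose slots and properties we want
// to make callable from scripts.
class Counter : public QObject {
  Q_OBJECT
  Q_PROPERTY(int value READ value)
 public:
  int value() const { return value_; }
 public slots:
  void add(int n) { value_ += n; }
 private:
  int value_ = 0;
};

int main(int argc, char *argv[]) {
  QCoreApplication app(argc, argv);

  QScriptEngine engine;
  Counter counter;

  // Expose the QObject to scripts under the global name "counter".
  engine.globalObject().setProperty("counter", engine.newQObject(&counter));

  // The script calls the C++ slot and reads the property as if they
  // were ordinary JavaScript members.
  QScriptValue result =
      engine.evaluate("counter.add(2); counter.add(3); counter.value");
  qDebug() << "script result:" << result.toNumber();  // prints 5
  return 0;
}

#include "main.moc"  // assumes this file is main.cpp, so moc sees Q_OBJECT here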
When To Use
~~~~~~~~~~~
When you have a huge application with lots of modules, and you want to retain the programmability of your application even after compiling it into machine code, that is when you need scripting.
Google's Closure Library looks like it has a lot of great features, but I'm not seeing any examples of it being used with ASP.NET sites. I'm just wondering whether anyone has experience using the two together, and with which parts. Was it a good or bad experience?
EDIT:
To clarify, I'm asking about Closure Library, not Closure Compiler or Closure Templates. For example, if I use the calendar control from Closure Library, it seems to decorate a text box, so that text might have to be converted to a DateTime on postback, whereas other ASP.NET controls will expose a SelectedDate property, for example. There are probably some cases where this incomplete integration is annoying, and probably some cases where the controls in the Library provide features compelling enough that it is worth dealing with any quirks.
Closure Library is platform-agnostic. It is as useful with ASP.NET as it is with any other platform.
Closure Templates (another member of the Closure Tools family) does have a server-side component that is limited to Java. However, that does not limit the utility of Closure Library in any way.
For any production application using Closure Library, you will need to compile your code using Closure Compiler. To do this locally, you will need to install both Python and Java. Neither of these is needed in your deployment environment, though.
With Microsoft providing their own JavaScript minifier (http://aspnet.codeplex.com/releases/view/34488) and embracing and supporting jQuery (plus IntelliSense and documentation) in Visual Studio, I am not surprised that ASP.NET folks are skipping Google's Closure.