I have enabled Flow on a JavaScript project I am developing. Since I am putting in the effort of providing type annotations, I would really like to generate *.d.ts files so that the broader TypeScript community can also have type information.
How can I generate *.d.ts type definition files from Flow-annotated JavaScript?
I searched for the available tools and found the following.
The first one is the most up to date. It can convert a whole Flow codebase to TypeScript; I have used it personally, and it works like a charm (see the sketch below the list).
https://github.com/Khan/flow-to-ts
Other ones:
https://github.com/Kiikurage/babel-plugin-flow-to-typescript
https://github.com/burnnat/flow-to-dts
https://github.com/bcherny/flow-to-typescript
https://github.com/yuya-takeyama/flow2dts
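To give an idea of what these converters do, here is a hypothetical before/after. The exact output depends on the tool and version; note that flow-to-ts produces .ts sources, and running tsc --declaration on them then emits the *.d.ts files.

    // Hypothetical Flow input:
    //   export type Point = {| x: number, y: number |};
    //   export function clamp(value: number, min: number, max: number): number { ... }
    //
    // TypeScript that a converter like flow-to-ts might emit; running
    // `tsc --declaration` on this file produces the corresponding *.d.ts:
    export type Point = { x: number; y: number };

    export function clamp(value: number, min: number, max: number): number {
      return Math.min(Math.max(value, min), max);
    }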
I am trying to make changes to the behavior of a function and print the results to a file. The ViewCfg plug-in described in the Plug-in Development Guide does something similar, but I am trying to avoid having to use Ast.get, which ViewCfg uses. I am thinking of extending Printer.extensible_printer, which, according to the Frama-C API documentation, is something I can do if I want to obtain a custom pretty-printer.
However, if I extend the pretty-printer as described in the API docs, unless I'm doing something wrong, I notice that the changes I make take place regardless of which project is set as the current project. I'm using File.create_project_from_visitor to create a new project and Project.set_current to set the new project as the current project before I use the custom pretty-printer.
Is it true that any change made by a class that extends Printer.extensible_printer applies to all projects? I happen to be using frama-c-Aluminium-20160502, which I know is not the latest version.
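Here is a simplified sketch of what I am doing (Aluminium-era API; my actual printer extension is elided, with the stock Printer.pp_file standing in for it):

    (* Create a project from a copy visitor, make it current, then
       pretty-print its AST. Printer.pp_file stands in for my
       extended printer. *)
    let () =
      let prj =
        File.create_project_from_visitor "my_project"
          (fun prj -> new Visitor.frama_c_copy prj)
      in
      Project.set_current prj;
      Printer.pp_file Format.std_formatter (Ast.get ())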
EDIT: Sorry, I should have made this clearer in the beginning, but I'm not actually making changes to the behavior of a function. I'm reading in the behavior of a function, then based on that, I'm trying to generate as output valid C code that's meant to be read as input by another program.
Regarding Ast.get, the only reason I was avoiding it is that it sounds like it gets the entire AST, while I'm only interested in part of it, i.e. behaviors. But if I'm just making things harder for myself by avoiding it, then I'll go ahead and use it.
Well, the question is: does some app or language exist in which I can write custom syntax rules to check files?
You know, when we work in different places, with different people and projects, each one has different rules for how to write: code style and all those things. The idea is to be able to check all of these automatically, because, at least for me, I usually forget something.
Ideally some app without a heavy GUI; I'm thinking of a terminal app, or editors like gedit. Please avoid apps like Eclipse and similar.
For now I only need to check simple things. If you can recommend both a simple/limited app and a complex/full-featured one, that would be great.
Obviously, if a simple yet full-featured app exists, even better.
Thanks.
If what you're looking for is a program that rewrites source code according to a specific coding style, I advise you to take a look at GNU Indent.
If you want to do more complex operations, like building an AST and working on it to add things, edit, or check for existing dependencies, you'll want to use a tool like Flex/Bison, Clang, Pyrser, etc.
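If your checks stay simple and line-based, you can also roll your own in a few lines. Here is a sketch in TypeScript; the rules are invented examples, so adapt them to whatever conventions you need to enforce:

    import { readFileSync } from "fs";

    // Invented example rules: each one flags a violation when its
    // pattern matches a line.
    const rules = [
      { name: "no tabs", pattern: /\t/ },
      { name: "line longer than 100 chars", pattern: /^.{101,}/ },
      { name: "trailing whitespace", pattern: /[ \t]+$/ },
    ];

    // Usage: ts-node check.ts file1.c file2.c ...
    for (const file of process.argv.slice(2)) {
      readFileSync(file, "utf8").split("\n").forEach((line, i) => {
        for (const rule of rules) {
          if (rule.pattern.test(line)) {
            console.log(`${file}:${i + 1}: ${rule.name}`);
          }
        }
      });
    }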
I have a need to have a PDF document that has a very specific format. I have data that is in a Meteor 1.0 application stored in MongoDB. How can I use a LaTeX template in place of an HTML template?
I have a number of Meteor packages where I have repackaged JS libraries and written complete packages. I know how to do that.
I don't wish to use HTML for the output because I need the output to be very exacting. I can achieve that with LaTeX. What I am unsure of is how to use LaTeX as the template and inject data into the document before processing and ultimately printing.
Meteor is at 1.0.
In this version things changed a lot; you have to choose the packages that you want to use.
According to the documentation:
Since the parts of the stack integrate seamlessly, if you don't want to think about how it all works, you don't have to. You don't have to understand that the platform is made up of Blaze, Tracker, DDP, Livequery, and Isobuild, or how these pieces fit together. But if you want to dive in and learn how the parts work, you can do that too, because they are independent projects.
So you can code a Meteor application without using Blaze. Check the Meteor website for more information on this: https://www.meteor.com/projects
If you want to use Blaze (the usual choice), you can get to know it better at https://docs.meteor.com/#/full/blaze,
and from there you can adapt an existing LaTeX-to-HTML converter or create the LaTeX template dynamically.
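As a sketch of the template-injection part, independent of Blaze: treat the .tex file as a string template on the server, substitute your data, and shell out to pdflatex. The placeholder syntax, file names, and data shape below are invented, and a LaTeX installation on the server is assumed:

    import { readFileSync, writeFileSync } from "fs";
    import { execFileSync } from "child_process";

    // Hypothetical data; in Meteor this would come from a MongoDB
    // collection inside a server method, e.g. Invoices.findOne(id).
    const data: Record<string, string> = { name: "Jane Doe", total: "42.00" };

    // invoice.tex contains markers like \VAR{name}; any marker that
    // cannot collide with real LaTeX will do.
    let tex = readFileSync("invoice.tex", "utf8");
    for (const [key, value] of Object.entries(data)) {
      tex = tex.replace(new RegExp(`\\\\VAR\\{${key}\\}`, "g"), value);
    }

    writeFileSync("invoice-filled.tex", tex);
    // Produces invoice-filled.pdf next to the .tex file.
    execFileSync("pdflatex", ["-interaction=nonstopmode", "invoice-filled.tex"]);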
First of all, I will explain what I would like to do here: given a big C program, I would like to output a list of producers/consumers for a piece of data, and a list of calling/called-by functions for the function where this data lives.
To do this, I am thinking about reusing, in my own plugin, what some Frama-C modules compute, like dataflow.ml or callgraph.ml.
However, reading the plugin developer documentation, I can't manage to see how to get access to the data of those modules.
is a "open.cyl_type" sufficient here in my own plugin?
Moreover, here are my other questions:
I tried using the PDG plugin for my purposes, but when I call it, it just says "pdg graph computed". How can I access the graph?
Is there anything documenting the "impact" plugin more deeply than the official web page, i.e. how it works fundamentally? (I have to say that I'm in a pre-project phase; I installed Frama-C with apt-get on Ubuntu and did not get a working impact plugin. I'll see about compiling from the sources.)
By the way, do you think I'm using the right method to achieve my goals?
Your question is quite unclear, and this answer is thus very generic. As mentioned in the developer documentation, there are two main classes of plugins: static plugins, compiled with the kernel, whose API is exposed in a module (usually of the same name as the plugin) inside Db; and dynamic plugins, such as Semantic_callgraph, which register their entry points dynamically through the Dynamic module.
If you do make doc in the Frama-C sources (I'm not sure that there is a corresponding package in Ubuntu), you can access the documentation for the Db module in FRAMAC_SOURCE_DIR/doc/code/html/Db.html and the list of functions registered by dynamic plugins in FRAMAC_SOURCE_DIR/doc/code/dynamic_plugins/Dynamic_plugins.html.
I think that, following Virgile's advice, you should get the source code anyway, because you will most of the time need to browse the code to find what you are looking for. Besides, you can have a look at the hello_world plug-in (in src/dummy/hello_world) to have an example of a very simple plug-in. You can also find some examples on my web site at https://anne.pacalet.fr/Notes/doku.php?id=notes:0061_frama_c_scripts to find out how to have access to some information in the AST.
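To make the Db route a bit more concrete, here is a minimal plugin sketch in OCaml. The names follow the Aluminium-era kernel API (Db.Main, Db.Pdg, Globals.Functions) and may have changed in later releases, so treat it as an illustration rather than a reference:

    (* Minimal plugin sketch: register an entry point via Db.Main and
       force the PDG computation for every function. The resulting
       graphs could then feed a producer/consumer analysis. *)
    let run () =
      Globals.Functions.iter (fun kf ->
        let _pdg = !Db.Pdg.get kf in
        Kernel.feedback "PDG computed for %a" Kernel_function.pretty kf)

    let () = Db.Main.extend run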
I have a rather big library with a significant set of APIs that I need to expose. In fact, I'd like to expose the whole thing. There is a lot of namespacing going on, like:
FooLibrary.Bar
FooLibrary.Qux.Rumps
FooLibrary.Qux.Scrooge
..
Basically, what I would like to do is make sure that the user can access that whole namespace. I have had a whole bunch of trouble with this, and I'm totally new to Closure, so I thought I'd ask for some input.
First, I need closurebuilder.py to send the full list of files to the Closure Compiler. This doesn't seem supported: --namespace Foo does not include Foo.Bar, and --input only allows a single file, not a directory. Nor can I simply send my list of files to the Closure Compiler directly, because my code also requires things like "goog.asserts", so I do need the resolver.
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that goog.require()'s everything. Surely that can't be right?
This is my main issue.
However, the Closure Compiler, with ADVANCED_OPTIMIZATIONS on, will then optimize all these names away. I can fix that by adding "@export" all over the place, which I am not happy about, but it should work. I suppose it would also be valid to use an extern here. Or I could simply disable advanced optimizations.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
Finally, for working in source mode, I need to do goog.require() for every namespace I am using. This is merely an inconvenience, though I mention it because it is somewhat related to my trouble above. I would prefer to be able to do:
goog.requireRecursively('FooLibrary')
in order to pull all the child namespaces as well; thus, recreating with a single command the environment that I have when I am using the compiled version of my library.
I feel like I am possibly misunderstanding some things, or how Closure is supposed to be used. I'd be interested in looking at other Closure-based libraries to see how they solve this.
You are discovering that Closure-compiler is built more for the end consumer and not as much for the library author.
If you are exporting basically everything, then you would be better off with SIMPLE_OPTIMIZATIONS. I would still highly encourage you to maintain compatibility of your library with ADVANCED_OPTIMIZATIONS so that users can compile the library source with their project.
First, I need closurebuilder.py to send the full list of files to the closure compiler. ...
In fact, the only solution I can see is having a FooLibrary.ExposeAPI JS file that #require's everything. Surely that can't be right?
You would need to specify a --root pointing at your source folder and specify the namespaces of the leaf nodes of your file dependency tree. You may have better luck with the now-deprecated CalcDeps.py script. I still use it for some projects.
What I can't do, apparently, is say "export FooLibrary.*". Wouldn't that make sense?
You can't do that because it only makes sense based on the final usage. You as the library writer wish to export everything, but perhaps a consumer of your library wishes to include the source (uncompiled) version and have more dead code elimination. Library authors are stuck in a kind of middle ground between SIMPLE and ADVANCED optimization levels.
What I have done for this case is maintain a separate exports file for my namespace that exports everything. When compiling a standalone version of my library for distribution, the exports file is included in the compilation. However, I can still include the library source (without the exports) in a project and get full dead-code elimination. The work/payoff balance of this, though, must be weighed against just using SIMPLE_OPTIMIZATIONS for the standalone library.
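As a hypothetical illustration using the namespaces from the question (goog.exportSymbol and goog.exportProperty are the standard Closure Library helpers for pinning names):

    // exports.js -- compiled in only for the standalone distribution
    // build. Exported names survive ADVANCED_OPTIMIZATIONS renaming;
    // the FooLibrary names and the frob method are hypothetical.
    goog.require('FooLibrary.Bar');
    goog.require('FooLibrary.Qux.Rumps');

    goog.exportSymbol('FooLibrary.Bar', FooLibrary.Bar);
    goog.exportSymbol('FooLibrary.Qux.Rumps', FooLibrary.Qux.Rumps);
    goog.exportProperty(FooLibrary.Bar.prototype, 'frob',
        FooLibrary.Bar.prototype.frob);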
My GeolocationMarker library has an example of this strategy.