Input parameters when generating HTML file? - aglio

I'm currently working with aglio to generate API specs for a few different service groups. The issue I'm facing now is that I want to deploy the spec to multiple environments (for different consumer groups), so the displayed base URLs need to be different.
Is there any way to pass in a base URL for each HTML file generation?

You could make use of the include feature <!-- include(OtherFile.md) -->.
In my case, I use a number of files:
one top-level file (e.g. V1.md), containing the Metadata and an overall introduction to the API
one file per resource (e.g. AuthResource.md, UserResource.md), where I document only the functionality of that resource
The top-level file has an include-statement for each resource-file. In Aglio, I select only the top-level file.
In your case, you might add customer-specific or environment-specific files on top of the shared content. Move the Metadata into the customer-specific files and have each of them include the top-level file. You can then render each customer-specific file into HTML, as in the sketch below.
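A minimal sketch of that setup (file names and URLs hypothetical): each environment gets a thin wrapper file that carries the Metadata, including the HOST base URL that aglio displays, and pulls in the shared content. For example, Staging.md:

FORMAT: 1A
HOST: https://staging.api.example.com

<!-- include(V1.md) -->

Render each wrapper separately, e.g. aglio -i Staging.md -o staging.html, and keep the Metadata section out of V1.md itself so the wrappers are the only place the base URL is defined.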

What's the difference between the include and import statements in NETCONF (.yin/.yang files)?

I understand that you can create a separate YANG file (something like a textual convention to store syntax values for MIBs) and import it into another YANG file to make the data more organised and structured, but I can't seem to understand what the include statement does differently.
Does it "import" the entire file into the file that's including it - and if so would this be read before the file including it...?
Please help :)
YANG relies heavily on a concept known as "namespaces", which stems from XML naming conventions. Each namespace has a unique resource identifier and allows definitions (in different namespaces) to have the same names at the same definition levels while avoiding name clashes. When you define a YANG module, you are actually defining a namespace.
An import statement is used to access definitions from a foreign namespace (another module), while an include statement introduces a mechanism that allows a single namespace (a single module) to be logically split into several files, conveniently named the module and its submodules. For includes, there is always exactly one module file, which includes all the submodule files that belong to it. A submodule may only belong to a single module and may not be imported (directly). To an importing module, a module that includes submodules looks like a single entity.
Submodules may include each other, but as of YANG version 1.1 this has become unnecessary, since a submodule immediately gains access to all definitions in all submodules and in the module they belong to. In YANG version 1, you had to explicitly include a submodule to use its definitions in another submodule, and you could never access definitions in the module to which they belonged.
An import does not "inline" definitions into the importing module, while an include does exactly that. Importing a module gives you access to its top-level definitions (typedefs, groupings, identities, features and extensions) and allows you to use schema node identifiers that identify nodes in the imported module (for the purposes of augmentation and deviation, for example).
Definitions from a foreign namespace are always accessed via a prefix, which is defined as part of the import statement. Definitions that come from includes do not need to be prefixed when used; if they are, they are prefixed with the including module's (or submodule's) own prefix.
YANG "compilers" usually process these files when they hit either an import or an include statement. They need to process them in order to be able to resolve definitions in body statements of the defining module. That is why these statements are required to appear in a module's header section.
There is an entire section of the YANG specification dedicated to modules and submodules, where you can read more on the topic.
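As a minimal sketch of how the two statements look side by side (module, submodule and typedef names here are hypothetical; ietf-inet-types is a real standard module):

module acme-system {
  yang-version 1.1;
  namespace "urn:example:acme-system";
  prefix acme;

  // import: definitions from the foreign namespace must be used via the "inet" prefix
  import ietf-inet-types {
    prefix inet;
  }

  // include: the submodule's definitions are inlined into this namespace
  include acme-types;

  container system {
    leaf host { type inet:host; }    // imported definition, prefixed
    leaf role { type admin-role; }   // included definition, no prefix required
  }
}

submodule acme-types {
  yang-version 1.1;
  belongs-to acme-system {
    prefix acme;
  }

  typedef admin-role {
    type string;
  }
}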

Possible to use .zip file with multiple .csv files?

Is it possible, using U-SQL, to unzip a zip archive containing multiple .csv files and process them?
Each file has a different schema.
So you've got two problems here:
Extract from a ZIP file.
Deal with the varying contents of the inner files.
To answer your question: is it possible? Yes.
How? You'd need to write a user-defined extractor to do it.
First check out the MSDN extractors page:
https://msdn.microsoft.com/en-us/library/azure/mt621320.aspx
The extractor class needs to inherit from IExtractor, with methods that iterate over the archive contents.
Then, to output each inner file in turn, pass its file name to the extractor so you can define the columns for each dataset.
Source: https://ryansimpson.net/2016/10/15/query-zipfile-adla/
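Sketched roughly (class name and inner file name are hypothetical, and this is an untested illustration of the general IExtractor pattern rather than the linked post's exact code):

using System.Collections.Generic;
using System.IO;
using System.IO.Compression;
using Microsoft.Analytics.Interfaces;

[SqlUserDefinedExtractor(AtomicFileProcessing = true)]  // the ZIP must be read as one unit
public class ZipCsvExtractor : IExtractor
{
    private readonly string fileName;

    public ZipCsvExtractor(string fileName)  // name of the inner CSV to extract
    {
        this.fileName = fileName;
    }

    public override IEnumerable<IRow> Extract(IUnstructuredReader input, IUpdatableRow output)
    {
        using (var archive = new ZipArchive(input.BaseStream, ZipArchiveMode.Read))
        using (var reader = new StreamReader(archive.GetEntry(this.fileName).Open()))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                var fields = line.Split(',');
                for (int i = 0; i < fields.Length; i++)
                    output.Set<string>(i, fields[i]);  // columns come from the EXTRACT schema
                yield return output.AsReadOnly();
            }
        }
    }
}

You would then call it once per inner file, each time with that file's schema, along these lines:

@orders =
    EXTRACT id string,
            total string
    FROM "/input/archive.zip"
    USING new ZipCsvExtractor("orders.csv");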
Another option would be to use Azure Data Factory to perform the unzip operation in a custom activity and output the CSV contents to ADL Store. This would involve some more engineering, though, plus an Azure Batch service.
Hope this helps.

Visualization of xsd Dependencies?

I have a bunch of XSD Files which I did not write myself. The files sometimes import each other:
<xs:import namespace="http://www.mysite.com/xmlns/xXX-YYYY/V" schemaLocation="http://www.mysite.com/xmlns/xXX-YYYY/V/schema_A.xsd"/>
and I would like to get an overview of the dependencies without having to read through all of them.
The URI specified by schemaLocation does not exist; instead, a catalog.xml file is used to resolve the schema locations.
http://de.wikipedia.org/wiki/XML_Catalogs
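Roughly, the catalog maps each such schemaLocation URI to a local copy, along these lines (local paths shortened here):

<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
  <uri name="http://www.mysite.com/xmlns/xXX-YYYY/V/schema_A.xsd"
       uri="schemas/schema_A.xsd"/>
</catalog>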
Can anybody recommend a tool that can visualize the dependencies of my schemas by also processing the information given in the catalog.xml file?
Thanks
Mischa
To follow up on my comment...
I am not aware of any tool that takes into account OASIS catalog files. Have a look at this response, see if it supports what you need (and your platform).
Strictly speaking, there are a number of issues with dependency diagrams, which is why such a question should be qualified with why you want one.
Some think that such a diagram truly shows the dependencies between XSD files; it does not: it may show what the author thinks the dependencies are, but not what the processor actually agrees to. schemaLocation is just a hint that processors may or may not use: they may ignore it if instructed otherwise (well-known XSDs could be cached internally, through catalog entries or any other proprietary "catalogs"), or because the processor decides there is no need to load an external reference when there is no use for it anyway (this can happen in some corner cases).
A diagram built from explicit schema locations is definitely easier to produce. It only shows what the author intended, which doesn't mean it is the "real" one (content may be pulled in indirectly, which makes the whole XSD set valid while individual XSDs, opened independently of the set, would be invalid).
Trying to build a diagram where dangling or non-existent schemaLocation values are overridden through a catalog is much harder, due to the multitude of ways to structure the content and the resolution mechanism. It has the same shortcoming as the approach above, except that now the author is that of the catalog file rather than whoever authored the XSDs.
The "true" dependency can be built by traversing a schema set already loaded and compiled. Even then, you would still need to define criteria regarding dependencies due to substitutable components (elements in substitution groups or derived types, through the use of the xsi:type attribute). That is even harder.
Take a look at this tool: DocFlex/XML XSDDoc.
It is an XML schema documentation generator.
It doesn't visualize xsd dependencies, but it does work with XML catalogs.
The overview of each XSD file lists all other XSD files referenced from it (i.e. imported, included or redefined). There is also the opposite list of those schemas that reference the given one. So, you can use it to figure out which XSD files depend on which. At least, that will be easier than reading raw XSD files.
As an example, here is documentation generated with that tool: XML Schemas for DITA 1.1. It was generated essentially from two files:
http://docs.oasis-open.org/dita/v1.1/OS/schema/ditaarch.xsd
http://docs.oasis-open.org/dita/v1.1/OS/schema/catalog.xml
ditaarch.xsd is the schema driver that pulls all other schemas (25 in total); catalog.xml is the XML catalog, via which all file references are resolved.
The schemaLocation attributes in those schemas themselves specify just opaque URIs.

VS2010 Local Resource Suffix Issue

The resource file generated by Tools -> Generate Local Resources creates keys with the suffix "Resource1".
Is there a way to get rid of the suffix "Resource1" and make it use the exact control name for the resource key?
It's described in this issue. The Resource suffix is there to help prevent name clashes between controls. Without it, things would break in some circumstances.
Is it purely the code generation you want to customize? You could always use a Custom Resource Manager to remap the resource keys to your own convention (without the suffix). It does mean creating your own implementation to pull the resources out of the RESX, but I've done this in the past with some help (copy/paste) from Reflector.
It would allow you to use shortcuts (no suffix) in your syntax when referring to resources, but it wouldn't affect the code-gen side of things. A find-and-replace fixes that, or a custom tool.
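A minimal sketch of the remapping idea, assuming a plain ResourceManager subclass fits your setup (class name hypothetical, and not the exact code referred to above):

using System.Globalization;
using System.Reflection;
using System.Resources;

public class SuffixMappingResourceManager : ResourceManager
{
    private const string Suffix = "Resource1";

    public SuffixMappingResourceManager(string baseName, Assembly assembly)
        : base(baseName, assembly)
    {
    }

    // Look up the short key first ("ButtonSave"), then fall back to the
    // generated suffixed key ("ButtonSaveResource1").
    public override string GetString(string name, CultureInfo culture)
    {
        return base.GetString(name, culture) ?? base.GetString(name + Suffix, culture);
    }
}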

Extract zip file in flex

Is there any method to extract zip files and maintain the same folder structure in the output folder? I am able to extract the zip file and its inner files, but not the folders within the zip file, and thus fail to maintain the folder structure as well.
Yes.
There is an AS3 component that allows you to read and write data from zip files, and the demo also shows that it is possible to see the folder structure.
http://www.nochump.com/blog/?p=15
I have not used this component myself, but if I am correct in assuming that you are making an AIR application, then this component may automatically generate folders; otherwise you can use the file system API to create the correct folders yourself.
Good Luck!
You may also want to check out fzip:
http://codeazur.com.br/lab/fzip/
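A rough AIR-based sketch using FZip (archive path and output folder hypothetical, untested): the filenames inside the archive keep their relative paths, so you can recreate the folders before writing each entry.

import deng.fzip.FZip;
import deng.fzip.FZipFile;
import flash.events.Event;
import flash.filesystem.File;
import flash.filesystem.FileMode;
import flash.filesystem.FileStream;
import flash.net.URLRequest;

var zip:FZip = new FZip();
zip.addEventListener(Event.COMPLETE, onZipLoaded);
zip.load(new URLRequest("app:/archive.zip"));

function onZipLoaded(e:Event):void {
    var outDir:File = File.applicationStorageDirectory.resolvePath("extracted");
    for (var i:uint = 0; i < zip.getFileCount(); i++) {
        var entry:FZipFile = zip.getFileAt(i);
        // Directory entries end with "/" and carry no content.
        if (entry.filename.charAt(entry.filename.length - 1) == "/") continue;
        var dest:File = outDir.resolvePath(entry.filename); // keeps the relative path
        dest.parent.createDirectory(); // creates any missing parent folders
        var stream:FileStream = new FileStream();
        stream.open(dest, FileMode.WRITE);
        stream.writeBytes(entry.content);
        stream.close();
    }
}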
