I am not sure whether this is the appropriate forum, but we are having some issues with the recommended CI/CD flow for Azure Data Factory, which is forcing us to create our own script that deploys ADF resources through the ADF REST APIs instead of using the auto-generated ARM templates.
Before undertaking this work, we wanted to clarify a few assumptions about ARM template deployment for ADF resources. Are all of the following assumptions true?
ARM template deployment for ADF resources simply calls the ADF REST APIs to deploy resources, and therefore has the same limitations as calling the REST APIs ourselves.
ARM template deployment for ADF does not perform any optimizations before calling the REST APIs, such as reading the current definition of a resource before writing and only writing if the definition has changed.
Are there any other ARM limitations or optimizations we should be aware of in order to make sure our own script performs as well as an ARM template deployment?
As you know, an ARM template is a JavaScript Object Notation (JSON) file that defines the infrastructure and configuration for your project. The template uses declarative syntax: you describe your intended deployment without writing the sequence of programming commands needed to create it.
ARM has its own benefits over the REST APIs, such as incremental and complete deployment modes, which are not available when calling the REST APIs directly.
David Gaspard: Coming to your question, an ARM template is a collection of JSON definitions in which you can combine multiple REST API calls into a single module and reuse it for deployments. It also allows you to use variables. It depends on what best suits your requirements.
You can use incremental deployment mode in ARM, which takes care of some of that optimization.
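For what it's worth, a template deployment is itself just a PUT against the Resource Manager deployments endpoint. A minimal sketch of that call (in Java purely for illustration; the subscription, resource group, token handling, and the exported ADF template file name are placeholders/assumptions) looks roughly like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class ArmDeploymentSketch {
    public static void main(String[] args) throws Exception {
        // Placeholders: substitute your own values and token acquisition.
        String subscriptionId = "<subscription-id>";
        String resourceGroup  = "<resource-group>";
        String deploymentName = "adf-ci-deployment";
        String bearerToken    = "<Azure AD access token>";

        // The ARM template exported by ADF (file name assumed from the publish branch).
        String template = Files.readString(Path.of("ARMTemplateForFactory.json"));

        // "Incremental" leaves resources not in the template untouched;
        // "Complete" would delete them. A "parameters" object can be added alongside "template".
        String body = "{\"properties\":{\"mode\":\"Incremental\",\"template\":" + template + "}}";

        String url = "https://management.azure.com/subscriptions/" + subscriptionId
                + "/resourcegroups/" + resourceGroup
                + "/providers/Microsoft.Resources/deployments/" + deploymentName
                + "?api-version=2021-04-01";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Bearer " + bearerToken)
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```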
A few references that might help:
https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/deploy-rest
Related discussion: Azure ARM Templates and REST API
ARM limitations: https://learn.microsoft.com/en-us/azure/azure-resource-manager/templates/best-practices
I'm starting to develop Azure Blueprints and I can see that the structure of an ARM template artifact is different from the one used in a regular ARM deployment. I like to modularize code and am trying to figure out how to properly develop individual ARM templates and then incorporate them into the final blueprint. Right now, instead of putting the ARM artifact directly into the blueprint (along with 100 others), I manually debug the ARM template and then cut and paste it into the artifact. I'm wondering whether there is a more effective way of doing that, or am I missing something? Based on the documentation, the suggestion seems to be to incorporate templates directly into artifacts and then deploy/publish/assign the blueprint, which takes far too long when you just need to work on a single ARM template.
An effective, dynamic, and automated way to do this is to leverage the Blueprints as Code repository for managing and configuring the lifecycle of your blueprints, which reduces the effort compared to managing them through the portal.
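As a rough illustration of what that repository automates, an individually developed ARM template can be pushed into a blueprint as a template artifact through the Blueprints REST API instead of being pasted into the portal. A minimal sketch (Java used for illustration only; the scope, names, resource-group placeholder, and api-version are assumptions based on the preview Blueprints API):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;

public class BlueprintArtifactSketch {
    public static void main(String[] args) throws Exception {
        String scope       = "subscriptions/<subscription-id>";   // or a management group scope
        String blueprint   = "<blueprint-name>";
        String artifact    = "network-template";                  // hypothetical artifact name
        String bearerToken = "<Azure AD access token>";

        // The ARM template you develop and debug on its own becomes the artifact body;
        // "rg1" refers to a resource group placeholder defined in the blueprint.
        String template = Files.readString(Path.of("network.json"));
        String body = "{\"kind\":\"template\",\"properties\":{\"resourceGroup\":\"rg1\","
                + "\"template\":" + template + "}}";

        String url = "https://management.azure.com/" + scope
                + "/providers/Microsoft.Blueprint/blueprints/" + blueprint
                + "/artifacts/" + artifact + "?api-version=2018-11-01-preview";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder()
                        .uri(URI.create(url))
                        .header("Authorization", "Bearer " + bearerToken)
                        .header("Content-Type", "application/json")
                        .PUT(HttpRequest.BodyPublishers.ofString(body))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```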
Other related references:
Functions for use with Blueprints
Blueprints REST API Reference
Blueprints Az PowerShell Reference
I recently started a side project. It was supposed to be a virtual recipe book with the capability to store and retrieve recipes (CRUD), rate them, and search through them. This is nothing new, but I wanted to build it as a desktop application to learn more about databases, unit testing, UIs, and so on. Now that the core domain is pretty much done (I use a DDD approach) and I have implemented most of the CRUD repositories, I want to make this a bit more extensible by hosting the core functionality online, so I am able to write multiple applications on top of it (desktop application, web application, web API, etc.).
Service-oriented architecture (or microservices) sounds like a good approach for that. The problem I am facing is how to decide which parts of my project belong in a separate service, and how to name them.
Take the following parts of the project:
Core domain (Aggregates, Entities, Value Objects, Logic) -> Java
Persistence (DAOs, Repositories, multiple Database backend implementations) -> Java
Search (Search Services which use SQL queries on the persistence DB for searching) -> Java
Desktop Application -> JS (Electron) or JavaFX
Web Application -> Flask or Rails
Web API (Manage, Rate, Search for recipes using REST) -> ?
My initial approach would be to put the core domain, the persistence, the search, and the web API into a single sub-project and host that whole stack on Heroku or something similar. That way my clients could consume the web interface. The desktop and web apps would be separate projects of their own. The desktop app could share the core domain if both are written in Java.
Is this a valid approach, or should I split the first service into smaller parts? And how would you name these services?
Eric Evans answered your question at the GOTO 2015 conference (https://youtu.be/yPvef9R3k-M), and I agree with him 100%: a microservice's scope should be one (or maybe a few) Bounded Context(s), including its supporting classes for persistence, the REST/HTTP API, etc.
As I understand it, a microservice is a deployment wrapper around a Bounded Context, adding isolation, scaling, and resilience.
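As a rough illustration (all names below are invented for the example, not taken from your project), the "recipes" bounded context deployed as one service would keep its aggregate, its persistence port, and its application service together in a single deployable, and everything outside the context would talk to it only over HTTP:

```java
// One deployable per bounded context: the aggregate, its persistence port,
// and the application service behind the HTTP endpoint travel together.
// All names are illustrative.
package recipes;

import java.util.List;
import java.util.Optional;
import java.util.UUID;

// Aggregate root of the "recipes" bounded context.
class Recipe {
    final UUID id;
    final String title;
    final List<String> ingredients;
    int rating;

    Recipe(UUID id, String title, List<String> ingredients) {
        this.id = id;
        this.title = title;
        this.ingredients = List.copyOf(ingredients);
    }
}

// Persistence port; the concrete adapter (JPA, JDBC, in-memory, ...) also
// lives inside this service rather than in a shared "persistence" service.
interface RecipeRepository {
    Optional<Recipe> findById(UUID id);
    void save(Recipe recipe);
    List<Recipe> search(String keyword);
}

// Application service called by this service's own REST layer; other
// services or frontends use the HTTP API, never these classes directly.
class RecipeService {
    private final RecipeRepository repository;

    RecipeService(RecipeRepository repository) {
        this.repository = repository;
    }

    Optional<Recipe> rate(UUID id, int stars) {
        Optional<Recipe> recipe = repository.findById(id);
        recipe.ifPresent(r -> {
            r.rating = stars;   // the domain's rating rules would live here
            repository.save(r);
        });
        return recipe;
    }
}
```

The desktop and web apps would then be separate projects that only consume that HTTP API.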
As you wrote, you haven't applied Strategic Design to define your bounded contexts yet, so it's time to do that before tearing the app into parts.
To everyone who took the time to read my question: I want to point out that I'm writing integration tests, NOT unit tests.
I'm using the definition of an integration test provided by the sites listed at the bottom of the question:
Integration tests do not use mock objects to substitute implementations for service dependencies. Instead, integration tests rely on the application's services and components. The goal of integration tests is to exercise the functionality of the application in its normal run-time environment.
My question is: what is the best practice for writing integration tests for an ASP.NET Web API? At the moment I'm using the in-memory hosting approach from Filip W.'s blog post.
My second question is: how do you ensure that your test data exists and is correct when you're not mocking (MSDN and other sites clearly say that integration tests do not mock the database)? The internet is filled with examples of how to write extremely simple integration tests, but has almost no examples for a more complex API (anything that goes further than returning 1).
Reference Sites:
https://msdn.microsoft.com/en-us/library/ff647876.aspx
https://msdn.microsoft.com/en-us/library/vstudio/hh323698(v=vs.100).aspx
http://www.codeproject.com/Articles/44276/Unit-Testing-and-Integration-Testing-in-Business-A
http://blog.stevensanderson.com/2009/06/11/integration-testing-your-aspnet-mvc-application/
Filip W.'s in-memory hosting:
http://www.strathweb.com/2012/06/asp-net-web-api-integration-testing-with-in-memory-hosting/
Have you seen my answer over at this other SO question here? I will pad this out with the additional information below.
In our release pipeline (using Visual Studio Release Management 2013) we provision a nightly integration database from a known test script, creating the database from scratch (fully scripted). Initially we cloned production, but as the data grew this became too time-consuming as part of the nightly integration build. After the database is provisioned we do the same with the integration VM web servers and deploy the latest build to that environment. Once these come up, we run our unit tests again from the command line as part of the release pipeline, this time including the tests decorated with the custom action filter I described in the linked answer.
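The tests that run at that point all follow the same shape; here is a minimal sketch of the pattern in Java/JUnit purely for illustration (the original setup is .NET, and the base URL and seeded record below are assumptions: they come from the provisioning scripts, not from the test itself):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Runs against the freshly provisioned integration environment, no mocks:
// the database content is guaranteed by the nightly provisioning scripts.
class OrdersApiIntegrationTest {

    // Hypothetical URL of the integration web server deployed by the pipeline.
    private static final String BASE_URL = "http://integration-web01/api";
    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void knownSeededOrderIsReturned() throws Exception {
        // Order 42 is inserted by the database seed script, so the test can rely on it.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(BASE_URL + "/orders/42"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("\"id\":42"));
    }
}
```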
Can anyone explain Java callouts? A little help will do. Actually, I have several doubts regarding where to add the expressions and message-flow JARs, and where to add my custom JAR.
Can I access the resources/java folder directly, and can I use it to store my data?
First, check the Apigee docs at
Customize an API using Java
http://apigee.com/docs/api-services/content/customize-api-using-java
Keep in mind that Java callouts are only supported in the paid Apigee Edge product, not the free developer platform.
As you decide how to use Java, you should consider this basic hierarchy of policy management:
Policy configuration first: Apigee policy configurations are in broad use, therefore tested daily by clients, and the most performant option.
JavaScript callout: for things you can't do in a standard policy, there is JavaScript. Keep in mind this is "compiled JavaScript", which means that when you deploy your project the JS gets interpreted by the Java Rhino engine and then runs like native code. Very fast, very scalable, and very easy to manage, as your code is all in plain text files.
Java: you have to have a pretty compelling reason to use Java. The most common cases are complex connections that need to be negotiated with custom encryption schemes, or manipulating binary content. While performant, it's the most difficult code to manage (you upload compiled JARs, so if someone takes over your work, the source code lives in a separate place from your deployment bundle), and it's the most difficult to debug in the event of a failure.
To your specific question: all Apigee variables are available in Java, and Java gives you pretty much god-like powers on the local server where the code is executed. Keep in mind that Apigee's physical architecture is distributed: your JAR may run on different servers for different API calls, so any persistent data (that you might want to store locally) should really be put into a Key Value Map and read as needed. Keep your API development as stateless as possible.
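For example, here is a minimal, stateless callout sketch (the flow variable names are invented; the Execution interface and message context are the ones described in the Apigee docs) that reads a flow variable, does its work in memory, and writes the result back as another variable:

```java
package com.example.callout; // hypothetical package

import com.apigee.flow.execution.ExecutionContext;
import com.apigee.flow.execution.ExecutionResult;
import com.apigee.flow.execution.spi.Execution;
import com.apigee.flow.message.MessageContext;

public class StatelessCallout implements Execution {

    @Override
    public ExecutionResult execute(MessageContext messageContext, ExecutionContext executionContext) {
        try {
            // Read a flow variable set earlier in the proxy (name is illustrative).
            String clientId = (String) messageContext.getVariable("request.queryparam.client_id");

            // Do the work in memory and hand the result back as a flow variable;
            // anything that must persist across calls belongs in a Key Value Map,
            // not on the local message processor.
            messageContext.setVariable("callout.normalized_client_id",
                    clientId == null ? "" : clientId.trim().toLowerCase());

            return ExecutionResult.SUCCESS;
        } catch (Exception e) {
            messageContext.setVariable("callout.error", e.getMessage());
            return ExecutionResult.ABORT;
        }
    }
}
```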
Hope that helps.
I want to create a web application where some of the functionality needs to be platform independent. For that I want to create a Java API, but I am confused: can the same thing be done with a JAR that implements that functionality?
Those are two completely different things and cannot be compared the way you did.
An API is an Application Programming Interface, so it defines the methods you can use. (wikipedia link)
A JAR is a Java Archive; it is just a packaged Java application. (wikipedia link)
An API, by definition, is just an interface your application/library exposes to other applications so they can take advantage of your functionality. It doesn't impose any particular implementation, and there is no such thing as "a Java API" in that sense (unless you mean you have multiple APIs for different programming languages in the form of wrappers). You can build your API as a regular Java interface and then pack it in a JAR that other Java applications can import and use.
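A tiny sketch of that idea (all names invented for the example): the interface is the API, and the interface plus an implementation are what you package into a JAR that other Java projects put on their classpath.

```java
// GreetingService.java -- this interface is the "API".
package com.example.greeting;

public interface GreetingService {
    String greet(String name);
}
```

```java
// DefaultGreetingService.java -- an implementation shipped in the same (or another) JAR.
package com.example.greeting;

public class DefaultGreetingService implements GreetingService {
    @Override
    public String greet(String name) {
        return "Hello, " + name + "!";
    }
}
```

Both compiled classes end up in something like greeting.jar; the JAR is just the packaging, while GreetingService is the contract (API) that other code programs against.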
A REST API written in Java will return a JSON payload, which can then be read and used by an application written in any programming language. This is the advantage of web services.
A JAR will return Java objects, so it is optimal to use if you will be consuming it exclusively from other Java applications.