I'm wondering whether there are any best practices or recommendations out there on using Application Insights to monitor web jobs. At the moment all my App Service and web job logs go to a single AI instance, and there is a lot of noise in there.
Specifically, should I:
create one separate AI instance shared by all the web jobs,
or create a separate AI instance per web job?
Thanks
You should generally use one instance for each system in your environment, but separate out dev, test, and prod. This makes it simpler to track dependencies as jobs move through the system. So with multiple web apps, you might group into one instance an API, a separate web app that serves the front-end content, and any web jobs that support those two apps. In another you may have just a single web app or web job that acts independently of the rest of your apps.
However, you should choose the number of Application Insights instances that best fits your situation. If it would work better for you to split out each web job, you can certainly do that. You can query across App Insights instances, so you don't completely lose the ability to join data from different services if you choose to split them.
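For example, the cross-resource app() expression lets a single Kusto query union telemetry from several App Insights resources. A rough sketch using the Azure.Monitor.Query SDK; the resource names, workspace ID, and table choice below are placeholder assumptions, not anything from your setup:

```csharp
// Sketch only: unions request telemetry from two hypothetical
// App Insights resources via the cross-resource app() expression.
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;

class CrossAppQuery
{
    static async Task Main()
    {
        var client = new LogsQueryClient(new DefaultAzureCredential());

        // app('<resource-name>') references another App Insights resource;
        // both names here are invented.
        const string kql = @"
            union app('ai-webapp').requests, app('ai-webjobs').requests
            | summarize count() by appName, bin(timestamp, 1h)";

        var result = await client.QueryWorkspaceAsync(
            "<log-analytics-workspace-id>", kql,
            new QueryTimeRange(TimeSpan.FromDays(1)));

        foreach (var row in result.Value.Table.Rows)
            Console.WriteLine(string.Join(" | ", row));
    }
}
```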
Related
We have multiple different apps, each deployed in multiple environments and each monitored by a separate Application Insights instance. For example, 2 web applications, each deployed in dev, test, and prod environments, means 6 different Application Insights instances.
Microsoft wants to migrate Application Insights to workspace-based Application Insights, so I need to create Log Analytics workspace(s). What is the best approach, and why:
Create a single workspace and put all the Application Insights resources into it?
Create a separate workspace for each Application Insights instance?
Something else? (A workspace per application, a workspace per environment...)
[I'm part of the Application Insights team]
Overall the recommendation is to keep the number of workspaces to a minimum unless you need clear separation:
1. Different auth for various workspaces [note: Application Insights scenarios use so-called resource-based auth, i.e. you will still control auth through the Application Insights resources]
2. Different billing quotas
3. Different retention periods
4. Different regions
5. Different environments
This lets you manage a smaller number of resources (workspaces).
So, you should make the decision based on #2-#5 (as mentioned above, auth is not relevant because it will still be controlled through Application Insights).
If you're not using advanced features (different retention periods), then most likely the main driver is the environment, i.e. in your case probably 3 workspaces (dev, test, prod).
As far as I understood from the Application Insights documentation here (and here), it would also be good practice to separate the Log Analytics workspaces (at least) by environment, but you could use any other split or grouping criteria, such as business meaning, correlated data, RBAC policies, or the managing team...
IMHO, in your case I would create 3 workspaces (dev, test, prod) and link each Application Insights resource to its corresponding workspace.
I recently started a side project. It was supposed to be a virtual recipe book with the capabilities to store and retrieve recipes (CRUD), rate them, and search through them. This is nothing new, but I wanted to build it as a desktop application to learn more about databases, unit testing, UIs, and so on. Now that the core domain is pretty much done (I use a DDD approach) and I have implemented most of the CRUD repositories, I want to make this a bit more extensible by hosting the core functionality online, so I am able to write multiple front ends (desktop application, web application, web API, etc.).
Service-Oriented Architecture (or microservices) sounds like a good approach for that. The problem I am facing is how to decide which parts of my project belong in a separate service, and how to name them.
Take the following parts of the project:
Core domain (Aggregates, Entities, Value Objects, Logic) -> Java
Persistence (DAOs, Repositories, multiple Database backend implementations) -> Java
Search (Search Services which use SQL queries on the persistence DB for searching) -> Java
Desktop Application -> JS (Electron) or JavaFX
Web Application -> Flask or Rails
Web API (Manage, Rate, Search for recipes using REST) -> ?
My initial approach would be to put the core domain, the persistence, the search, and the web API into a single sub-project and host that whole stack on Heroku or something similar. That way my clients could consume the web interface. The desktop and web apps would be separate projects of their own. The desktop app could share the core domain if both are written in Java.
Is this a valid approach, or should I split the first service into smaller parts? How would you name these services?
Eric Evans answered your question at the GOTO 2015 conference (https://youtu.be/yPvef9R3k-M), and I 100% agree with him: a microservice's scope should be one (or maybe more) Bounded Context(s), including its supporting classes for persistence, the REST/HTTP API, etc.
As I understand it, a microservice is a deployment wrapper around a Bounded Context that adds isolation, scaling, and resilience.
As you wrote, you didn't apply Strategic Design to define your bounded contexts, so it's time to do that check before tearing the app into parts.
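To make the boundary concrete, here is a minimal sketch of one "Recipes" bounded context deployed as a single service: the domain model, the persistence port, and the HTTP API travel together in one deployable. Your stack is Java, so treat the C# below as purely illustrative; every type and route name is invented:

```csharp
// Illustrative only: one bounded context as a single deployable.
// All names here are made up for the sketch.
using System.Collections.Concurrent;
using Microsoft.AspNetCore.Builder;

public record Recipe(string Id, string Title, int Rating);    // domain model

public interface IRecipeRepository                            // persistence port
{
    Recipe? Find(string id);
    void Save(Recipe recipe);
}

public class InMemoryRecipeRepository : IRecipeRepository     // one adapter
{
    private readonly ConcurrentDictionary<string, Recipe> _store = new();
    public Recipe? Find(string id) => _store.TryGetValue(id, out var r) ? r : null;
    public void Save(Recipe recipe) => _store[recipe.Id] = recipe;
}

public static class Program
{
    public static void Main(string[] args)
    {
        var app = WebApplication.CreateBuilder(args).Build();
        IRecipeRepository repo = new InMemoryRecipeRepository();

        // The HTTP API is just the outer shell of the same bounded context.
        app.MapGet("/recipes/{id}", (string id) => repo.Find(id));
        app.MapPost("/recipes", (Recipe recipe) => { repo.Save(recipe); return recipe; });
        app.Run();
    }
}
```

The point is the boundary, not the framework: if Search later turns out to be its own Bounded Context with its own model, it becomes a second deployable; until then it stays inside this one.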
I want my application to have two parts. One part will simply fetch data in JSON format from an API and store it in a SQL database (or maybe a NoSQL one), and the other half (the web part) will read the data and implement customizable alerts. So basically I need to create a worker for the fetch process, but I'm confused about the difference between a worker role and a web role in Azure. What's the best way to implement this design?
You can just merge both into the same web role: the part of the code running in IIS (the ASP.NET project created when you create a web role from a Visual Studio template) handles web requests, and the part running in the "role entry point" runs the fetch process. Unless you absolutely need to scale them separately, this gives you a simpler and more manageable solution.
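A minimal sketch of that shape, assuming the classic cloud services SDK; the endpoint URL, storage call, and five-minute interval are all placeholders:

```csharp
// Sketch only: the web part of the role keeps serving requests through IIS,
// while this role entry point runs the fetch loop in the same role instance.
using System;
using System.Net;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override void Run()
    {
        using (var client = new WebClient())
        {
            while (true)
            {
                // Placeholder endpoint: pull JSON and hand it to your storage code.
                string json = client.DownloadString("https://example.com/api/data");
                SaveToDatabase(json);                   // your SQL/NoSQL write
                Thread.Sleep(TimeSpan.FromMinutes(5));  // pick a real schedule
            }
        }
    }

    private void SaveToDatabase(string json)
    {
        // Insert into SQL Server or a NoSQL store here.
    }
}
```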
Have you looked at this tutorial? It gives possible use cases and tutorials for both web and worker roles.
http://www.windowsazure.com/en-us/documentation/articles/cloud-services-dotnet-multi-tier-app-storage-1-overview/
Good day.
I'm wondering if Enterprise Library Caching using isolated storage (disk, not DB) can be accessed by multiple apps in IIS? That is, can they all share the same instance of it?
I have various WCF services running on one machine, set up in different web apps (and potentially in different app pools, if that makes a difference). They all need access to a shared cache.
I had been told that this is possible with EntLib, but after doing some reading I'm not entirely sure that's the case. All of the services run under the NETWORK SERVICE user, but since they are all different apps in IIS, does that prevent the sharing? I know having different users certainly would.
So, can the same user use the same cache across multiple apps, or is it limited to within one app?
Any guidance would be appreciated!
If you want to share your cache across several services, it would be better to go with AppFabric Caching. See: http://msdn.microsoft.com/en-us/windowsserver/ee695849.aspx
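For reference, the AppFabric client API looks roughly like this; the sketch assumes a configured cache cluster and a cache named "default", and the key and payload are placeholders:

```csharp
// Sketch assuming an AppFabric cache cluster is installed and the client
// is configured (the dataCacheClient section in app.config/web.config).
using Microsoft.ApplicationServer.Caching;

class SharedCacheExample
{
    static void Main()
    {
        var factory = new DataCacheFactory();           // reads client config
        DataCache cache = factory.GetCache("default");  // named, shared cache

        // Any service in any app pool that points at the cluster
        // sees the same entries.
        cache.Put("user:42", "serialized payload");
        var value = (string)cache.Get("user:42");
    }
}
```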
I ended up not using EntLib for this and just used isolated storage.
In case anybody has the same problem, see the following question, where I posted the code I used, an issue I hit while using it, and the resolution.
Can't share isolated storage file between applications in different app pools
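The author's actual code is in the linked question; as a rough illustration of the general idea, a machine-scoped isolated storage store (rather than a user- or domain-scoped one) is one way files can be visible across applications, subject to the app-pool caveats discussed in that question. A sketch, with the file name and contents made up:

```csharp
// Sketch: machine-scoped isolated storage. A store scoped to the machine
// and assembly can be shared by apps loading the same (strong-named)
// assembly; see the linked question for the pitfalls across app pools.
using System.IO;
using System.IO.IsolatedStorage;

class IsoStoreExample
{
    static void Main()
    {
        using (var store = IsolatedStorageFile.GetMachineStoreForAssembly())
        using (var stream = new IsolatedStorageFileStream(
                   "shared-cache.txt", FileMode.OpenOrCreate,
                   FileAccess.ReadWrite, store))
        using (var writer = new StreamWriter(stream))
        {
            writer.WriteLine("cached value");   // placeholder payload
        }
    }
}
```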
I have downloaded TheWorldsWorstStackOverflowClone. One of the projects is called TheWorldWorsts.ApiWrapper, which is basically the core of accessing the API. There is a class called ApiProxy.cs, which has all the methods for the API calls. This is good.
Now what I want to do is collect data from this API and store it in a database. I know the limit on API calls is 10k per day, i.e. I want to be able to call the methods in the ApiProxy class up to 10k times per day, automatically. How can I do this?
The non-automatic way would be to create a dummy site that runs the whole process every time I access it, but that is not efficient. It seems that I would have to write some kind of scheduler by deploying a web service, but that is too complicated... as explained here. Are there any simpler methods?
A Windows Service or Desktop App might be a better solution than a web application. You are not deploying a web service, you are consuming one using a proxy class, and this does not require you to have a web server or a web site.
You could use a web application to control and monitor progress as your service downloads data, but the actual work is long-running and needs to be offloaded to another process or thread so you can tell the user what's going on.
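As a sketch of that shape, here is a Windows Service with a timer spaced so that 10k calls fit in a day; the ApiProxy calls are commented out because the wrapper's exact method names aren't shown in the question:

```csharp
// Sketch of a Windows Service that polls on a timer. 10,000 calls/day works
// out to one call roughly every 8.64 seconds; adjust to stay under the limit.
using System;
using System.ServiceProcess;
using System.Timers;

public class FetchService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        _timer = new Timer(TimeSpan.FromDays(1).TotalMilliseconds / 10000);
        _timer.Elapsed += (sender, e) => FetchOnce();
        _timer.Start();
    }

    private void FetchOnce()
    {
        // Call the wrapper and persist the result; both lines are placeholders:
        // var data = new ApiProxy().SomeApiMethod(...);
        // SaveToDatabase(data);
    }

    protected override void OnStop()
    {
        _timer.Stop();
    }

    public static void Main()
    {
        Run(new FetchService());
    }
}
```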
Check out this one
http://stacky.codeplex.com/
This looks like what you need. I am facing some debugging issues with it myself, but I hope you can figure it out.