We have begun creating our AppInsights resources via an AzureRM template. But there does not appear to be a way to disable the "Email detections to co-admins" option, so we still have to manually navigate through the portal to disable this option.
As a co-admin with approximately 30 AI resources now (multiple environments), the daily emails are becoming painful.
I would like to know how to turn off this option in a script (preferably in the template json file).
Currently, Application Insights doesn't support this action through PowerShell; it is on the product backlog for a future release.
I loaded custom metrics into a workspace-based Application Insights resource using the App Insights API. I want to enrich the custom metrics with tables in Logs. However, I am not able to see the custom metrics in the Log Analytics workspace or in the custom metrics explorer.
Is there a workaround for doing this?
I can create charts using the metrics loaded in Application Insights.
A few reasons why custom metrics might not be showing up:
There can be a couple of minutes (or less) of latency in the Application Insights pipeline itself. The AzureMonitorStatus blog (Tech Community category) collects most Application Insights issues and their resolutions in one place.
When a new custom property or custom metric is added, it can take some time, depending on timing, for that new field to show up in the metadata that the charts use to build themselves.
For a new application, it can take up to ~15 minutes (and often less) for a new custom property/metric to appear in metadata.
Once the fields are available in metadata, you might need to refresh the portal if you're already in the Metrics Explorer window for that AI resource so it re-requests the metadata. Normally the "Refresh" command on Metrics Explorer or an Overview blade is enough; a full refresh in the browser works as a last resort.
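(For reference, a minimal C# sketch of how custom metrics are typically sent with the Application Insights SDK; the instrumentation key is a placeholder. Flushing matters because buffered telemetry from a short-lived process can look like "missing" data on top of the pipeline latency described above.)

```csharp
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

class Program
{
    static void Main()
    {
        // Placeholder key -- substitute your own resource's instrumentation key.
        var config = TelemetryConfiguration.CreateDefault();
        config.InstrumentationKey = "<your-instrumentation-key>";
        var client = new TelemetryClient(config);

        // GetMetric pre-aggregates values locally before sending them.
        client.GetMetric("QueueDepth").TrackValue(42);

        // Flush buffered telemetry; without this, short-lived processes may
        // exit before anything is sent, which looks like missing data.
        client.Flush();
        System.Threading.Thread.Sleep(5000); // give the channel time to transmit
    }
}
```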
I've been reading about how Nuxt can generate a static site when a client requests the website. We are planning to build a headless CMS to manage the data the website needs; this data will only change when it is saved in the headless CMS.
My question: since this data only changes when it is edited in the headless CMS, isn't it possible to regenerate the site only when content is modified, and then serve that prebuilt site to clients, to reduce server costs?
Is this possible with Nuxt, or is there another way to achieve it?
We are planning on using Firebase as a backend.
There's nothing explicitly preventing Nuxt from being rebuilt each time you change an item in your DB. The part that matters is how you tell your app to rebuild itself.
By far the simplest way is using some sort of "build hook". See Netlify's docs here for a quick overview of what they are. However, this only really works if you're using a headless CMS that can send hooks on save, and a build provider that can trigger builds using those hooks.
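Whatever backend you use, triggering a build hook is just an HTTP POST with an empty body to a URL your build provider generates. A minimal C# sketch with a placeholder Netlify hook URL (on Firebase you would more likely fire the same POST from a Cloud Function whenever a document is written):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

class BuildHookTrigger
{
    // Placeholder URL -- Netlify generates one per build hook you create.
    private const string HookUrl = "https://api.netlify.com/build_hooks/<your-hook-id>";

    private static readonly HttpClient Http = new HttpClient();

    // Call this from whatever code path runs when content is saved in the CMS.
    public static async Task TriggerRebuildAsync()
    {
        // Build hooks accept an empty POST body.
        var response = await Http.PostAsync(HookUrl, new StringContent(string.Empty));
        response.EnsureSuccessStatusCode();
    }
}
```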
You will absolutely save on server costs using this sort of method, but beware: if you have a lot of user generated content triggering builds, your build cost can easily outweigh the server costs. You also need to be aware that builds generally take a few minutes, so you won't see instant changes on your site.
The other option you have is foregoing static site generation in favour of SSR, which can dynamically load and render your content, completely avoiding the need to build every time a new DB change is made. This is what I'd consider the best alternative if you do indeed have a lot of user generated content.
It's hard to give any further advice without knowing the specifics of the CMS or build provider though.
I have developed an ExtJS 5 + .NET MVC WebAPI RIA for reporting purposes.
Now the client is requesting a feature to subscribe to reports. The reports (PDF) should be generated automatically, and the server should email them to the user who subscribed. It would also be nice to have the user specify the date and time at which to receive the report.
The application already has a PDF export where the user can save these reports. In that case the application sends the HTML of the report section to the server, and the server uses wkhtmltopdf to generate the PDF.
For my new feature I have the following questions:
Can I implement this new feature in my WebAPI (e.g. as a thread started on startup), or should I write an independent service for it?
Would it be appropriate to just load the report pages on the server and run the same wkhtmltopdf process I already use for the PDF export?
I am thankful for any advice.
A separate process that calls the WebApi makes a lot of sense. Separation of concerns and all that. But whether you include it in the API itself or in a separate mailer portal, I suggest you don't try to write any scheduling yourself. Scott Hanselman has a great post on why you shouldn't and suggests several alternatives. Of those, my favorite solution is Hangfire. I use it in production, and it's pretty easy to set up and use.
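As a rough sketch of what that looks like with Hangfire (ReportMailer and the subscription parameters here are hypothetical stand-ins for your own services, and Hangfire itself must already be configured with a storage backend):

```csharp
using Hangfire;

public class ReportMailer
{
    // Hypothetical service method: renders the report with wkhtmltopdf
    // and emails the resulting PDF to the subscriber.
    public void GenerateAndSend(int subscriptionId) { /* ... */ }
}

public static class SubscriptionScheduler
{
    public static void Schedule(int subscriptionId, string cronExpression)
    {
        // One recurring job per subscription; the cron expression encodes
        // the user's chosen date/time, e.g. "0 8 * * MON" for Mondays at 08:00.
        RecurringJob.AddOrUpdate<ReportMailer>(
            $"report-subscription-{subscriptionId}",
            mailer => mailer.GenerateAndSend(subscriptionId),
            cronExpression);
    }
}
```

Because Hangfire persists jobs in its storage, scheduled reports survive application restarts, which is exactly the kind of detail that hand-rolled timer threads tend to get wrong.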
And if you need nicely formatted emails, I suggest checking out Postal for composing them.
We are upgrading to Tridion 2011 SP1, and as part of the Tridion search implementation we are using FS4SP (FAST Search for SharePoint 2010).
In the proposed implementation, the search environment consists of the following servers:
FS4SP
FISE
Can someone guide us on how to push content from Tridion to FAST and how to retrieve it?
(For various reasons we are not considering having FAST crawl the website.)
Which APIs can be used for this implementation?
If you don't want to use the crawling approach, you will need to create a custom deployer; please take a look at this other article:
How can we integrate Microsoft FAST with SDL Tridion 2011 SP1?
Alternatively, if you don't have a development team familiar with Java, you might consider creating a .NET application that updates your FAST index based on either a file-system or database trigger when your pages or components are published, updated, or deleted from your Broker repository.
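A rough sketch of the file-system-trigger variant (the watched path is a placeholder and the two FAST methods are hypothetical; this is not the FAST API itself, just the trigger plumbing):

```csharp
using System;
using System.IO;

class BrokerWatcher
{
    static void Main()
    {
        // Placeholder path: wherever your broker publishes page output.
        var watcher = new FileSystemWatcher(@"C:\inetpub\published\pages", "*.html")
        {
            IncludeSubdirectories = true,
            EnableRaisingEvents = true
        };

        watcher.Created += (s, e) => PushToFast(e.FullPath);
        watcher.Changed += (s, e) => PushToFast(e.FullPath);
        watcher.Deleted += (s, e) => RemoveFromFast(e.FullPath);

        Console.ReadLine(); // keep the process alive
    }

    // Hypothetical methods: build the FAST XML for this page and submit it
    // (or a delete request) via FAST's content submission interface.
    static void PushToFast(string path) { /* ... */ }
    static void RemoveFromFast(string path) { /* ... */ }
}
```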
You will probably want to create XML for FAST and have the Custom Deployer (or Event System) send the content to FAST.
First create the FAST XML that works and write a sample app so you can insert it into the FAST index from either a .NET or Java application. This does not yet involve Tridion.
Then write your Custom Deployer or Event System and pass the XML to FAST.
If you are using the Custom Deployer approach, I would suggest contacting Tridion Professional Services if you have not built one yourself before or are not a Java programmer. The new Tridion 2011 Storage API provides new opportunities for the Custom Deployer. In the meantime, I would suggest appending the FAST XML to the end of the normal page content, surrounded by some markers, and having your custom deployer pull it out of the page output, send it to FAST, then remove it from the output before continuing.
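A minimal sketch of that extract-and-strip step, assuming made-up HTML comment markers (shown in C# for readability, matching the .NET route above; a real custom deployer extension would be Java):

```csharp
using System.Text.RegularExpressions;

static class FastXmlExtractor
{
    // Hypothetical markers wrapped around the FAST XML in the page template.
    private static readonly Regex Marker = new Regex(
        @"<!--FAST-START-->(?<xml>.*?)<!--FAST-END-->",
        RegexOptions.Singleline);

    // Returns the embedded FAST XML and the page content with it removed.
    public static (string fastXml, string cleanedPage) Split(string pageOutput)
    {
        var match = Marker.Match(pageOutput);
        if (!match.Success)
            return (null, pageOutput);

        var xml = match.Groups["xml"].Value;
        var cleaned = Marker.Replace(pageOutput, string.Empty);
        return (xml, cleaned);
    }
}
```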
This is a fairly difficult challenge for those who do not have serious Content Delivery / Deployer / Java skills. However, if you want to go for it yourself, I would suggest allowing at least two weeks to research existing solutions and experiment with the API.
Using the Event System might be a little easier, but your success or failure message will not appear in the Publish Queue, and if the search index fails to update you can only log the failure, not pass the info back to users.
I am currently working on a school assignment which requires us to perform security testing on a website created by one of our peers. The website is created using ASP.Net 3.5/4 and an MS-SQL database.
The website's main features are:
Registration & Login using Roles
Uploading documents
Sharing of uploaded documents
Leaving comments on shared documents
I already have started testing the website using:
XSS in the Register, Login and Leave Comment Sections
SQL Injection in the Register and Login pages
Upload of executables with a different extension (I changed an executable file to .doc to test whether the system checks the file's extension or its actual contents; a sketch of the kind of content check I'm probing for is below)
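(For reference, a content-based check looks at the file's leading "magic bytes" rather than its name; a minimal sketch for Windows executables, which always start with the bytes "MZ" no matter what the file is renamed to. Assumes a seekable stream.)

```csharp
using System.IO;

static class UploadChecks
{
    // Windows PE executables begin with the ASCII bytes 'M' 'Z',
    // regardless of the file's extension.
    public static bool LooksLikeWindowsExecutable(Stream upload)
    {
        int b1 = upload.ReadByte();
        int b2 = upload.ReadByte();
        upload.Seek(0, SeekOrigin.Begin); // rewind for later processing
        return b1 == 'M' && b2 == 'Z';
    }
}
```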
These tests have been carried out manually and I have access to the source code!
Can you suggest any other tests I might want to carry out?
Cheers
A good resource for things to lock down is OWASP; I linked to their "top ten" list because I have followed it myself for locking down apps and found it really helpful.
Drilling down into any item on their top ten list will show how to recognize a particular vulnerability and suggest how to remove it. It's all code-agnostic, high-level material, so it can be applied to any project, be it .NET, Ruby, PHP, etc.
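For example, the injection item on that list boils down to never concatenating user input into SQL. A minimal sketch for a login check (the table and column names here are made up):

```csharp
using System.Data.SqlClient;

static class LoginQuery
{
    // Hypothetical schema: Users(Username, PasswordHash).
    public static bool UserExists(SqlConnection conn, string username, string passwordHash)
    {
        const string sql =
            "SELECT COUNT(*) FROM Users WHERE Username = @user AND PasswordHash = @hash";

        using (var cmd = new SqlCommand(sql, conn))
        {
            // Parameters are sent separately from the SQL text, so input
            // like ' OR 1=1 -- is treated as data, not as code.
            cmd.Parameters.AddWithValue("@user", username);
            cmd.Parameters.AddWithValue("@hash", passwordHash);
            return (int)cmd.ExecuteScalar() > 0;
        }
    }
}
```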
Check for Local File Inclusion and Remote File Inclusion vulnerabilities as well.
You can also check the login system: if the website lets you log in (and you have an account or can make one), log in and look at how the login code works, e.g. inspect your cookies to see whether the session/auth cookie is a random server-side session identifier with the HttpOnly and Secure flags set, or something guessable such as a plain user ID (usually not secure). If you find a vulnerability in the login system, you could elevate your privileges from regular user to admin.
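(In ASP.NET terms, those cookie flags can be set like this; a sketch using System.Web with a placeholder cookie name:)

```csharp
using System.Web;

static class CookieHardening
{
    public static HttpCookie CreateSessionCookie(string sessionToken)
    {
        // "AppSession" is a placeholder name.
        return new HttpCookie("AppSession", sessionToken)
        {
            HttpOnly = true, // not readable from JavaScript, blunting XSS cookie theft
            Secure = true    // only sent over HTTPS
        };
    }
}
```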
Also, "Upload of executables, with a different extension." Could you clarify that for me?
The best thing to do is to use your imagination.
You should also use Cat.NET's engine (a free, security-focused static analysis tool provided by Microsoft).
I have been working on making Cat.NET easier and faster to use inside Visual Studio, and here is a pretty cool PoC of it in action: Real-time Vulnerability Creation Feedback inside VisualStudio (with Greens and Reds)
If you are interested in Cat.NET you can download it from http://www.microsoft.com/en-us/download/details.aspx?id=19968