Turn series to columns in Kusto/Azure Data Explorer

I am trying to turn Windows event log XML event data in Azure Logs (Kusto) into columns. Given the EventData array in the XML as returned by parse_xml(), how do I turn it into columns?
I tried mv-expand, which gave me rows (series), but I would then like to turn those into columns, where the column name is the "Name" attribute of the tag and the value is its text property.
Windows event log XML below for reference:
<EventData xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<Data Name="DomainPolicyChanged">Password Policy</Data>
<Data Name="DomainName">XXX</Data>
<Data Name="DomainSid">S-1-5-21-....</Data>
<Data Name="SubjectUserSid">S-1-5-18</Data>
<Data Name="SubjectUserName">SRV-XX-001$</Data>
<Data Name="SubjectDomainName">DOMAIN</Data>
<Data Name="SubjectLogonId">0x3e7</Data>
<Data Name="PrivilegeList">-</Data>
<Data Name="MinPasswordAge"></Data>
<Data Name="MaxPasswordAge"></Data>
<Data Name="ForceLogoff"></Data>
<Data Name="LockoutThreshold">耠</Data>
<Data Name="LockoutObservationWindow"></Data>
<Data Name="LockoutDuration"></Data>
<Data Name="PasswordProperties">耠-</Data>
<Data Name="MinPasswordLength">-</Data>
<Data Name="PasswordHistoryLength">-</Data>
<Data Name="MachineAccountQuota">-</Data>
<Data Name="MixedDomainMode">1</Data>
<Data Name="DomainBehaviorVersion">8</Data>
<Data Name="OemInformation">12</Data>
</EventData>

The following approach could work (depending on how you actually plan to query the data, there may be a more efficient way, so if you could share a sample query, it could be helpful):
datatable(someColumn:string, xmlValue:string)
["hello", '<EventData xmlns="http://schemas.microsoft.com/win/2004/08/events/event">\r\n'
'<Data Name="DomainBehaviorVersion">8</Data>\r\n'
'<Data Name="OemInformation">12</Data>\r\n'
'<Data Name="DomainPolicyChanged">Password Policy</Data>\r\n'
'<Data Name="DomainName">XXX</Data>\r\n'
'<Data Name="DomainSid">S-1-5-21-....</Data>\r\n'
'<Data Name="SubjectUserSid">S-1-5-18</Data>\r\n'
'<Data Name="SubjectUserName">SRV-XX-001$</Data>\r\n'
'<Data Name="SubjectDomainName">DOMAIN</Data>\r\n'
'<Data Name="SubjectLogonId">0x3e7</Data>\r\n'
'<Data Name="PrivilegeList">-</Data>\r\n'
'<Data Name="MinPasswordAge"></Data>\r\n'
'<Data Name="MaxPasswordAge"></Data>\r\n'
'<Data Name="ForceLogoff"></Data>\r\n'
'<Data Name="LockoutThreshold">耠</Data>\r\n'
'<Data Name="LockoutObservationWindow"></Data>\r\n'
'<Data Name="LockoutDuration"></Data>\r\n'
'<Data Name="PasswordProperties">耠-</Data>\r\n'
'<Data Name="MinPasswordLength">-</Data>\r\n'
'<Data Name="PasswordHistoryLength">-</Data>\r\n'
'<Data Name="MachineAccountQuota">-</Data>\r\n'
'<Data Name="MixedDomainMode">1</Data>\r\n'
'</EventData>',
"world", '<EventData xmlns="http://schemas.microsoft.com/win/2004/08/events/event">\r\n'
'<Data Name="DomainBehaviorVersion">876543</Data>\r\n'
'<Data Name="OemInformation">12345</Data>\r\n'
'</EventData>'
]
| extend parsed = parse_xml(xmlValue).EventData.Data
| mvexpand parsed
| summarize d = make_bag(pack(tostring(parsed['#Name']), parsed['#text'])) by someColumn
| evaluate bag_unpack(d)
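For reference, this should produce one column per distinct Data Name value, with empty cells where a record has no value for that key. Abridged, the output looks roughly like this:
someColumn | DomainPolicyChanged | DomainBehaviorVersion | OemInformation | ...
hello      | Password Policy     | 8                     | 12             | ...
world      |                     | 876543                | 12345          | ...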
Docs for operators/functions used in this example:
datatable operator: https://learn.microsoft.com/en-us/azure/kusto/query/datatableoperator
parse_xml function: https://learn.microsoft.com/en-us/azure/kusto/query/parse-xmlfunction
mvexpand operator: https://learn.microsoft.com/en-us/azure/kusto/query/mvexpandoperator
make_bag aggregation function: https://learn.microsoft.com/en-us/azure/kusto/query/make-bag-aggfunction
pack function: https://learn.microsoft.com/en-us/azure/kusto/query/packfunction
bag_unpack plugin: https://learn.microsoft.com/en-us/azure/kusto/query/bag-unpackplugin

Related

Prevent user from overwriting query string parameter with Nginx rewrite

I have the following Nginx rewrite rule:
rewrite ^/([a-z0-9-]+)$ /post.php?slug=$1 last;
Nginx will now apply rewrites like the following:
/a -> /post.php?slug=a
/a?slug=b -> /post.php?slug=a&slug=b
The second example is problematic. How can I prevent visitors from adding query string parameters that are already added by rewrite rules? Other query string parameters may still be supplied. Examples of desired behavior:
/a -> /post.php?slug=a
/a?slug=b -> /post.php?slug=a
/a?foo=b -> /post.php?slug=a&foo=b
You can remove the slug query argument:
location ~ ^/(?<slug>[a-z0-9-]+)$ {
    if ($args ~ (.*)(^|&)slug=[^&]*(\2|$)&?(.*)) {
        set $args $1$3$4;
    }
    rewrite ^ /post.php?slug=$slug last;
}
This complex regex removes the slug query argument from the query string regardless of whether it appears at the beginning, in the middle, or at the end of that string.
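For example, worked out by hand against that regex (not captured from a live server), the if block transforms $args like this:
slug=b               ->  (empty)
slug=b&foo=c         ->  foo=c
foo=c&slug=b         ->  foo=c
foo=c&slug=b&bar=d   ->  foo=c&bar=d
The rewrite then appends the slug captured from the path, giving the desired /post.php?slug=$slug&foo=... behavior.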

blocked by CORS policy: Request header field authorization is not allowed by Access-Control-Allow-Headers in preflight response

I have a web site (site 1) and a Web API (site 2).
My Web API has a method named Index in the Values controller:
public string Index()
{
    return "Hello world from site 2";
}
I call my API from web site 1 like this:
$.ajax({
    url: relativeUrl,
    headers: { "Authorization": "Bearer " + accessToken },
    type: "GET"
})
.done(function (result) {
    console.log("Result: " + result);
    alert(result);
})
.fail(function (result) {
    console.log("Error: " + result.statusText);
    alert(result.statusText);
});
But I get the following error in my JS console:
Access to XMLHttpRequest at 'Web API 2' from origin 'Web site 1' has been blocked by CORS policy: Request header field authorization is not allowed by Access-Control-Allow-Headers in preflight response.
I added this in my controller:
[EnableCors(origins: "*", headers: "*", methods: "*", exposedHeaders: "X-Custom-Header")]
In my WebAPIConfig.cs:
config.EnableCors();
And in my Web.config:
<httpProtocol>
  <customHeaders>
    <add name="Access-Control-Allow-Origin" value="*" />
    <add name="Access-Control-Allow-Headers" value="Content-Type" />
    <add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
  </customHeaders>
</httpProtocol>
But even with all of that I still get the error. I don't understand what I need to add, or where.
You've got
<add name="Access-Control-Allow-Headers" value="Content-Type" />
and
headers: { "Authorization": "Bearer " + accessToken },
In other words, the Access-Control setting only allows the "content-type" header, but your request is sending an "Authorization" header. Clearly these two things don't match up.
The error is very clearly telling you that the "authorization" request header is not being allowed by the Access-Control-Allow-Headers response header.
Try
<add name="Access-Control-Allow-Headers" value="Content-Type, Authorization" />
instead.
P.S. I don't think you need to use both the Web.config settings and the EnableCors action filter at the same time; your EnableCors declaration here is redundant. See https://stackoverflow.com/a/29972098/5947043 for more info.
See https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers for more info
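For reference, once Authorization is listed, the preflight exchange should look roughly like this (origin, host and path here are placeholders, not values from the question):
OPTIONS /api/values HTTP/1.1
Host: site2.example
Origin: https://site1.example
Access-Control-Request-Method: GET
Access-Control-Request-Headers: authorization

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS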
I don't know about ASP.NET specifically, but I have faced the same problem in Node. I think it will work if you change this:
<add name="Access-Control-Allow-Headers" value="Content-Type" />
to
<add name="Access-Control-Allow-Headers" value="*" />
or
<add name="Access-Control-Allow-Headers" value="Authorization" />
since you are sending the Authorization header. (Note that per the Fetch spec the Authorization header is not covered by the "*" wildcard and always needs to be listed explicitly, so the second form is the reliable one.)

Angular Universal - Pre-render only for web crawlers?

I intend to use Angular Universal for server-side rendering (SSR), but this should only be done for crawlers and bots from selected search engines.
What I want is the schema described at https://dingyuliang.me/use-prerender-improve-angularjs-seo/: pre-rendered pages for crawlers, the normal client-side app for everyone else.
After following the official instructions to set up SSR, I can now validate that Googlebot (finally) "sees" my website and should be able to index it.
However, at the moment all requests are rendered on the server. Is there a way to determine whether incoming requests are coming from search engines and pre-render the site only for them?
You can achieve that with Nginx.
In Nginx you can forward the request to the Angular Universal application via..
if ($http_user_agent ~* "googlebot|yahoo|bingbot") {
    proxy_pass http://127.0.0.1:5000;
    break;
}
root /var/www/html;
..assuming that you are serving Angular Universal on http://127.0.0.1:5000.
If a browser user agent comes along, we serve the page from root /var/www/html.
So the complete config would look something like this:
server {
    listen 80 default;
    server_name angular.local;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;

        if ($http_user_agent ~* "googlebot|yahoo|bingbot") {
            proxy_pass http://127.0.0.1:5000;
            break;
        }

        root /var/www/html;
    }
}
This is what I came up with for IIS:
Add the Angular Universal to your project according to the official guide
In order to get rid of complex folder structures, change the following line in server.ts
const distFolder = join(process.cwd(), 'dist/<Your Project>/browser');
to this:
const distFolder = process.cwd();
Run the npm run build:ssr command. You will end up with the browser and server folders inside the dist folder.
Create a folder for hosting in IIS and copy the files that are in the browser and server folders into the created folder.
iis\
-assets\
-favicon.ico
-index.html
-main.js => this is the server file
-main-es2015.[...].js
-polyfills-es2015.[...].js
-runtime-es2015.[...].js
-scripts.[...].js
-...
Add a new file to this folder named web.config with this content:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Angular Routes" stopProcessing="true">
          <match url=".*" />
          <conditions logicalGrouping="MatchAll">
            <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
            <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
            <add input="{HTTP_USER_AGENT}" pattern="(.*[Gg]ooglebot.*)|(.*[Bb]ingbot.*)" negate="true" />
          </conditions>
          <action type="Rewrite" url="/index.html" />
        </rule>
        <rule name="ReverseProxyInboundRule1" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <add input="{HTTP_USER_AGENT}" pattern="(.*[Gg]ooglebot.*)|(.*[Bb]ingbot.*)" />
          </conditions>
          <action type="Rewrite" url="http://localhost:4000/{R:0}" />
        </rule>
      </rules>
    </rewrite>
    <directoryBrowse enabled="false" />
  </system.webServer>
</configuration>
Inside this folder open a Command Prompt or PowerShell and run the following:
> node main.js
Now you should be able to view your server-side rendered website at localhost:4000 (if you haven't changed the port).
Install the IIS Rewrite Module
Add the folder to your IIS for hosting
IIS will rewrite requests whose user agent contains googlebot or bingbot to localhost:4000, which is handled by Express and returns the server-side rendered content.
You can test this with Google Chrome: open the Developer Console, select "More tools > Network conditions" from the menu, then in the User agent section disable "Select automatically" and choose Googlebot.
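Alternatively, you can test from the command line by spoofing the user agent (assuming the IIS site is bound to http://localhost; adjust host and port to your setup):
curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://localhost/
curl -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64)" http://localhost/
The first request should come back with server-rendered markup proxied from localhost:4000; the second should return the plain index.html.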
I just managed to do what you wanted, but did not find any answer providing a detailed step-by-step guide with Angular Universal and an Express server.
So I am posting my solution here; any ideas for improvement are welcome!
First, add this function to server.ts:
function isBot(req: any): boolean {
  let botDetected = false;
  const userAgent = req.headers['user-agent'];
  if (userAgent) {
    // mode 1: simple substring checks for the most common crawlers
    if (userAgent.includes("Googlebot") ||
        userAgent.includes("Bingbot") ||
        userAgent.includes("WhatsApp") ||
        userAgent.includes("facebook") ||
        userAgent.includes("Twitterbot")) {
      console.log('bot detected with includes ' + userAgent);
      return true;
    }
    // mode 2: match against the patterns of the crawler-user-agents package
    const crawlers = require('crawler-user-agents');
    crawlers.every(entry => {
      if (RegExp(entry.pattern).test(userAgent)) {
        console.log('bot detected with crawler-user-agents ' + userAgent);
        botDetected = true;
        return false; // stop iterating
      }
      return true;
    });
    if (!botDetected) console.log('bot NOT detected ' + userAgent);
    return botDetected;
  } else {
    // no user agent at all: assume the request comes from a bot
    console.log('No user agent in request');
    return true;
  }
}
This function uses two modes to detect crawlers (and assumes that the absence of a user agent means the request comes from a bot): first, a 'simple' manual check for known strings within the header's user-agent; second, a more advanced detection based on the 'crawler-user-agents' package, which you can install into your Angular project like this:
npm install --save crawler-user-agents
Second, once this function is added to your server.ts, use it in each server.get() route handler of your Express server's export function for which the route (here `/whatever`) should behave differently based on bot detection. Your server.get() handlers become:
server.get(`/whatever`, (req: express.Request, res: express.Response) => {
  if (!isBot(req)) {
    // if no bot is detected we just return index.html for client-side rendering
    res.sendFile(join(distFolder + '/index.html'));
    return;
  }
  // otherwise we pre-render on the server
  res.render(indexHtml, {
    req, providers: [
      { provide: REQUEST, useValue: req }
    ]
  });
});
To further reduce the server load when a bot requests a page, I also implemented 'node-cache', because in my case SEO bots do not need the very latest version of each page. I found a good answer for this at https://stackoverflow.com/q/61939272.
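For illustration, a minimal sketch of what that caching could look like in server.ts, assuming the isBot() function above and the node-cache package (the route, TTL and error handling are placeholders, not part of the original answer):
const NodeCache = require('node-cache');
const ssrCache = new NodeCache({ stdTTL: 3600 }); // cache rendered pages for 1 hour

server.get(`/whatever`, (req: express.Request, res: express.Response) => {
  if (!isBot(req)) {
    // not a bot: return index.html for client-side rendering, as before
    res.sendFile(join(distFolder + '/index.html'));
    return;
  }
  // bot: serve a previously rendered page when we have one
  const cached = ssrCache.get(req.originalUrl);
  if (cached) {
    res.send(cached);
    return;
  }
  // not cached yet: pre-render, store the HTML, then send it
  res.render(indexHtml, {
    req, providers: [{ provide: REQUEST, useValue: req }]
  }, (err, html) => {
    if (err) {
      res.status(500).send(err.message);
      return;
    }
    ssrCache.set(req.originalUrl, html);
    res.send(html);
  });
});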

nginx proxy_pass and URL decoding

Original URL: /api/url%2Fencoded%2F/?with=queryParams
nginx:
location /api {
    client_max_body_size 2G;
    proxy_pass https://oursite;
}
With this configuration, I was able to preserve the URL encoding when passing through the proxy. If I add a "/" after "oursite", it will decode the URL.
Problem:
Now the URL after being proxied still contains "/api/". I need to remove "/api/" while still preserving the URL-encoded parts.
Not long ago there was an identical question without an answer. In my opinion, you should rethink the API so as not to have such weird URLs. Another way is to have the API on a subdomain. – Alexey Ten Mar 11 '15 at 22:58
stackoverflow.com/q/28684300/1016033 – Alexey Ten Mar 11 '15 at 23:01
Year-old challenge accepted!
location /api/ {
    rewrite ^ $request_uri;
    rewrite ^/api/(.*) $1 break;
    return 400;
    proxy_pass http://127.0.0.1:82/$uri;
}
That's it, folks! The first rewrite replaces the normalized (already decoded) URI with the raw $request_uri, the second strips the /api/ prefix from that raw string and stops rewriting, and proxy_pass then forwards the still-encoded remainder; the return 400 only fires if the second rewrite did not match.
More details at Nginx pass_proxy subdirectory without url decoding, but it does work even with the query string, too:
% curl "localhost:81/api/url%2Fencoded%2F/?with=queryParams"
/url%2Fencoded%2F/?with=queryParams
Disclaimer: I am sure this looks like a hack, and maybe it is. It uses the auth-subrequest feature for something other than auth, but it works!
If you want to keep the URL-encoded part after /api/ from the original $request_uri, you can use NJS to set a variable and use it afterwards in the proxy_pass:
js_import /etc/nginx/conf.d/http.js; # import your njs file here
js_set $encodedUrlPart 'empty';      # define a variable

location ~* api\/(.*)$ {
    auth_request /urlencode; # this will get executed before proxy_pass
    proxy_pass http://127.0.0.1:82/$encodedUrlPart;
}
and the http.js can look like this
function urlencode(r) {
    let regex = "(?<=\/api\/)(.*$)";
    let url = r.variables.request_uri; // this holds the original, untouched URL
    let lastPart = url.match(regex);
    // match() returns an array; take the full match (or empty string if nothing matched)
    r.variables.encodedUrlPart = lastPart ? lastPart[0] : "";
    r.log("The encoded url part: " + r.variables.encodedUrlPart);
    r.return(200); // need to return 200 so the 'auth' doesn't fail
}
export default {urlencode};
Is this considered unsafe? We could do some checking in the njs part though!
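Assuming the same localhost:81 front end as in the previous answer and a backend on port 82 that echoes the URI it receives, reusing the earlier curl test should show the same result (a sketch, not captured output):
curl "localhost:81/api/url%2Fencoded%2F/?with=queryParams"
/url%2Fencoded%2F/?with=queryParams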

Could not override http cache headers in IIS using PreSendRequestHeaders()

History:
Due to security considerations, our organization wants to disable caching by adding HTTP headers in IIS:
Expires: -1
Pragma: no-cache
Cache-Control: no-cache, no-store
Adding these headers causes "application/vnd.ms-excel" response types to fail over SSL in IE6. Microsoft acknowledges this as a bug (http://support.microsoft.com/kb/323308) and their solution works. However, that solution would have to be pushed as a patch throughout the entire organization, and that faces resistance from higher management.
Problem:
Meanwhile, we are trying to find alternatives by overriding the IIS-set HTTP headers for pages that have MIME type "application/vnd.ms-excel", using an HttpModule and the PreSendRequestHeaders() event:
// this is just sample code
public void Init(HttpApplication context)
{
    context.PreSendRequestHeaders += new EventHandler(context_PreSendRequestHeaders);
}

protected void context_PreSendRequestHeaders(object sender, EventArgs e)
{
    HttpApplication application = (HttpApplication)sender;
    if (application.Response.ContentType == "application/vnd.ms-excel; name=DataExport.xls")
    {
        application.Response.ClearHeaders();
        application.Response.ContentType = "application/vnd.ms-excel; name=DataExport.xls";
        application.Response.AddHeader("Content-Transfer-Encoding", "base64");
        application.Response.AddHeader("Content-Disposition", "attachment;filename=DataExport.xls");
        application.Response.AddHeader("Cache-Control", "private");
    }
}
Even after clearing the headers with ClearHeaders(), IIS still appends the cache headers before sending the response.
Questions:
Is this approach of using ClearHeaders() in the PreSendRequestHeaders() handler wrong?
Are there any alternatives to override the cache headers (Expires, Pragma, Cache-Control) using libraries available in ASP.NET 1.1?
Misc:
We are using
Browser : IE6 SP 3
Server : IIS 6
Platform : .NET 1.1
This becomes easier with IIS 7.5+ using the URL Rewrite extension and adding outbound rules to strip the "no-store" value from the Cache-Control header and to remove the Pragma header. This rule set would do the trick:
<outboundRules>
  <rule name="Always Remove Pragma Header">
    <match serverVariable="RESPONSE_Pragma" pattern="(.*)" />
    <action type="Rewrite" value="" />
  </rule>
  <rule name="Remove No-Store for Attachments">
    <match serverVariable="RESPONSE_Cache-Control" pattern="no-store" />
    <conditions>
      <add input="{RESPONSE_Content-Disposition}" pattern="attachment" />
    </conditions>
    <action type="Rewrite" value="max-age=0" />
  </rule>
</outboundRules>
Please see:
Cache-control: no-store, must-revalidate not sent to client browser in IIS7 + ASP.NET MVC
You must use the following sequence of calls inside your PreSendRequestHeaders handler to correctly set the no-cache headers; otherwise, the Cache-Control header gets overwritten later:
Response.Cache.SetCacheability(HttpCacheability.NoCache);
Response.Cache.AppendCacheExtension("no-store, must-revalidate");
Response.AppendHeader("Pragma", "no-cache");
Response.AppendHeader("Expires", "0");
