Specify the starting cell of a Jupyter notebook

Is there a convenient way to specify the starting cell of a Jupyter notebook?
I'm thinking of something along the lines of the HTML autofocus attribute on input elements.
For example,
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Run your first Bash command"
      ]
    },
    {
      "autofocus": true,
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "echo \"Hello World\""
      ]
    }
  ]
}
The Notebook file format doesn't have anything like this, but maybe there's another solution out there.
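One workaround is to stash the flag in the cell's metadata instead of at the top level, since the nbformat spec tolerates extra metadata keys; a plain-JSON sketch (the "autofocus" key is the hypothetical flag from the question, and nothing in the stock frontend reads it without a custom extension):

```python
import json

# Minimal notebook as plain JSON. "autofocus" is the hypothetical flag
# from the question; keeping it inside the cell's "metadata" keeps the
# file valid, but the frontend ignores it unless a custom extension
# reads it back.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": "# Run your first Bash command",
        },
        {
            "cell_type": "code",
            "execution_count": None,
            "metadata": {"autofocus": True},  # hypothetical, non-standard key
            "outputs": [],
            "source": 'echo "Hello World"',
        },
    ],
}

with open("first_bash_command.ipynb", "w") as f:
    json.dump(notebook, f, indent=1)
```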

Related

How to create a json file in R with list with lists as elements with fromJson()

I am trying to create a .json file. The third element should be a list with lists as elements.
What am I doing wrong?
Below is the JSON file I created with R:
{
  "list1": [
    "element1"
  ],
  "list2": [
    "element2"
  ],
  "List_with_lists_as_elements": [
    "Child1": {
      "Name": "Child1",
      "Child1_Title": [
        "Title1",
        "Title2",
        "Title3"
      ],
      "Child1_Subtitle": [
        "Subtitle_1",
        "Subtitle_2",
        "Subtitle_3"
      ]
    },
    "Child2": {
      "Name": "Child2",
      "Child2_Title": [
        "Title1",
        "Title2",
        "Title3"
      ],
      "Child2_Subtitle": [
        "Subtitle2_1",
        "Subtitle2_2",
        "Subtitle2_3"
      ]
    },
    "Child3": {
      "Name": "Child3",
      "Child2_Title": [
        "Title1",
        "Title2",
        "Title3"
      ],
      "Child2_Subtitle": [
        "Subtitle3_1",
        "Subtitle3_2",
        "Subtitle3_3"
      ]
    }
  ]
}
I then save this as example_json.json and load it with fromJSON(txt = 'example_json.json'), but I get an error message, probably because I don't know how to properly create a .json file:
Error in parse_con(txt, bigint_as_char) :
parse error: after array element, I expect ',' or ']'
_as_elements": [ "Child1":{ "Name": "Child1",
(right here) ------^
How can I create a .json file that gives me a list with lists as elements?
The issue is that you have keys inside your array:
...
"List_with_lists_as_elements": [
  "Child1": {
    "Name": "Child1",
    ...
  },
  "Child2": {
    "Name": "Child2",
    ...
  },
  "Child3": {
    "Name": "Child3",
    ...
  }
]
...
You have a Name field which contains the key values, so you can probably just remove the keys:
...
"List_with_lists_as_elements": [
  {
    "Name": "Child1",
    ...
  },
  {
    "Name": "Child2",
    ...
  },
  {
    "Name": "Child3",
    ...
  }
]
...
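To sanity-check the corrected structure, you can run it through any strict JSON parser before loading it in R; a quick sketch in Python, with the child lists abbreviated:

```python
import json

# The corrected structure: plain objects inside the array, with the name
# carried by the "Name" field instead of by a key.
corrected = """
{
  "list1": ["element1"],
  "list2": ["element2"],
  "List_with_lists_as_elements": [
    {"Name": "Child1", "Child1_Title": ["Title1"], "Child1_Subtitle": ["Subtitle_1"]},
    {"Name": "Child2", "Child2_Title": ["Title1"], "Child2_Subtitle": ["Subtitle2_1"]},
    {"Name": "Child3", "Child3_Title": ["Title1"], "Child3_Subtitle": ["Subtitle3_1"]}
  ]
}
"""

data = json.loads(corrected)  # raises a ValueError if still malformed
print([child["Name"] for child in data["List_with_lists_as_elements"]])
# → ['Child1', 'Child2', 'Child3']
```

If this parses, fromJSON() will read it as a list whose third element is a list of named lists.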

convert very large custom json to csv using jq bash

I have very large JSON data like below:
{
  "10.10.10.1": {
    "asset_id": 1,
    "referencekey": "ASSET-00001",
    "hostname": "testDev01",
    "fqdn": "ip-10-10.10.1.ap-northeast-2.compute.internal",
    "network_zone": [
      "DEV",
      "Dev"
    ],
    "service": {
      "name": "TEST_SVC",
      "account": "AWS_TEST",
      "billing": "Testpay"
    },
    "aws": {
      "tags": {
        "Name": "testDev01",
        "Service": "TEST_SVC",
        "Usecase": "Dev",
        "billing": "Testpay",
        "OsVersion": "20.04"
      },
      "instance_type": "t3.micro",
      "ami_imageid": "ami-e000001",
      "state": "running"
    }
  },
  "10.10.10.2": {
    "asset_id": 3,
    "referencekey": "ASSET-47728",
    "hostname": "Infra_Live01",
    "fqdn": "ip-10-10-10-2.ap-northeast-2.compute.internal",
    "network_zone": [
      "PROD",
      "Live"
    ],
    "service": {
      "name": "Infra",
      "account": "AWS_TEST",
      "billing": "infra"
    },
    "aws": {
      "tags": {
        "Name": "Infra_Live01",
        "Service": "Infra",
        "Usecase": "Live",
        "billing": "infra",
        "OsVersion": "16.04"
      },
      "instance_type": "r5.large",
      "ami_imageid": "ami-e592398b",
      "state": "running"
    }
  }
}
Can I use JQ to make the conversion like below?
Or is there an easier way to solve it?
Thank you
Expected result
_key,asset_id,referencekey,hostname,fqdn,network_zone/0,network_zone/1,service/name,service/account,service/billing,aws/tags/Name,aws/tags/Service,aws/tags/Usecase,aws/tags/billing,aws/tags/OsVersion,aws/instance_type,aws/ami_imageid,aws/state
10.10.10.1,1,ASSET-00001,testDev01,ip-10-10.10.1.ap-northeast-2.compute.internal,DEV,Dev,TEST_SVC,AWS_TEST,Testpay,testDev01,TEST_SVC,Dev,Testpay,20.04,t3.micro,ami-e000001,running
10.10.10.2,3,ASSET-47728,Infra_Live01,ip-10-10-10-2.ap-northeast-2.compute.internal,PROD,Live,Infra,AWS_TEST,infra,Infra_Live01,Infra,Live,infra,16.04,r5.large,ami-e592398b,running
jq lets you do the conversion to CSV easily. The following code produces the desired output:
jq -r 'to_entries
  | map([.key,
         .value.asset_id, .value.referencekey, .value.hostname, .value.fqdn,
         .value.network_zone[0], .value.network_zone[1],
         .value.service.name, .value.service.account, .value.service.billing,
         .value.aws.tags.Name, .value.aws.tags.Service, .value.aws.tags.Usecase,
         .value.aws.tags.billing, .value.aws.tags.OsVersion,
         .value.aws.instance_type, .value.aws.ami_imageid, .value.aws.state])
  | ["_key","asset_id","referencekey","hostname","fqdn","network_zone/0","network_zone/1","service/name","service/account","service/billing","aws/tags/Name","aws/tags/Service","aws/tags/Usecase","aws/tags/billing","aws/tags/OsVersion","aws/instance_type","aws/ami_imageid","aws/state"]
    , .[]
  | @csv' "$INPUT"
Remarks
If some nodes in the input JSON are missing, the code does not break but fills in empty values in the CSV file.
If more than two network zones are given, only the first two are included in the CSV file.
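If jq is not at hand, the same flattening can be sketched in Python; the path-style column names (service/name, network_zone/0) are built the same way as in the expected header. This sketch assumes every asset shares the shape of the first one:

```python
import csv
import io

def flatten(obj, prefix=""):
    """Flatten nested dicts/lists into path-keyed pairs, e.g. "aws/tags/Name"."""
    flat = {}
    items = obj.items() if isinstance(obj, dict) else enumerate(obj)
    for key, value in items:
        path = f"{prefix}/{key}" if prefix else str(key)
        if isinstance(value, (dict, list)):
            flat.update(flatten(value, path))
        else:
            flat[path] = value
    return flat

def to_csv(assets):
    # One row per top-level key; the header is taken from the first row,
    # so this assumes a uniform shape across assets.
    rows = [{"_key": ip, **flatten(asset)} for ip, asset in assets.items()]
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

# Abbreviated sample of the input above.
sample = {
    "10.10.10.1": {
        "asset_id": 1,
        "network_zone": ["DEV", "Dev"],
        "service": {"name": "TEST_SVC"},
    }
}
print(to_csv(sample))
```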

Azure ARM templates - empty values as a parameters, IF function

I am preparing an ARM template for a "Schedule update deployment" in the Update Management service. I want to add parameters like "excludedKbNumbers" and "includedKbNumbers", and I deploy my templates using PowerShell. When I pass KB numbers through both of these parameters, the template completes successfully. When I supply a KB number through one of them and leave the other empty, the template also completes successfully. The problem is when I don't want to pass included/excluded KB numbers at all: if I omit the parameter names "excludedKbNumbers" and "includedKbNumbers" from my PowerShell deployment command, I receive the error below:
"message": "{\"Message\":\"The request is invalid.\",\"ModelState\":{\"softwareUpdateConfiguration.properties.updateConfiguration\":[\"Software update configuration has same KbNumbers in includedKbNumbers and excludedKbNumbers.\"]}}"
I am using json('null') in my template, and this is the problematic area.
Extract from my template:
"parameters": {
  "excludedKbNumbers": {
    "type": "string",
    "defaultValue": "",
    "metadata": {
      "description": "Specify excluded KB numbers, required data structure: 123456"
    }
  },
  "includedKbNumbers": {
    "type": "string",
    "defaultValue": "",
    "metadata": {
      "description": "Specify included KB numbers, required data structure: 123456"
    }
  }
},
"resources": [
  {
    "type": "Microsoft.Automation/automationAccounts/softwareUpdateConfigurations",
    "apiVersion": "2017-05-15-preview",
    "name": "[concat(parameters('automationAccountName'), '/', parameters('scheduleName'))]",
    "properties": {
      "updateConfiguration": {
        "operatingSystem": "[parameters('operatingSystem')]",
        "windows": {
          "includedUpdateClassifications": "[parameters('Classification')]",
          "excludedKbNumbers": [
            "[if(empty(parameters('excludedKbNumbers')), json('null'), parameters('excludedKbNumbers'))]"
          ],
          "includedKbNumbers": [
            "[if(empty(parameters('includedKbNumbers')), json('null'), parameters('includedKbNumbers'))]"
          ],
          "rebootSetting": "IfRequired"
        },
        "targets": {
          "azureQueries": [
            {
              "scope": [
                "[concat('/subscriptions', '/', parameters('subscriptionID'))]"
              ],
              "tagSettings": {
                "tags": {
                  "[parameters('tagKey')]": [
                    "[parameters('tagValue')]"
                  ]
                },
                "filterOperator": "All"
              },
              "locations": []
            }
          ]
        },
        "duration": "PT2H"
      },
      "tasks": {},
      "scheduleInfo": {
        "isEnabled": false,
        "startTime": "2050-03-03T13:10:00+01:00",
        "expiryTime": "2050-03-03T13:10:00+01:00",
        "frequency": "OneTime",
        "timeZone": "Europe/Warsaw"
      }
    }
  }
],
Try doing this instead; the if() expression now produces the whole property value, so an empty parameter yields null rather than an array containing null:
"excludedKbNumbers": "[if(empty(parameters('excludedKbNumbers')), json('null'), array(parameters('excludedKbNumbers')))]",
"includedKbNumbers": "[if(empty(parameters('includedKbNumbers')), json('null'), array(parameters('includedKbNumbers')))]"
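To see why the fix matters, here is a plain-Python sketch of what the two versions of the expression evaluate to; this is only an illustration of the shapes involved, not an ARM evaluator:

```python
def kb_numbers_fixed(param):
    # "[if(empty(p), json('null'), array(p))]" -- the if() produces the
    # whole property value: null when empty, a one-element array otherwise.
    return None if param == "" else [param]

def kb_numbers_broken(param):
    # The original template wraps the if() inside a literal JSON array, so
    # the empty case becomes [null] instead of null -- and with both
    # parameters omitted, included and excluded end up as the identical
    # [null], which presumably triggers the "same KbNumbers" error.
    return [None if param == "" else param]

print(kb_numbers_fixed(""))        # None
print(kb_numbers_fixed("123456"))  # ['123456']
print(kb_numbers_broken(""))       # [None]
```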

How does allure get the test status in categories.json

After executing tests I get the XML files in the allure-results directory. From there I generate the HTML report using the command:
allure generate allure-results --clean -o allure-report
In allure-results I have a categories.json file, which is used to categorize tests in the HTML report based on their results (e.g. passed, broken, failed, ...). I believe this categorization is done by Allure.
So I want to know on what basis Allure does this categorization.
categories.json
{
  "name": "Ignored tests",
  "messageRegex": ".*ignored.*",
  "matchedStatuses": [ "skipped" ],
  "flaky": true
},
{
  "name": "Infrastructure problems",
  "traceRegex": ".*RuntimeException.*",
  "matchedStatuses": [ "broken", "failed" ]
},
{
  "name": "Outdated tests",
  "messageRegex": ".*FileNotFound.*",
  "matchedStatuses": [ "broken" ]
},
{
  "name": "Passed",
  "messageRegex": ".*",
  "matchedStatuses": [ "passed" ]
}
Sample test report image:
categories.json should be a list of mappings.
In your case it should look like:
[
  {
    "name": "Ignored tests",
    "messageRegex": ".*ignored.*",
    "matchedStatuses": ["skipped"],
    "flaky": true
  },
  {
    "name": "Infrastructure problems",
    "traceRegex": ".*RuntimeException.*",
    "matchedStatuses": ["broken", "failed"]
  },
  {
    "name": "Outdated tests",
    "messageRegex": ".*FileNotFound.*",
    "matchedStatuses": ["broken"]
  },
  {
    "name": "Passed",
    "messageRegex": ".*",
    "matchedStatuses": ["passed"]
  }
]
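Whatever matching rules Allure applies, this particular mistake (a bare sequence of objects instead of an array) can be caught before generating the report; a minimal sketch, assuming the file sits in the current directory:

```python
import json

def check_categories(path="categories.json"):
    """Load categories.json, insisting on a top-level JSON array."""
    with open(path) as f:
        categories = json.load(f)  # raises ValueError if not valid JSON
    if not isinstance(categories, list):
        raise TypeError("categories.json must be a top-level JSON array")
    return categories
```

A file containing only comma-separated objects fails at json.load, and a single top-level object fails the isinstance check.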

How to create/modify a jupyter notebook from code (python)?

I am trying to automate my project creation process and would like, as part of it, to create a new Jupyter notebook and populate it with some cells and content that I usually have in every notebook (e.g., imports, titles, etc.).
Is it possible to do this via python?
You can do it using nbformat. Below is an example taken from Creating an IPython Notebook programatically:
import nbformat as nbf

nb = nbf.v4.new_notebook()

text = """\
# My first automatic Jupyter Notebook
This is an auto-generated notebook."""

code = """\
%pylab inline
hist(normal(size=2000), bins=50);"""

nb['cells'] = [nbf.v4.new_markdown_cell(text),
               nbf.v4.new_code_cell(code)]

fname = 'test.ipynb'
with open(fname, 'w') as f:
    nbf.write(nb, f)
This is absolutely possible. Notebooks are just JSON files. This notebook, for example, is just:
{
  "cells": [
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "# Header 1"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": 2,
      "metadata": {
        "ExecuteTime": {
          "end_time": "2016-09-16T16:28:53.333738",
          "start_time": "2016-09-16T16:28:53.330843"
        },
        "collapsed": false
      },
      "outputs": [],
      "source": [
        "def foo(bar):\n",
        "    # Standard functions I want to define.\n",
        "    pass"
      ]
    },
    {
      "cell_type": "markdown",
      "metadata": {},
      "source": [
        "## Header 2"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "collapsed": true
      },
      "outputs": [],
      "source": []
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 2",
      "language": "python",
      "name": "python2"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 2
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython2",
      "version": "2.7.10"
    },
    "toc": {
      "toc_cell": false,
      "toc_number_sections": true,
      "toc_threshold": 6,
      "toc_window_display": false
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}
While messy, it's just a list of cell objects. I would probably create my template in an actual notebook and save it rather than trying to generate the initial template by hand. If you want to add titles or other variables programmatically, you could always copy the raw notebook text from the *.ipynb file into a Python file and insert values using string formatting.
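That last idea can be sketched as follows; the template string and the {title} placeholder are made up for illustration, and note that every literal JSON brace must be doubled for str.format (and the title must not itself contain quotes, since nothing here escapes it):

```python
import json

# A stripped-down notebook kept as a Python format string. Literal JSON
# braces are doubled so that .format() leaves them alone; {title} is our
# hypothetical per-project variable.
TEMPLATE = """\
{{
 "cells": [
  {{
   "cell_type": "markdown",
   "metadata": {{}},
   "source": ["# {title}"]
  }}
 ],
 "metadata": {{}},
 "nbformat": 4,
 "nbformat_minor": 5
}}
"""

def new_notebook(title, fname):
    raw = TEMPLATE.format(title=title)
    json.loads(raw)  # sanity-check: the filled-in template is valid JSON
    with open(fname, "w") as f:
        f.write(raw)
    return raw
```

For anything beyond trivial substitutions, building the dict in Python and serializing it with json.dump (or using nbformat as in the first answer) is less fragile than string formatting.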
