Refresh events in PrimeNG Schedule in Angular 6 - fullcalendar

I want to refresh the events on a dropdown change by calling an API, but it gives me the error below and does not refresh the events in the schedule control. I am using Angular 6, PrimeNG 6.1.5 and FullCalendar 4.0.0-alpha.
CalendarDisplayComponent.html:2 ERROR TypeError: this.calendar.removeEventSources is not a function
at Schedule.push../node_modules/primeng/components/schedule/schedule.js.Schedule.ngDoCheck (schedule.js:248)
at checkAndUpdateDirectiveInline (core.js:9253)
at checkAndUpdateNodeInline (core.js:10514)
at checkAndUpdateNode (core.js:10476)
at debugCheckAndUpdateNode (core.js:11109)
at debugCheckDirectivesFn (core.js:11069)
at Object.eval [as updateDirectives] (CalendarDisplayComponent.html:2)
at Object.debugUpdateDirectives [as updateDirectives] (core.js:11061)
at checkAndUpdateView (core.js:10458)
at callViewAction (core.js:10699)
TS file:
this.baseService.getcalendarSearchResult(event).subscribe(resp => {
  if (resp) {
    this.events = [];
    this.events = [
      { "Id": 384596, "title": "HR-Infotag", "start": "2018-10-16T08:00:00", "end": "2018-10-16T18:00:00", "editable": false, "Overlap": false, "ClassName": "" },
      { "Id": 384597, "title": "HR-Infotag", "start": "2018-10-17T08:00:00", "end": "2018-10-17T18:00:00", "editable": false, "Overlap": false, "ClassName": "" }
    ];
    // this._sharedService.setCalendarAPIResponse(this.events)
    // this.router.
  }
});
HTML file:
<p-schedule [events]="events" [header]="header" [defaultDate]="defaultDate" [editable]="true" [options]="options"></p-schedule>
Thanks in advance!

I did some R&D on the above question and finally changed the Angular version from 6 to 5.2, and accordingly changed the PrimeNG and FullCalendar (3.9) versions, and now everything is working fine.
I read in an article that there is a compatibility issue between Angular 6 and the PrimeNG Schedule component.

Related

Array in Pinia becomes a proxy object. How can I use it with includes() in a Vue 3 template?

I have this in my Pinia store:
mainNavigation: [
  {
    "_uid": "9acd58dd-a9de-4de1-bc2b-3e781cc42a59",
    "title": "Alla Produkter",
    "design": ["sale"],
    "divider": false,
    "component": "NestableNav",
    "permission": "all",
    "mobileButton": "Visa alla produkter",
    "desktopDesign": ["menuicon"],
  },
  // more objects
]
When I try to use them in Vue, both in setup and in the template, with
item.desktopDesign.includes('menuicon')
I get a 500 error.
If I do a typeof, it says it's an object, a proxy object, with 0 as a key.
I've tried .value, but it doesn't work in code and shouldn't be necessary in the template.
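For illustration, here is a minimal sketch of the kind of access described above, written for a component's script setup block; the useMainStore() composable and the store path are hypothetical names, not taken from the question:

// inside <script setup> of a Vue 3 component
import { useMainStore } from '@/stores/main' // hypothetical store module

const store = useMainStore()

// store.mainNavigation comes back wrapped in a reactive Proxy around the array;
// this is the check the question attempts, applied to a single item:
const item = store.mainNavigation[0]
const hasMenuIcon = item.desktopDesign.includes('menuicon')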
Thanks

xdmOptions with remark-disable-tokenizers to disable "indented code block"

I'm creating a blog using Next.js + MDX-Bundler and trying to use remark-disable-tokenizers to disable the "indented code block" rule, but I'm not able to make it work. I found a reference here which says that we can use remark-disable-tokenizers for this purpose.
Here are my xdmOptions for reference:
import disableTokens from 'remark-disable-tokenizers';

xdmOptions(options) {
  options.rehypePlugins = [
    ...(options.rehypePlugins ?? []),
    rehypeSlug,
    rehypeCodeTitles,
    rehypePrism,
    [
      disableTokens,
      {
        block: [
          ['indentedCode', 'indented code is not supported by MDX-Bundler']
        ]
      }
    ],
    [
      rehypeAutolinkHeadings,
      {...}
    ]
  ];
  return options;
},

Behavior change in speech-rule-engine 3.2?

I'm using the speech-rule-engine to generate English text from MathML. When trying to upgrade from v3.1.1 to v3.2.0 I'm seeing tests fail for reasons I don't understand.
I created a simple two file project that illustrates the issue:
package.json
{
  "name": "failure-example",
  "license": "UNLICENSED",
  "private": true,
  "engines": {
    "node": "14.15.5",
    "npm": "6.14.11"
  },
  "scripts": {
    "test": "jest"
  },
  "dependencies": {
    "speech-rule-engine": "3.2.0"
  },
  "devDependencies": {
    "jest": "^26.6.3"
  },
  "jest": {
    "notify": false,
    "silent": true,
    "verbose": true
  }
}
example.test.js
const sre = require('speech-rule-engine');

beforeAll(() => {
  sre.setupEngine({
    domain: 'mathspeak'
  });
});

test('simple single math', () => {
  expect(JSON.parse(JSON.stringify(sre.engineSetup(), ['domain', 'locale', 'speech', 'style'])))
    .toEqual({
      locale: 'en',
      speech: 'none',
      style: 'default',
      domain: 'mathspeak',
    });
  expect(sre.engineReady())
    .toBeTruthy();
  expect(sre.toSpeech('<math><mrow><msup><mn>3</mn><mn>7</mn></msup></mrow></math>'))
    .toBe('3 Superscript 7');
});
Running npm install and npm run test results in a failure because SRE returns 37 instead of 3 Superscript 7. Editing package.json to use v3.1.1 of the engine and rerunning results in a passing test.
Obviously something has changed, but I'm totally missing what I need to do to adapt. Has anyone else encountered this, or can you see what I clearly do not?
Problem solved, with the help of the maintainer of SRE. The problem is not in 3.2.0, but that jest does not wait for SRE to be ready. The test was only correct by a fluke in 3.1.1, as the rules were compiled into the core. The following test fails with the above setup in 3.1.1 as well, because the locale is not loaded:
expect(sre.toSpeech('<math><mo>=</mo></math>'))
.toBe('equals');
Expected: "equals"
Received: "="
The main reason is that jest fails to load the locale file. Setting "silent": false will show the error:
Unable to load file: /tmp/tests/node_modules/speech-rule-engine/lib/mathmaps/en.js
TypeError: Cannot read property 'readFileSync' of null
The reason for this error is that jest does not know that it runs in node. Adding:
"testEnvironment": "node",
to the jest configuration in package.json causes the expected behavior.
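For reference, here is a sketch of the equivalent configuration expressed as a standalone jest.config.js (the answer above simply adds the key to the "jest" block in package.json; use one location or the other, not both):

// jest.config.js -- same settings as the "jest" block in package.json above,
// with the test environment fix applied
module.exports = {
  notify: false,
  silent: true,
  verbose: true,
  testEnvironment: 'node',
};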

Jupyter Notebook validation failed

When I open a file in Jupyter Notebook, I get this error.
Notebook validation failed: {'model_id': '47c3f67b01814c2baeeca6efa0a79e2d', 'version_major': 2, 'version_minor': 0} is not valid under any of the given schemas:
{
  "model_id": "47c3f67b01814c2baeeca6efa0a79e2d",
  "version_major": 2,
  "version_minor": 0
}
I had the same problem and this solution worked like a charm:
https://github.com/jupyter/nbformat/issues/161
TL;DR: open your file in a text editor and replace "nbformat_minor": 1 with "nbformat_minor": 4.
Check if you activated the conda environment correctly. In my case the conda environment was not correctly activated, so that was the problem. Once I activated it correctly, everything worked normally.
I think different issues cause the error; one of them is:
The bug happens because the code blocks from Colab are missing "outputs": [] and "execution_count": null.
{
  "cell_type": "code",
  "metadata": {
    "id": "e2769b9e0005"
  },
  "source": [
    "!pip install git+https://github.com/google/starthinker\n"
  ]
},
To fix it, add the missing fields to the Colab source for every code cell (markdown cells don't need them); a scripted version of this fix is sketched after the corrected example below:
{
  "cell_type": "code",
  "execution_count": null,
  "outputs": [],
  "metadata": {
    "id": "e2769b9e0005"
  },
  "source": [
    "!pip install git+https://github.com/google/starthinker\n"
  ]
},
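As a rough sketch of that scripted fix (assuming the notebook is saved as notebook.ipynb; adjust the path and the indentation setting to taste):

// patch-notebook.js -- add the fields the validator expects to every code cell
const fs = require('fs');

const path = 'notebook.ipynb'; // assumed filename
const nb = JSON.parse(fs.readFileSync(path, 'utf8'));

for (const cell of nb.cells) {
  if (cell.cell_type === 'code') {
    // only code cells need these fields; markdown cells are left untouched
    if (!('outputs' in cell)) cell.outputs = [];
    if (!('execution_count' in cell)) cell.execution_count = null;
  }
}

fs.writeFileSync(path, JSON.stringify(nb, null, 1) + '\n');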

How do I prevent Microsoft.Automation/automationAccounts/Compilationjobs from always running in an ARM deployment?

My ARM template is below; it is a nested template inside a bigger ARM template. For some reason the DSC compilation job always runs on each deployment. I expected it not to run if it had already run before. How do I control this behavior? I tried using "incrementNodeConfigurationBuild": "false" but it did not do the trick.
{
  "name": "WorkerNodeDscConfiguration",
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2017-05-10",
  "resourceGroup": "[parameters('automationAccountRGName')]",
  "dependsOn": [],
  "properties": {
    "mode": "Incremental",
    "template": {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.1",
      "resources": [
        {
          "apiversion": "2015-10-31",
          "location": "[reference(variables('automationAccountResourceId'), '2018-01-15','Full').location]",
          "name": "[parameters('automationAccountName')]",
          "type": "Microsoft.Automation/automationAccounts",
          "properties": {
            "sku": {
              "name": "Basic"
            }
          },
          "tags": {},
          "resources": [
            {
              "name": "workernode",
              "type": "configurations",
              "apiVersion": "2018-01-15",
              "location": "[reference(variables('automationAccountResourceId'), '2018-01-15','Full').location]",
              "dependsOn": [
                "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]"
              ],
              "properties": {
                "state": "Published",
                "overwrite": "false",
                "incrementNodeConfigurationBuild": "false",
                "Source": {
                  "Version": "1.2",
                  "type": "uri",
                  "value": "[parameters('WorkerNodeDSCConfigURL')]"
                }
              }
            },
            {
              "name": "[guid(resourceGroup().id, deployment().name)]",
              "type": "Compilationjobs",
              "apiVersion": "2018-01-15",
              "tags": {},
              "dependsOn": [
                "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'))]",
                "[concat('Microsoft.Automation/automationAccounts/', parameters('AutomationAccountName'),'/Configurations/workernode')]"
              ],
              "properties": {
                "configuration": {
                  "name": "workernode"
                },
                "incrementNodeConfigurationBuild": "false",
                "parameters": {
                  "WebServerContentURL": "[parameters('WebServerContentURL')]"
                }
              }
            }
          ]
        }
      ]
    }
  }
}
In short, AFAIK you should be able to control this behaviour with 'condition'.
To explain it in detail: the DSC compilation jobs resource always runs on each deployment because, when we use that resource (i.e., Microsoft.Automation/automationAccounts/compilationjobs) in an ARM template, IMHO what it does behind the scenes is basically click the 'Compile' button of the DSC configuration.
If you click that 'Compile' button, the compilation job runs even if the job has already been compiled. You can check the same thing manually as well.
So AFAIK that is the reason the compilation job always runs on each deployment.
What you could do is update your ARM template with 'condition' (for more information, refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-manager-templates-resources#condition and https://learn.microsoft.com/en-us/azure/architecture/building-blocks/extending-templates/conditional-deploy) and then wrap the deployment in a piece of PowerShell like the sample below, which determines whether the compilation job for the particular DSC configuration has already completed and then deploys the template by passing an inline parameter value, or by updating the condition parameter in the parameters template file with a 'new' or 'existing' value accordingly (for more information, refer to https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-template-deploy#pass-parameter-values).
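For illustration, here is a sketch of how such a condition could sit at the top of the Compilationjobs resource. It assumes a string parameter has been added to the template (the exampleString name below simply matches the -exampleString inline parameter used in the script that follows); the rest of the resource stays unchanged, and the job is only deployed when the value 'new' is passed:

"condition": "[equals(parameters('exampleString'), 'new')]",
"name": "[guid(resourceGroup().id, deployment().name)]",
"type": "Compilationjobs",
"apiVersion": "2018-01-15",

The PowerShell wrapper that decides which value to pass could then look like this: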
$DscCompilationJob = Get-AzAutomationDscCompilationJob -AutomationAccountName AUTOMATIONACCOUNTNAME -ResourceGroupName RESOURCEGROUPNAME | Sort-Object -Descending -Property CreationTime | Select -First 1 | Select Status
$DscCompilationJobStatus = $DscCompilationJob.Status
if ($DscCompilationJobStatus -ne "Completed") {
    $DscCompilationJobStatusInlineParameter = "new"
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName testgroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -exampleString $DscCompilationJobStatusInlineParameter
    # or update the condition parameter in the parameters template file with the value "new" and use the command below to deploy the template
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -TemplateParameterFile TEMPLATEFILEPATH\demotemplate.parameters.json
} else {
    $DscCompilationJobStatusInlineParameter = "existing"
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName testgroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -exampleString $DscCompilationJobStatusInlineParameter
    # or update the condition parameter in the parameters template file with the value "existing" and use the command below to deploy the template
    New-AzResourceGroupDeployment -Name ExampleDeployment -ResourceGroupName ExampleResourceGroup -TemplateFile TEMPLATEFILEPATH\demotemplate.json -TemplateParameterFile TEMPLATEFILEPATH\demotemplate.parameters.json
}
And regarding the incrementNodeConfigurationBuild property: IMHO it only controls whether a new build version of the node configuration is created, i.e., when incremental node configuration build is set to false, it does not override the existing node configuration by creating a new one named CONFIGNAME[<2>] (where the version number is incremented based on the version number already present).
Hope this helps!! Cheers!! :)
