ASP.NET Core System.Runtime not found on test

Been trying to run ASP.NET Core 1.1 xUnit test coverage from PowerShell with no success. When running I get the following error:
System.IO.FileNotFoundException: Could not load file or assembly 'System.Runtime, Version=4.1.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.
The PowerShell script (the error occurs on the last line):
$solutionDir = "C:\Projects\AdministrationPortal.RestAPI"
$packagesDir = $solutionDir + "\packages"
$openCoverDir = (Get-ChildItem $packagesDir -Filter "OpenCover*" -Directory | % { $_.FullName })
$openCoverRunner = "$openCoverDir\tools\OpenCover.Console.exe"
$xunitRunnerDir = (Get-ChildItem $packagesDir -Filter "xunit.runner.console*" -Directory | % { $_.FullName })
$xunitRunner = "$xunitRunnerDir\tools\xunit.console.exe"
$unitTestsProjDir = (Get-ChildItem $solutionDir\test -Filter "*Test*" -Directory | % { $_.FullName })
$testsDllDir = "$unitTestsProjDir\bin\Debug\netcoreapp1.1"
$testDllFile = (Get-ChildItem $testsDllDir -File | Where-Object { $_.Name -like "*Test*.dll" -and $_.Name -notlike "*xunit.runner.visualstudio.testadapter*" })
$testDll = "$testsDllDir\$testDllFile"
$categories = "Integration;Unit"
$nameSpaceToTest = "AdminPortal.RestAPI.Areas.FeatureToggle.Services;AdminPortal.RestAPI.Areas.FeatureToggle.Storage;AdminPortal.RestAPI.Areas.Text.Services;AdminPortal.RestAPI.Areas.Text.Storage"
$nameSpaceToSkip = ""
$assemblyToTest = "AdminPortal.RestAPI"
$categoriesArray = (($categories -split ';') | ? {$_})
$nameSpaceToTestArray = (($nameSpaceToTest -split ';') | ? {$_})
$nameSpaceToSkipArray = (($nameSpaceToSkip -split ';') | ? {$_})
$nameSpaceArray = ""
ForEach ($item In $nameSpaceToTestArray) {
    $nameSpaceArray += "+[$assemblyToTest*]" + $item + "* "
}
ForEach ($item In $nameSpaceToSkipArray) {
    $nameSpaceArray += "-[$assemblyToTest*]" + $item + "* "
}
$coverageReportDir = "C:\tmp\Coverage"
foreach ($item in $categoriesArray) {
    $coverageReportXML = $coverageReportDir + "\coverage." + $item + ".xml"
    Write-Output $coverageReportXML
    & $openCoverRunner -register:user -target:"$xunitRunner" "-targetargs:$testDll" -targetdir:"$testsDllDir" -output:"$coverageReportXML" "-filter:$nameSpaceArray"
}
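One possible cause worth checking (an assumption; the error alone cannot confirm it): xunit.console.exe from the xunit.runner.console package is the desktop .NET Framework runner, and it cannot load netcoreapp1.1 test assemblies, which produces exactly this kind of System.Runtime load failure. A common workaround is to have OpenCover drive dotnet test instead of the console runner; a sketch, with the dotnet path and switches illustrative:

```powershell
# Sketch: point OpenCover at the dotnet host instead of xunit.console.exe.
# -oldstyle is commonly needed so OpenCover can instrument .NET Core assemblies.
$dotnet = "C:\Program Files\dotnet\dotnet.exe"
& $openCoverRunner -register:user `
    -target:"$dotnet" `
    "-targetargs:test $unitTestsProjDir" `
    -oldstyle `
    -output:"$coverageReportXML" `
    "-filter:$nameSpaceArray"
```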
My initial thought was that .NET Framework 4.6 does not have System.Runtime, but adding additional framework imports that do have it yields the same result. The project.json file of the test project:
{
  "version": "1.0.0-*",
  "testRunner": "xunit",
  "dependencies": {
    "Microsoft.NETCore.App": {
      "type": "platform",
      "version": "1.1.0"
    },
    "xunit": "2.2.0-beta5-build3474",
    "dotnet-test-xunit": "2.2.0-preview2-build1029",
    "xunit.runner.visualstudio": "2.2.0-beta3-build1187",
    "Moq": "4.6.38-alpha",
    "System.Linq": "4.3.0",
    "Microsoft.DotNet.InternalAbstractions": "1.0.0",
    "OpenCover": "4.6.519",
    "ReportGenerator": "2.5.2",
    "Microsoft.CodeCoverage": "1.0.2",
    "xunit.runner.console": "2.2.0-beta5-build3474",
    "System.Runtime": "4.3.0",
    "AdminPortal.RestAPI": "1.0.0-*"
  },
  "frameworks": {
    "netcoreapp1.1": {
      "imports": [
        "dnxcore50",
        "dotnet5.6",
        "portable-net46"
      ]
    }
  }
}


Updating an array element identified by other fields in the object using jq

Goal
I'd like to add a proxy-url field to the currently active clusters entry of my kubeconfig file. The "active" cluster is identified by the "active" context, which is itself identified by a top-level key current-context. Simplified, the JSON object looks something like:
{
  "clusters": [
    {
      "name": "cluster1",
      "field": "field1"
    },
    {
      "name": "cluster2",
      "field": "field2"
    }
  ],
  "contexts": [
    {
      "name": "context1",
      "context": {
        "cluster": "cluster1"
      }
    },
    {
      "name": "context2",
      "context": {
        "cluster": "cluster2"
      }
    }
  ],
  "current-context": "context1"
}
And I'd like to update the clusters entry for cluster1 from:
{
  "name": "cluster1",
  "field": "field1"
}
to
{
  "name": "cluster1",
  "field": "field1",
  "proxy-url": "my-url"
}
First attempt
jq '. as $o
| $o."current-context" as $current_context_name
| $o.contexts[] | select(.name == $current_context_name) as $context
| $o.clusters[] | select(.name == $context.context.cluster)
| .proxy_id |= "my-url"'
gives me
{
  "name": "cluster1",
  "field": "field1",
  "proxy_id": "my-url"
}
-- great! But I need the rest of the object too.
Parentheses almost work
With parentheses, I can get the whole object back & add a "proxy-url" field to the active context, but I can't take it one step further to update the active cluster. This filter:
jq '(. as $o
| $o."current-context" as $current_context_name
| $o.contexts[] | select(.name == $current_context_name)
| ."proxy-url")
|= "my-url"'
works mint:
{
  "clusters": [...], // omitted for brevity, unchanged
  "contexts": [
    {
      "name": "context1",
      "context": {
        "cluster": "cluster1"
      },
      "proxy-url": "my-url" // tada!
    },
    {...} // omitted for brevity, unchanged
  ],
  "current-context": "context1"
}
Trying to take it one step further (to update the cluster identified by that context, instead):
jq '(. as $o
| $o."current-context" as $current_context_name
| $o.contexts[] | select(.name == $current_context_name) as $context
| $o.clusters[] | select(.name == $context.context.cluster)
| ."proxy-url")
|= "my-url"'
gives me the following error:
jq: error (at <stdin>:26): Invalid path expression near attempt to access element "clusters" of {"clusters":[{"name":"clus...
exit status 5
How can I use the $context.context.cluster result to update the relevant clusters entry? I don't understand why this approach works for adding something to contexts but not to clusters.
Dirty solution
I can kludge together a new clusters entry & merge that with the top-level object:
jq '. as $o
| $o."current-context" as $current_context_name
| $o.contexts[] | select(.name == $current_context_name) as $context
| $o + {"clusters": [($o.clusters[] | select(.name == $context.context.cluster)."proxy-url" |= "my-url")]}'
but this feels a bit fragile.
This solution retrieves the active cluster using an INDEX construction, then just sets the new field directly without modifying the context:
jq '
INDEX(.contexts[]; .name)[."current-context"].context.cluster as $cluster
| (.clusters[] | select(.name == $cluster))."proxy-url" = "my-url"
'
{
  "clusters": [
    {
      "name": "cluster1",
      "field": "field1",
      "proxy-url": "my-url"
    },
    {
      "name": "cluster2",
      "field": "field2"
    }
  ],
  "contexts": [
    {
      "name": "context1",
      "context": {
        "cluster": "cluster1"
      }
    },
    {
      "name": "context2",
      "context": {
        "cluster": "cluster2"
      }
    }
  ],
  "current-context": "context1"
}
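A note on why the failing filter errors out: the left-hand side of |= must reduce to a path into the original input, and once the stream has been re-rooted through a variable binding and a pipe, jq can no longer express the remaining navigation as such a path, hence "Invalid path expression". The INDEX-based filter avoids this by keeping the assignment target a plain path. The same lookup-then-update logic, sketched in Python for anyone who wants to sanity-check it outside jq (illustrative, not part of the original answer):

```python
import json

doc = json.loads("""{
  "clusters": [{"name": "cluster1", "field": "field1"},
               {"name": "cluster2", "field": "field2"}],
  "contexts": [{"name": "context1", "context": {"cluster": "cluster1"}},
               {"name": "context2", "context": {"cluster": "cluster2"}}],
  "current-context": "context1"
}""")

# Resolve the active context by name, then the cluster it points at.
active = next(c for c in doc["contexts"] if c["name"] == doc["current-context"])
target = active["context"]["cluster"]

# Update the matching clusters entry in place; everything else is untouched.
for cluster in doc["clusters"]:
    if cluster["name"] == target:
        cluster["proxy-url"] = "my-url"

print(doc["clusters"][0]["proxy-url"])  # my-url
```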

File uploads to s3 using Meteor Slingshot stopped working with error 403

I have been using meteor-slingshot to upload files to amazon S3 for some time. All of a sudden none of my upload functions seems to work. I am getting the following error when I try to upload files.
Error uploading errorClass {error: "Forbidden - 403", reason: "Failed to upload file to cloud storage", details: undefined, message: "Failed to upload file to cloud storage [Forbidden - 403]", errorType: "Meteor.Error"…}
All the uploads were working fine last week, and all of a sudden I am getting the above error in all my projects. I don't know if it's some problem with the slingshot package or if some change has been made to S3 policies.
This is the code I am using
Slingshot.fileRestrictions("myFileUploads", {
  allowedFileTypes: ["image/png", "image/jpeg", "image/gif"],
  maxSize: null // use null for unlimited (e.g. 10 * 1024 * 1024 for 10 MB)
});
Slingshot.createDirective("myFileUploads", Slingshot.S3Storage, {
  bucket: "*****",
  acl: "public-read",
  region: "ap-southeast-1",
  AWSAccessKeyId: "MyAccessKey",
  AWSSecretAccessKey: "MySecretKey",
  authorize: function () {
    return true;
  },
  key: function (file) {
    var imageName = getUniqueID();
    return "images/" + imageName;
  }
});
getUniqueID = function () {
  this.length = 8;
  this.timestamp = +new Date;
  var ts = this.timestamp.toString();
  var parts = ts.split("").reverse();
  var id = "";
  var _getRandomInt = function (min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
  };
  for (var i = 0; i < this.length; ++i) {
    var index = _getRandomInt(0, parts.length - 1);
    id += parts[index];
  }
  return id;
};
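Incidentally, the helper above assigns length and timestamp onto this, which in a plain function call means the global object. That is unrelated to the 403, but a self-contained version of the same idea (an illustrative rewrite, same output shape) avoids the leak:

```javascript
// Build an 8-character ID from random digits of the reversed
// current timestamp, without leaking properties onto `this`.
function getUniqueID() {
  var length = 8;
  var parts = String(Date.now()).split('').reverse();
  var id = '';
  for (var i = 0; i < length; i++) {
    id += parts[Math.floor(Math.random() * parts.length)];
  }
  return id;
}

console.log(getUniqueID().length); // 8
```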
This is my CORS configuration
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>HEAD</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
And my bucket policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::MyBucketName/*"
    }
  ]
}

Meteor, Apollo & Sequelize: Cannot find module 'config.json'

Recently I added a fresh install of Meteor alongside Apollo and Sequelize and created a .sequelizerc file, which works as required, but whenever I run meteor it fails with: Error: Cannot find module '/lib/database/mysql/models/..config.json'
Application Structure:
/.meteor
/client
/lib
  /database
    /mysql
      /migrations
      /models
        index.js
      /seeders
      config.json
/node_modules
  /.bin
.sequelizerc
/server
.sequelizerc file
var path = require('path');

module.exports = {
  'config': path.resolve('../../lib/database/mysql', 'config.json'),
  'migrations-path': path.resolve('../../lib/database/mysql', 'migrations'),
  'models-path': path.resolve('../../lib/database/mysql', 'models'),
  'seeders-path': path.resolve('../../lib/database/mysql', 'seeders'),
}
/lib/database/mysql/models/index.js file
var fs = require('fs');
var path = require('path');
var Sequelize = require('sequelize');
var basename = path.basename(module.filename);
var env = process.env.NODE_ENV || 'development';
var config = require(__dirname + '/..\config.json')[env];
var db = {};

if (config.use_env_variable) {
  var sequelize = new Sequelize(process.env[config.use_env_variable]);
} else {
  var sequelize = new Sequelize(config.database, config.username, config.password, config);
}

fs
  .readdirSync(__dirname)
  .filter(function(file) {
    return (file.indexOf('.') !== 0) && (file !== basename) && (file.slice(-3) === '.js');
  })
  .forEach(function(file) {
    var model = sequelize['import'](path.join(__dirname, file));
    db[model.name] = model;
  });

Object.keys(db).forEach(function(modelName) {
  if (db[modelName].associate) {
    db[modelName].associate(db);
  }
});

db.sequelize = sequelize;
db.Sequelize = Sequelize;

module.exports = db;
package.json
{
  "name": "meteor-apollo-sequelize",
  "private": true,
  "scripts": {
    "start": "meteor run"
  },
  "dependencies": {
    "apollo-client": "^0.3.12",
    "apollo-server": "^0.1.1",
    "express": "^4.14.0",
    "graphql": "^0.6.2",
    "graphql-tools": "^0.6.4",
    "meteor-node-stubs": "^0.2.3",
    "mysql": "^2.11.1",
    "sequelize": "^3.24.0",
    "sequelize-cli": "^2.4.0"
  }
}
Fix your backslash/forward slash - change this line:
var config = require(__dirname + '/..\config.json')[env];
to
var config = require(__dirname + '/../config.json')[env];

weird `Method cannot be called on possibly null / undefined value`

The following narrowed-down code:
// @flow
'use strict';
import assert from 'assert';

class Node<V, E> {
  value: V;
  children: ?Map<E, Node<V, E>>;
  constructor(value: V) {
    this.value = value;
    this.children = null;
  }
}

function accessChildren(tree: Node<number, string>): void {
  if (tree.children != null) {
    assert(true); // if you comment this line Flow is ok
    tree.children.forEach((v, k) => {});
  } else {
  }
}
… fails Flow type checking with the following message:
$ npm run flow
> simple-babel-serverside-node-only-archetype@1.0.0 flow /home/blah/blah/blah
> flow; test $? -eq 0 -o $? -eq 2
es6/foo.js:21
21: tree.children.forEach( (v,k)=>{});
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ call of method `forEach`. Method cannot be called on possibly null value
21: tree.children.forEach( (v,k)=>{});
^^^^^^^^^^^^^ null
es6/foo.js:21
21: tree.children.forEach( (v,k)=>{});
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ call of method `forEach`. Method cannot be called on possibly undefined value
21: tree.children.forEach( (v,k)=>{});
^^^^^^^^^^^^^ undefined
Found 2 errors
If the line reading assert(true) is commented out, Flow is satisfied!
What gives?
PS: In case anyone wonders, my .flowconfig, .babelrc and package.json files are nondescript:
.flowconfig
$ cat .flowconfig
[options]
esproposal.class_static_fields=enable
.babelrc
$ cat .babelrc
{
  "presets": ["es2015"],
  "plugins": ["transform-object-rest-spread", "transform-flow-strip-types", "transform-class-properties"]
}
package.json
$ cat package.json
{
  "name": "simple-babel-serverside-node-only-archetype",
  "version": "1.0.0",
  "description": "",
  "main": [
    "index.js"
  ],
  "scripts": {
    "build": "babel es6 --out-dir es5 --source-maps",
    "build-watch": "babel es6 --out-dir es5 --source-maps --watch",
    "start": "node es5/index.js",
    "flow": "flow; test $? -eq 0 -o $? -eq 2"
  },
  "author": "",
  "license": "ISC",
  "devDependencies": {
    "babel-cli": "^6.6.5",
    "babel-core": "^6.7.4",
    "babel-plugin-transform-class-properties": "^6.10.2",
    "babel-plugin-transform-flow-strip-types": "^6.8.0",
    "babel-polyfill": "^6.7.4",
    "babel-preset-es2015": "^6.9.0",
    "babel-runtime": "^6.6.1",
    "flow-bin": "^0.27.0"
  },
  "dependencies": {
    "babel-plugin-transform-object-rest-spread": "^6.8.0",
    "babel-polyfill": "^6.7.4",
    "source-map-support": "^0.4.0"
  }
}
Your case is described here.
Flow cannot know that assert doesn't change the tree.
Add the following lines to your code and run it: you will get a runtime error, because the assert function will set tree.children to null when called.
const root = new Node(1);
const child = new Node(2);
root.children = new Map([['child', child]]);
assert = () => root.children = null;
accessChildren(root);
Yes, it is pretty weird code, but Flow doesn't know that you won't write it.
Others have pointed to the right explanation. Fortunately this works:
// @flow
'use strict';
import assert from 'assert';

class Node<V, E> {
  value: V;
  children: ?Map<E, Node<V, E>>;
  constructor(value: V) {
    this.value = value;
    this.children = null;
  }
}

function accessChildren(tree: Node<number, string>): void {
  const children = tree.children; // save possibly mutable reference to local
  if (children != null) {
    assert(true); // if you comment this line Flow is ok
    children.forEach((v, k) => {});
  } else {
  }
}
Also, in the future Flow will have read-only properties; by declaring children as a read-only property in the class, Flow should be able to type-check the original code.

How to execute tasks based on a subfolder with Grunt and Grunt-Watch

I want to be able to have different subprojects inside my main project. For example:
my-project/
  Gruntfile.js
  subproject1/
    index.html
    scss/
      main.scss
  subproject2/
    index.html
    scss/
      main.scss
I want to be able to modify a file in subproject1 without triggering subproject2 tasks.
As of right now I'm configuring my gruntfile like so:
watch: {
  subproject1: {
    files: ['subproject1/*.html', 'subproject1/scss/**/*.scss'],
    tasks: ['sass', 'premailer:subproject1']
  },
  subproject2: {
    files: ['subproject2/*.html', 'subproject2/scss/**/*.scss'],
    tasks: ['sass', 'premailer:subproject2']
  }
},
premailer: {
  subproject1: {
    options: {
      css: 'subproject1/css/main.css',
      verbose: false
    },
    files: [
      {
        'subproject1/dist/index.html': 'subproject1/index.html'
      }
    ]
  },
  subproject2: {
    options: {
      css: 'subproject2/css/main.css',
      verbose: false
    },
    files: [
      {
        'subproject2/dist/index.html': 'subproject2/index.html'
      }
    ]
  }
}
Is there a way to dynamically tell Grunt which task to run depending on the file modified (e.g., I modify folder/index.html, then run premailer:folder), or is this the only way to achieve it?
You can check all the folders in your main folder inside your Gruntfile using the grunt.file methods, create an array of subproject names, and then use forEach to build your tasks dynamically.
Something like this should work:
/*global module:false*/
module.exports = function(grunt) {
  var mycwd = "./";
  var tempFileList = grunt.file.expand(
    {
      filter: function (src) {
        return grunt.file.isDir(src);
      }
    },
    [ mycwd + "!(Gruntfile.js|node_modules|package.json)" ] // structure to analyse
  );
  // Create an empty array to put all elements in, once cleaned.
  var fileList = [];
  tempFileList.forEach(function(url) {
    fileList.push(url.replace(mycwd, ""));
  });
  var watchObject = {};
  var premailerObject = {};
  fileList.forEach(function(name) {
    watchObject[name] = {
      files: [name + '/*.html', name + '/scss/**/*.scss'],
      tasks: ['sass', 'premailer:' + name]
    };
    var filesObject = {};
    filesObject[name + '/dist/index.html'] = name + '/index.html';
    premailerObject[name] = {
      options: { css: name + '/css/main.css', verbose: false },
      files: [ filesObject ]
    };
  });
  var configObject = {
    watch: watchObject,
    premailer: premailerObject
  };
  // just to check the final structure
  console.log(configObject);
  grunt.initConfig(configObject);
};
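As an aside, grunt-contrib-watch also emits a watch event, which is another way to get per-file dispatch: listen for the changed path and narrow the task list at runtime. The event wiring belongs in the Gruntfile via grunt.event.on('watch', ...); the mapping itself is plain string handling (sketch below, names illustrative):

```javascript
// Map a changed file path to its subproject's premailer task.
// In a Gruntfile this would run inside grunt.event.on('watch', ...),
// which grunt-contrib-watch fires with the changed file's path.
function taskForChangedFile(filepath) {
  var subproject = filepath.split('/')[0];
  return 'premailer:' + subproject;
}

console.log(taskForChangedFile('subproject1/index.html')); // premailer:subproject1
```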
