Swagger API definition load - Can't use schemaId "$Module" - .NET Core

I am getting this exception when I create a domain object named Module:
InvalidOperationException: Can't use schemaId "$Module" for type "$Pro.Core.Domain.Module". The same schemaId is already used for type "$System.Reflection.Module"
Swashbuckle.AspNetCore.SwaggerGen.SchemaRepository.RegisterType(Type type, string schemaId)
It seems the name Module conflicts with System.Reflection.Module.
I have searched the internet and found the following two solutions that get this working:
1. Rename my Module class to something else (say, MyModule).
2. Configure custom schema IDs, like below:
public void ConfigureServices(IServiceCollection services)
{
    services.AddSwaggerGen(config =>
    {
        //some swagger configuration code.
        //use fully qualified object names
        config.CustomSchemaIds(x => x.FullName);
    });
}
But I want to understand what it is about the name Module that causes this error. I don't see any reason why I shouldn't be allowed to use this name for my domain class.
Is there any way to use a domain object named Module other than the one mentioned in point 2 above, and why is this happening in the first place?

Fixed by using
services.AddSwaggerGen(options =>
{
    options.CustomSchemaIds(type => type.ToString());
});
Find more details here.
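As for the "why": Swashbuckle derives each schemaId from the type's short name by default, so once System.Reflection.Module also gets registered in the schema repository it fights with Pro.Core.Domain.Module over the same id "Module"; ToString() (or FullName) is namespace-qualified and keeps the two apart. A minimal sketch of the difference (not from the original answer, reusing the Pro.Core.Domain.Module class from the question):
// Default schemaIds are based on the short type name, which is identical for both types:
Console.WriteLine(typeof(Pro.Core.Domain.Module).Name);         // Module
Console.WriteLine(typeof(System.Reflection.Module).Name);       // Module  -> same schemaId, hence the conflict
// ToString()/FullName include the namespace, so the generated schemaIds no longer clash:
Console.WriteLine(typeof(Pro.Core.Domain.Module).ToString());   // Pro.Core.Domain.Module
Console.WriteLine(typeof(System.Reflection.Module).ToString()); // System.Reflection.Module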

Related

Extend and override default Lighthouse directive

Is there any way I can override a directive such as src/Schema/Directives/WhereDirective.php? For instance, it doesn't support some methods on my custom builder. I know I can make another directive that extends it, like #myWhere, but that's dirty; it would be nice to be able to override #where itself.
I've searched around, but sadly found nothing about this!
I edited my composer.json and manipulated the class mappings. In this example, I wanted to override some cache classes.
"autoload": {
"psr-4": {
"App\\": "app/",
"Database\\Factories\\": "database/factories/",
"Database\\Seeders\\": "database/seeders/",
"Nuwave\\Lighthouse\\Cache\\": "lighthouseV6/cache/"
},
"exclude-from-classmap": [
"vendor/nuwave/lighthouse/src/Cache/CacheKeyAndTags.php",
"vendor/nuwave/lighthouse/src/Cache/CacheKeyAndTagsGenerator.php",
"vendor/nuwave/lighthouse/src/Cache/CacheDirective.php"
]
},
Then I created a folder "lighthouseV6/cache" in the root of the project and copied the classes I wanted to override from "vendor/nuwave/lighthouse/src/Cache" into it.
I found the solution. According to https://lighthouse-php.com/5/custom-directives/getting-started.html#register-directives:
When Lighthouse encounters a directive within the schema, it starts looking for a matching class in the following order:
1. User-defined namespaces as configured in config/lighthouse.php, defaults to App\GraphQL\Directives
2. The RegisterDirectiveNamespaces event is dispatched to gather namespaces defined by plugins, extensions or other listeners
3. Lighthouse's built-in directive namespace.
So it seemed like overriding could be possible, and it was.
I haven't tried the first method (App\GraphQL\Directive...), but that would probably work too. I went with the second method, the RegisterDirectiveNamespaces event, since I was writing a package.
Put all your directives in the same folder under one namespace, e.g.:
namespace SteveMoretz\Something\GraphQL\Directives;
Now, in a service provider (it can be your package's service provider, AppServiceProvider, or any other service provider), register the namespace your directives live under.
use Illuminate\Contracts\Events\Dispatcher;
use Illuminate\Support\ServiceProvider;
use Nuwave\Lighthouse\Events\RegisterDirectiveNamespaces;

class ScoutGraphQLServiceProvider extends ServiceProvider
{
    // boot() is resolved through the container, so the Dispatcher can be type-hinted here
    public function boot(Dispatcher $dispatcher): void
    {
        $dispatcher->listen(
            RegisterDirectiveNamespaces::class,
            static function (): string {
                return "SteveMoretz\\Something\\GraphQL\\Directives";
            }
        );
    }
}
That's it. As an example, I have overridden the #where directive: first I created a file named WhereDirective.php (the same name as the original), then put these contents in it:
<?php
namespace SteveMoretz\Something\GraphQL\Directives;

use Nuwave\Lighthouse\Scout\ScoutBuilderDirective;
use Nuwave\Lighthouse\Support\Contracts\ArgBuilderDirective;
use Nuwave\Lighthouse\Schema\Directives\BaseDirective;
use Nuwave\Lighthouse\Schema\Directives\WhereDirective as WhereDirectiveOriginal;
use Nuwave\Lighthouse\Support\Contracts\FieldResolver;

class WhereDirective extends WhereDirectiveOriginal
{
    public function handleBuilder($builder, $value): object
    {
        $clause = $this->directiveArgValue('clause', 'where');
        // do some other stuff too... my custom logic
        return $builder->{$clause}(
            $this->directiveArgValue('key', $this->nodeName()),
            $this->directiveArgValue('operator', '='),
            $value
        );
    }
}
Now whenever we use #where, my custom directive runs instead of the original one. But be careful what you do in this directive: don't rewrite the whole directive; try to extend the original and add more options to it, otherwise you will end up confusing yourself later!

Ignore and DoNotValidate not working for AutoMapper 8.1 and .NET Core 2.1

I have developed an Azure Function in .NET Core and configured AutoMapper in Startup.cs with builder.Services.AddAutoMapper(Assembly.GetAssembly(this.GetType()));
I am trying to create a map between a domain class (a table in EF Core) and a DTO. The domain class has a property RowVersion. I want this property to be ignored when mapping the DTO to the domain class.
For this I created a Profile class with my custom map, but it is not working.
I tried DoNotValidate, but it doesn't seem to work either.
Startup.cs
builder.Services.AddAutoMapper(Assembly.GetAssembly(this.GetType()));
public class MapperProfile : Profile
{
    public MapperProfile()
    {
        CreateMap<MyDto, MyDomain>().ForMember(i => i.RowVersion, opt => opt.Ignore());
    }
}
Service.cs
_mapper.Map<MyDomain>(myDtoInstance);
I am getting the below error:
Unmapped members were found. Review the types and members below.
Add a custom mapping expression, ignore, add a custom resolver, or modify the source/destination type
For no matching constructor, add a no-arg ctor, add optional arguments, or map all of the constructor parameters
Unmapped properties:
[5/28/2019 7:00:40 AM] RowVersion
ForMember references members of the destination type; in your case the member that needs handling is on the source, MyDto.
Use ForSourceMember instead:
public MapperProfile()
{
CreateMap<MyDto, MyDomain>().ForSourceMember(i => i.RowVersion, opt => opt.Ignore());
}
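Not from the original posts, but a rough way to verify the mapping configuration up front: MapperConfiguration and AssertConfigurationIsValid are standard AutoMapper APIs, and running the assertion reproduces the "Unmapped members were found" error at startup instead of on the first Map call. If Ignore() on ForSourceMember is flagged as obsolete in your AutoMapper version, DoNotValidate() should be the equivalent option. A sketch, assuming MyDto and MyDomain both carry a RowVersion property:
// Hypothetical standalone wiring of the profile shown above
var config = new AutoMapper.MapperConfiguration(cfg => cfg.AddProfile<MapperProfile>());
config.AssertConfigurationIsValid();              // throws while any member is still reported as unmapped
var mapper = config.CreateMapper();
var domain = mapper.Map<MyDomain>(myDtoInstance); // RowVersion is skipped once the ignore/do-not-validate is in place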

How to release or distribute an application that uses mikro-orm?

In the configuration I have to specify the paths to .js and .ts files defining entities:
MikroORM.init({
...
entitiesDirs: ["build/entities"],
entitiesDirsTs: ["src/entities"],
});
So, when I go to release or distribute the application, will I need to distribute the TypeScript code too? Or only the generated cache? Or both? Or... neither?
As of MikroORM v2.2:
You can now work with the default metadata provider; it will require entity source files only if you do not provide entity or type options in your decorators (you can use the entity callback to reference the entity class instead of using a string name in type, which is handy for refactoring via an IDE like WebStorm).
Original answer:
You should ship the TypeScript code too and let the cache regenerate on the server; the cache would be rebuilt anyway, as it checks the absolute path to the cached entity for invalidation.
If you don't want to ship the TypeScript code, you could implement your own cache adapter or metadata provider to get around this.
This is how you could implement a custom metadata provider that simply throws an error when the type option is missing:
import { MetadataProvider, Utils } from 'mikro-orm';
import { EntityMetadata } from 'mikro-orm/dist/decorators';
export class SimpleMetadataProvider extends MetadataProvider {
    async loadEntityMetadata(meta: EntityMetadata, name: string): Promise<void> {
        // init types and column names
        Object.values(meta.properties).forEach(prop => {
            if (prop.entity) {
                prop.type = Utils.className(prop.entity());
            } else if (!prop.type) {
                throw new Error(`type is missing for ${meta.name}.${prop.name}`);
            }
        });
    }
}
Then provide this class when initializing:
const orm = await MikroORM.init({
// ...
metadataProvider: SimpleMetadataProvider,
});
The value of type should be a JS type, like string/number/Date... You can inspect your cached metadata to be sure what values should be there.
Also keep in mind that without the TS metadata provider, you will need to specify the entity type in the @ManyToOne decorator too (either via the entity callback, or as a string via type).

Is it possible to configure everything within context?

I am trying to configure Audit.NET and define my custom logic for saving logs.
Is there a way to configure included entities within context?
I tried this
public ResidentMasterContext(DbContextOptions options) : base(options)
{
    AuditDataProvider = new DynamicDataProvider();
    Mode = AuditOptionMode.OptIn;
    IncludeEntityObjects = true;
    EntitySettings = new Dictionary<Type, EfEntitySettings>
    {
        { typeof(Apartment), new EfEntitySettings() }
    };
}
but OnScopeSaving is not firing. And when I change the mode to OptOut, it picks up all entities.
I guess you are referring to the Audit.NET EntityFramework extension.
If you use OptIn, you need to mark the included entities with the [AuditInclude] attribute, or use the Include methods of the fluent API. You can check the documentation here.
An example using the fluent API for the EF configuration, to include only the entities User and UserDetail:
Audit.EntityFramework.Configuration.Setup()
.ForContext<ResidentMasterContext>(config => config
.IncludeEntityObjects())
.UseOptIn()
.Include<User>()
.Include<UserDetail>();
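For the attribute-based alternative mentioned above, you can keep OptIn mode and decorate the entities to be audited; a short sketch reusing the Apartment entity from the question ([AuditInclude] lives in the Audit.EntityFramework namespace):
using Audit.EntityFramework;

// With OptIn mode, only entities carrying [AuditInclude] are audited
[AuditInclude]
public class Apartment
{
    // hypothetical members, for illustration only
    public int Id { get; set; }
}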
An example of the output configuration:
Audit.Core.Configuration.Setup()
.UseDynamicProvider(_ => _.OnInsertAndReplace(auditEvent =>
{
Console.WriteLine(auditEvent.ToJson());
}));

Q: How to extend the routing config in Rebus

I have a Rebus config project which is shared by many Web API projects.
So basically, it looks like:
Web api 1 ==> Shared Rebus Config
Web api 2 ==> Shared Rebus Config
Web api 3 ==> Shared Rebus Config
My question is: if I have some messages & handlers in the Web API 3 project, how can I configure the routing for them?
My current config:
var autofacContainerAdapter = new AutofacContainerAdapter(container);
return Configure
.With(autofacContainerAdapter)
.Serialization(s => s.UseNewtonsoftJson())
.Routing(r =>
{
r.TypeBased()
.MapAssemblyOf<ProjectA.MessageA>(EnvironmentVariables.ServiceBusQueueName)
.MapAssemblyOf<ProjectB.MessageB>(EnvironmentVariables.ServiceBusQueueName);
})
.Sagas(s =>
{
s.StoreInSqlServer(EnvironmentVariables.ConnectionString, "Saga", "SagaIndex");
})
.Options(o =>
{
o.LogPipeline();
o.EnableDataBus().StoreInBlobStorage(EnvironmentVariables.AzureStorageConnectionString, EnvironmentVariables.BlobStorageContainerName);
o.EnableSagaAuditing().StoreInSqlServer(EnvironmentVariables.ConnectionString, "Snapshots");
})
.Logging(l =>
{
l.Use(new SentryLogFactory());
})
.Transport(t =>
{
t.UseAzureServiceBus(EnvironmentVariables.AzureServiceBusConnectionString, EnvironmentVariables.ServiceBusQueueName).AutomaticallyRenewPeekLock();
})
.Start();
Well... as you have probably already found out, it is not possible to make additional calls to the .Routing(r => r.TypeBased()....) part. Therefore, I can see two fairly easy ways forward:
1: Simply pass additional parameters to your shared configuration method from the outside, e.g. something like this:
var additionalEndpointMappings = new Dictionary<Assembly, string>
{
    { typeof(Whatever).Assembly, "another-queue" }
};
var bus = CreateBus("my-queue", additionalEndpointMappings);
which of course then needs to be handled appropriately in the .Routing(...) configuration callback.
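For illustration only, the shared config could then apply those extra mappings inside the existing routing callback, roughly like this (a sketch assuming the additionalEndpointMappings parameter from above and that the type-based builder exposes a Map(Type, string) overload):
.Routing(r =>
{
    var typeBased = r.TypeBased()
        .MapAssemblyOf<ProjectA.MessageA>(EnvironmentVariables.ServiceBusQueueName)
        .MapAssemblyOf<ProjectB.MessageB>(EnvironmentVariables.ServiceBusQueueName);

    // crude but simple: map every type from each additional assembly to its queue
    foreach (var mapping in additionalEndpointMappings)
    {
        foreach (var messageType in mapping.Key.GetTypes())
        {
            typeBased.Map(messageType, mapping.Value);
        }
    }
})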
2: Pull out all the common configurations into a new extension method. I almost always use this method myself, because I have found it to be easy to maintain.
First you create a new RebusConfigurer extension method in a shared lib somewhere:
// shared lib
public static class CustomRebusConfigEx
{
    public static RebusConfigurer AsServer(this RebusConfigurer configurer, string inputQueueName)
    {
        return configurer
            .Logging(...)
            .Transport(...)
            .Sagas(...)
            .Serialization(...)
            .Options(...);
    }
}
and then you can call this by going
Configure.With(...)
.AsServer("my-queue")
.Start();
in your endpoints.
3: A combination of (1) and (2) which enables this:
Configure.With(...)
.AsServer("my-queue")
.StandardRouting(r => r.MapAssemblyOf<MessageType>("somewhere-else"))
.Start();
which can end up avoiding repetitive code, still preserving a great deal of flexibility, and actually looking pretty neat :)
