I am following the Hasura basic tutorial on creating a todo app (https://hasura.io/learn/graphql/hasura-advanced/introduction/) and want to extend it with a few additional operations, but I can't seem to manage it. The setup is as in the tutorial: a tasks table with title, description, authorId, isComplete and isPublic columns. Table permissions are set up as in the tutorial, so a user can only select their own or public tasks, and can update only their own tasks. The operations I want to add:
Query only public tasks that are NOT theirs (and the inverse as well: only their own tasks, excluding public ones).
Mutate public tasks that are not theirs to mark them complete (update isComplete without having permissions on the other columns).
I could create views for the first case, but that seems like too much effort for such simple logic. I think both cases could be handled simply with access to the request header (x-hasura-user-id), like so:
query PublicTasksOnly {
  tasks(where: {isPublic: {_eq: true}, authorId: {_neq: x-hasura-user-id}}) {
    description
    isComplete
    title
  }
}
But it seems this is not possible. Any ideas or suggestions on how to achieve this?
To my knowledge, it is not possible to reference HTTP headers in your GraphQL queries. Have you tried passing the userId as a variable to the query? Something like the following:
query PublicTasksOnly($userId: String!) {
  tasks(where: {isPublic: {_eq: true}, authorId: {_neq: $userId}}) {
    description
    isComplete
    title
  }
}
I am not quite sure what you want to achieve, but if your problem can be solved by adding the x-hasura-user-id header, then the following should help.
You can copy the GraphQL endpoint from the Hasura console and send a plain HTTP request to that endpoint with the query and its variables in the request body. Here is sample code using the HTTP library axios:
import axios from 'axios';

axios({
  method: 'post',
  url: 'https://your-hasura-project-url.hasura.app/v1/graphql',
  headers: { 'x-hasura-user-id': '< Your user id >' },
  data: {
    // the query declares and uses the $userId variable passed in `variables` below
    query: `query PublicTasksOnly($userId: String!) {
      tasks(where: {isPublic: {_eq: true}, authorId: {_neq: $userId}}) {
        description
        isComplete
        title
      }
    }`,
    variables: { userId: 'abc-xyz' }
  }
});
This should solve your issue.
I have written two sample apps, one using Spring WebFlux with the reactive Mongo driver and another using Spring MVC with the non-reactive Mongo driver. I have noticed huge performance differences between the two: the reactive app's response time is consistently more than three times that of the MVC one.
Both apps connect to the exact same Mongo instance, run via Docker:
mongo:
  image: mongo
  restart: always
  env_file:
    - ./env/mongo.env # just username and password are set here, nothing else
  ports:
    - 27017:27017
  volumes:
    - mongodb:/data/db
    - mongoconfigdb:/data/configdb
Details about the implementations of both apps below:
application.yml, MVC and reactive (identical in both apps):
spring:
  data:
    mongodb:
      uri: mongodb://admin:nimda@localhost:27017
      database: data-name
      auto-index-creation: true
  application:
    name: data
They both save the following model in mongo:
@Document(collection = "data")
public class Data {

    @Id
    private String id;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }
}
MVC App code:
Controller - a basic REST controller:
@RestController
public class BlockingController {

    private final BlockingMongoRepository blockingMongoRepository;

    public BlockingController(@Qualifier("blockingMongoRepository") BlockingMongoRepository blockingMongoRepository) {
        this.blockingMongoRepository = blockingMongoRepository;
    }

    @RequestMapping(value = "/get", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
    public List<Data> getAllData() {
        return blockingMongoRepository.findAll();
    }
}
Repository - a basic Mongo repository:

@Repository(value = "blockingMongoRepository")
public interface BlockingMongoRepository extends MongoRepository<Data, String> {
}
Nothing fancy about the app, just retrieve all the data that is in the database.
build.gradle:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-data-mongodb'
    implementation 'org.springframework.boot:spring-boot-starter-web'
}
Reactive App code
Controller - just retrieves all the data in Mongo:
@RestController
public class ReactiveController {

    private final DataReactiveMongoRepository dataReactiveMongoRepository;

    public ReactiveController(@Qualifier("dataReactiveMongoRepository") DataReactiveMongoRepository dataReactiveMongoRepository) {
        this.dataReactiveMongoRepository = dataReactiveMongoRepository;
    }

    @RequestMapping(value = "/get", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
    public Flux<Data> getAllData() {
        return dataReactiveMongoRepository.findAll();
    }
}
Repository:
@Repository(value = "dataReactiveMongoRepository")
public interface DataReactiveMongoRepository extends ReactiveMongoRepository<Data, String> {
}
build.gradle - the reactive counterparts:
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-data-mongodb-reactive'
    implementation 'org.springframework.boot:spring-boot-starter-webflux'
}
I have added 45,000 Data instances in Mongo to test the difference between the implementations. On average, the MVC app returns them all in ~200-300 ms, while the reactive implementation returns them in ~2 seconds (2.1 seconds on average in my tests).
Regarding the reactive controller, I have also tried producing x-ndjson, and application/json with collectList on the Flux to return a list (same result, no performance improvement; a sketch of the latter is below). When retrieving each item separately with x-ndjson, the performance was much, much worse.
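For reference, the collectList variant mentioned above looked roughly like this (a sketch of the reactive controller method as tried, not the exact code; Mono comes from reactor-core, List from java.util):

@RequestMapping(value = "/get", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
public Mono<List<Data>> getAllData() {
    // Buffer the entire Flux into a single list so the response is one JSON array
    return dataReactiveMongoRepository.findAll().collectList();
}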
The @EnableReactiveMongoRepositories annotation is not needed since it is added automatically; I have also tried adding it manually, but there is no difference.
From what I've been reading about WebFlux + Mongo, I expected the performance to be somewhat similar... maybe reactive a bit slower, but the difference is huge and it seems like something is misconfigured.
I have also played around and tried publishing/subscribing on the boundedElastic scheduler (sketch below), but it does not seem to make any difference.
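That experiment was along these lines (a sketch; Schedulers comes from reactor-core):

@RequestMapping(value = "/get", method = RequestMethod.GET, produces = MediaType.APPLICATION_JSON_VALUE)
public Flux<Data> getAllData() {
    // Move the subscription off the Netty event loop onto the boundedElastic pool
    return dataReactiveMongoRepository.findAll()
            .subscribeOn(Schedulers.boundedElastic());
}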
I have also run a JMeter load test on both apps from another machine; the reactive implementation is, in general, ~3 times slower than the MVC one.
Whenever the data is kept in memory (an in-memory repository instead of Mongo, nothing else changed), the reactive implementation's performance is roughly on par with the MVC one.
I have noticed that the response latency is higher with reactive, but the overall response times are around the same values.
Reactive mongo client metadata:
MongoClient with metadata {"driver": {"name": "mongo-java-driver|reactive-streams|spring-boot", "version": "4.6.1"}, "os": {"type": "Darwin", "name": "Mac OS X", "architecture": "x86_64", "version": "12.4"}, "platform": "Java/Eclipse Adoptium/17.0.4.1+1"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=null, streamFactoryFactory=NettyStreamFactoryFactory{eventLoopGroup=io.netty.channel.nio.NioEventLoopGroup@6cd64b3f, socketChannelClass=class io.netty.channel.socket.nio.NioSocketChannel, allocator=PooledByteBufAllocator(directByDefault: true), sslContext=null}, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@51b01550]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=JAVA_LEGACY, serverApi=null, autoEncryptionSettings=null, contextProvider=null}
Non-reactive mongo client metadata:
MongoClient with metadata {"driver": {"name": "mongo-java-driver|sync|spring-boot", "version": "4.6.1"}, "os": {"type": "Darwin", "name": "Mac OS X", "architecture": "x86_64", "version": "12.4"}, "platform": "Java/Eclipse Adoptium/17.0.4.1+1"} created with settings MongoClientSettings{readPreference=primary, writeConcern=WriteConcern{w=null, wTimeout=null ms, journal=null}, retryWrites=true, retryReads=true, readConcern=ReadConcern{level=null}, credential=MongoCredential{mechanism=null, userName='admin', source='admin', password=<hidden>, mechanismProperties=<hidden>}, streamFactoryFactory=null, commandListeners=[], codecRegistry=ProvidersCodecRegistry{codecProviders=[ValueCodecProvider{}, BsonValueCodecProvider{}, DBRefCodecProvider{}, DBObjectCodecProvider{}, DocumentCodecProvider{}, IterableCodecProvider{}, MapCodecProvider{}, GeoJsonCodecProvider{}, GridFSFileCodecProvider{}, Jsr310CodecProvider{}, JsonObjectCodecProvider{}, BsonCodecProvider{}, EnumCodecProvider{}, com.mongodb.Jep395RecordCodecProvider@329548d0]}, clusterSettings={hosts=[localhost:27017], srvServiceName=mongodb, mode=SINGLE, requiredClusterType=UNKNOWN, requiredReplicaSetName='null', serverSelector='null', clusterListeners='[]', serverSelectionTimeout='30000 ms', localThreshold='30000 ms'}, socketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=0, receiveBufferSize=0, sendBufferSize=0}, heartbeatSocketSettings=SocketSettings{connectTimeoutMS=10000, readTimeoutMS=10000, receiveBufferSize=0, sendBufferSize=0}, connectionPoolSettings=ConnectionPoolSettings{maxSize=100, minSize=0, maxWaitTimeMS=120000, maxConnectionLifeTimeMS=0, maxConnectionIdleTimeMS=0, maintenanceInitialDelayMS=0, maintenanceFrequencyMS=60000, connectionPoolListeners=[], maxConnecting=2}, serverSettings=ServerSettings{heartbeatFrequencyMS=10000, minHeartbeatFrequencyMS=500, serverListeners='[]', serverMonitorListeners='[]'}, sslSettings=SslSettings{enabled=false, invalidHostNameAllowed=false, context=null}, applicationName='null', compressorList=[], uuidRepresentation=JAVA_LEGACY, serverApi=null, autoEncryptionSettings=null, contextProvider=null}
OS: macOS Monterey 12.6
Spring Boot: 2.7.3
Java: Java/Eclipse Adoptium/17.0.4.1+1
Only one app was up during the tests; I have not run a test on one app with the other one started.
Additionally, I tried connecting to a (free) MongoDB Atlas instance and noticed the same behaviour, but that test is not really reliable.
My question is: why is this happening?
Are there additional configs needed to leverage the power of the WebFlux implementation? I could not find anything helpful in the docs.
Any info would be appreciated. Thank you!
UPDATE
I have noticed this behaviour only while using ReactiveMongoTemplate or ReactiveMongoRepository. If I use MongoClient directly (Gradle dependency 'org.mongodb:mongodb-driver-reactivestreams', which is also a dependency of spring-boot-starter-data-mongodb-reactive), the performance is much better than the MVC one: ~2.3 times the throughput in my tests (a sketch of this variant is below).
With ReactiveMongoTemplate or ReactiveMongoRepository, BlockHound instantly catches blocking calls, which does not happen when using MongoClient directly. It seems the performance hit comes from the Spring Mongo starter blocking the event loop, though I would have assumed that would decrease throughput rather than single-request response time.
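For reference, a minimal sketch of the raw-driver variant described above (the connection string, database and collection names are taken from the setup earlier; the endpoint name is illustrative):

import com.mongodb.reactivestreams.client.MongoClient;
import com.mongodb.reactivestreams.client.MongoClients;
import com.mongodb.reactivestreams.client.MongoCollection;
import org.bson.Document;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class RawDriverController {

    // Raw reactive-streams driver, bypassing Spring Data's repository layer
    private final MongoCollection<Document> collection;

    public RawDriverController() {
        MongoClient client = MongoClients.create("mongodb://admin:nimda@localhost:27017");
        this.collection = client.getDatabase("data-name").getCollection("data");
    }

    @GetMapping("/get-raw")
    public Flux<Document> getAllData() {
        // find() returns a reactive-streams Publisher; adapt it to a Flux
        return Flux.from(collection.find());
    }
}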
I need to switch the Symfony cache adapter depending on environment conditions: if a certain variable is set, use "cache.adapter.apcu"; otherwise use "cache.adapter.filesystem".
Is this possible somehow? The documentation is not really helpful here.
P.S.: It is not possible for us to do this by creating a whole new environment.
Here is a basic example of a cache adapter that has other adapters fed into it and picks one based on a parameter (or, alternatively, an environment variable):
<?php

namespace App\Cache;

use Psr\Cache\CacheItemInterface;
use Psr\Cache\InvalidArgumentException;
use Psr\Container\ContainerInterface;
use Symfony\Component\Cache\Adapter\AdapterInterface;
use Symfony\Component\Cache\CacheItem;
use Symfony\Contracts\Service\ServiceSubscriberInterface;
use Symfony\Contracts\Service\ServiceSubscriberTrait;

class EnvironmentAwareCacheAdapter implements AdapterInterface, ServiceSubscriberInterface
{
    use ServiceSubscriberTrait;

    private string $environment;

    public function __construct(string $environment)
    {
        $this->environment = $environment;
    }

    public function getItem($key)
    {
        return $this->container->get($this->environment)->getItem($key);
    }

    public function getItems(array $keys = [])
    {
        // note: must pass $keys here, not $key
        return $this->container->get($this->environment)->getItems($keys);
    }

    // ...
}
This is how you would configure it:
services:
    App\Cache\EnvironmentAwareCacheAdapter:
        arguments:
            $environment: '%kernel.environment%'
        tags:
            - { name: 'container.service_subscriber', key: 'dev', id: 'cache.app' }
            - { name: 'container.service_subscriber', key: 'prod', id: 'cache.system' }
It's not the most elegant solution, and it's missing error handling and possibly a fallback. Basically, by adding tags with an appropriately named key and the alias of an existing cache as the id, you can refer to that cache by key in your own adapter. Depending on your environment, you will then pick one or the other. You can replace the key and the constructor argument with anything else you like. I hope that helps.
It seems you cannot set up your cache configuration to use an environment variable like so:
framework:
    cache:
        app: '%env(resolve:CACHE_ADAPTER)%'
This is a constraint of the FrameworkBundle, which provides the cache service, and it will not be "fixed" (see Using environment variables at compile time #25173).
To make this possible, you need to write your own cache provider that simply passes all calls through to the needed cache provider. You have access to environment variables at runtime, so it can act as a proxy that knows which provider to use; a sketch follows.
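A minimal sketch of such a runtime proxy, assuming the two concrete adapters are injected via the service configuration and a CACHE_ADAPTER environment variable picks between them (the class and variable names are illustrative):

<?php

namespace App\Cache;

use Symfony\Component\Cache\Adapter\AdapterInterface;

class RuntimeSwitchingCacheAdapter
{
    public function __construct(
        private AdapterInterface $apcuAdapter,       // wired to cache.adapter.apcu
        private AdapterInterface $filesystemAdapter  // wired to cache.adapter.filesystem
    ) {
    }

    // Resolved at runtime on every call, so no compile-time container constraint applies
    private function adapter(): AdapterInterface
    {
        return getenv('CACHE_ADAPTER') === 'apcu'
            ? $this->apcuAdapter
            : $this->filesystemAdapter;
    }

    public function getItem(string $key)
    {
        return $this->adapter()->getItem($key);
    }

    public function getItems(array $keys = [])
    {
        return $this->adapter()->getItems($keys);
    }

    // ... delegate the remaining AdapterInterface methods the same way
}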
In the configuration I have to specify the paths to .js and .ts files defining entities:
MikroORM.init({
  ...
  entitiesDirs: ["build/entities"],
  entitiesDirsTs: ["src/entities"],
});
So, when I go to release or distribute the application, will I need to distribute the TypeScript code too? Only the generated cache? Both? Or neither?
As of MikroORM v2.2:
You can now work with the default metadata provider; it will require entity source files only if you do not provide entity or type options in your decorators (you can use the entity callback to reference the entity class instead of using a string name in type, which is handy for refactoring via an IDE like WebStorm). A short sketch follows.
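A minimal sketch of what such decorators look like (the Author/Book entities here are illustrative, not from the question):

import { Entity, ManyToOne, PrimaryKey, Property } from 'mikro-orm';

@Entity()
export class Author {

  @PrimaryKey()
  id!: string;

  // explicit `type` option, so the metadata provider never needs the TS source
  @Property({ type: 'string' })
  name!: string;

}

@Entity()
export class Book {

  @PrimaryKey()
  id!: string;

  @Property({ type: 'string' })
  title!: string;

  // `entity` callback references the class directly and survives refactoring
  @ManyToOne({ entity: () => Author })
  author!: Author;

}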
Original answer:
You should ship the TypeScript code too and let the cache regenerate on the server; the cache would be rebuilt anyway, as it checks the absolute path to the cached entity for invalidation.
You could implement your own cache adapter or metadata provider to get around this, if you don't want to ship the TypeScript code.
This is how you could implement a custom metadata provider that simply throws an error when the type option is missing:
import { MetadataProvider, Utils } from 'mikro-orm';
import { EntityMetadata } from 'mikro-orm/dist/decorators';

export class SimpleMetadataProvider extends MetadataProvider {

  async loadEntityMetadata(meta: EntityMetadata, name: string): Promise<void> {
    // init types and column names
    Object.values(meta.properties).forEach(prop => {
      if (prop.entity) {
        prop.type = Utils.className(prop.entity());
      } else if (!prop.type) {
        throw new Error(`type is missing for ${meta.name}.${prop.name}`);
      }
    });
  }
}
Then provide this class when initializing:
const orm = await MikroORM.init({
  // ...
  metadataProvider: SimpleMetadataProvider,
});
The values of type should be JS types, like string/number/Date... You can inspect your cached metadata to be sure which values should be there.
Also keep in mind that without the TS metadata provider, you will need to specify the entity type in the @ManyToOne decorator too (either via the entity callback, or as a string via type).
I am trying to configure Audit.NET and define my custom logic for saving logs.
Is there a way to configure the included entities within the context?
I tried this
public ResidentMasterContext(DbContextOptions options) : base(options)
{
    AuditDataProvider = new DynamicDataProvider();
    Mode = AuditOptionMode.OptIn;
    IncludeEntityObjects = true;
    EntitySettings = new Dictionary<Type, EfEntitySettings>
    {
        { typeof(Apartment), new EfEntitySettings() }
    };
}
but OnScopeSaving is not firing. And when I change the mode to OptOut, it takes all entities.
I guess you are referring to the Audit.NET EntityFramework extension.
If you use OptIn, you need to mark the included entities with the [AuditInclude] attribute (see the sketch below), or use the Include methods of the fluent API. You can check the documentation here.
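For the attribute route, a minimal sketch using the Apartment entity from the question (the properties shown are illustrative):

using Audit.EntityFramework;

// In OptIn mode, only entities marked with [AuditInclude] are audited
[AuditInclude]
public class Apartment
{
    public int Id { get; set; }
    public string Name { get; set; }
}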
An example using the fluent API for the EF configuration, to include only the entities User and UserDetail:
Audit.EntityFramework.Configuration.Setup()
    .ForContext<ResidentMasterContext>(config => config
        .IncludeEntityObjects())
    .UseOptIn()
        .Include<User>()
        .Include<UserDetail>();
An example of the output configuration:
Audit.Core.Configuration.Setup()
    .UseDynamicProvider(_ => _.OnInsertAndReplace(auditEvent =>
    {
        Console.WriteLine(auditEvent.ToJson());
    }));