Audit.Net: Process of saving in different dataprovider - audit.net

Is it possible to set up a mechanism so that, if inserting the audit event into the default data provider (Oracle, for example) fails, another data provider is used instead, for example a file?
Thanks for the help.

There is no data provider with automatic fallback capabilities, but you can implement a custom data provider.
Say you want the SQL data provider by default and another data provider as fallback. Inherit from the SQL data provider (SqlDataProvider) and fall back to the other data provider when a SqlException is thrown:
public class FallbackSqlDataProvider : SqlDataProvider
{
    public AuditDataProvider FallbackProvider { get; set; }

    public override object InsertEvent(AuditEvent auditEvent)
    {
        try
        {
            return base.InsertEvent(auditEvent);
        }
        catch (SqlException)
        {
            return FallbackProvider?.InsertEvent(auditEvent);
        }
    }

    public override async Task<object> InsertEventAsync(AuditEvent auditEvent)
    {
        try
        {
            return await base.InsertEventAsync(auditEvent);
        }
        catch (SqlException)
        {
            // Guard the null-conditional call: awaiting a null Task would throw.
            return FallbackProvider != null
                ? await FallbackProvider.InsertEventAsync(auditEvent)
                : null;
        }
    }

    public override void ReplaceEvent(object eventId, AuditEvent auditEvent)
    {
        try
        {
            base.ReplaceEvent(eventId, auditEvent);
        }
        catch (SqlException)
        {
            FallbackProvider?.ReplaceEvent(eventId, auditEvent);
        }
    }

    public override async Task ReplaceEventAsync(object eventId, AuditEvent auditEvent)
    {
        try
        {
            await base.ReplaceEventAsync(eventId, auditEvent);
        }
        catch (SqlException)
        {
            if (FallbackProvider != null)
            {
                await FallbackProvider.ReplaceEventAsync(eventId, auditEvent);
            }
        }
    }
}
Then set the fallback data provider via the FallbackProvider property, for example like this:
var dp = new FallbackSqlDataProvider()
{
    ConnectionString = "cnnstring",
    TableName = "table",
    IdColumnName = "id",
    JsonColumnName = "data",
    FallbackProvider = new Log4netDataProvider()
    {
        Logger = LogManager.GetLogger(typeof(Log4netDataProvider)),
        LogMessageBuilder = (ev, id) => ev.ToJson()
    }
};

Audit.Core.Configuration.Setup()
    .UseCustomProvider(dp);
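With that in place, audited operations try SQL Server first and fall back to the log4net provider whenever a SqlException is thrown. As a quick usage sketch (the event type and target object below are made up for illustration):
var order = new { Id = 123, Status = "Created" };
using (var scope = AuditScope.Create("Order:Update", () => order))
{
    // ... perform the operation being audited ...
    order = new { Id = 123, Status = "Submitted" };
    // When the scope is disposed, the event is inserted through FallbackSqlDataProvider;
    // if the SQL insert throws a SqlException, the Log4net provider receives the event instead.
}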
Also check this related issue.

Related

AspNetIdentityDocumentDB and Cross partition query is required but disabled

I have an app that uses Cosmos DB as the database and AspNetIdentityDocumentDB. When I call var result = await _signInManager.PasswordSignInAsync(model.UserName, model.Password, model.RememberMe, false), I get the error: Cross partition query is required but disabled. Please set x-ms-documentdb-query-enablecrosspartition to true, specify x-ms-documentdb-partitionkey.
The void InitializeDocumentClient(DocumentClient client) code attempts to create the container if it is not there. It works for creating the container on my Cosmos DB emulator store, but it fails on the Azure store, which requires a partition key! My app works on the emulated store!
Program.cs
builder.Services.AddDefaultDocumentClientForIdentity(
    builder.Configuration.GetValue<Uri>("DocumentDbClient:EndpointUri"),
    builder.Configuration.GetValue<string>("DocumentDbClient:AuthorizationKey"),
    afterCreation: InitializeDocumentClient);

builder.Services.AddIdentity<ApplicationUser, DocumentDbIdentityRole>()
    .AddDocumentDbStores(options =>
    {
        options.UserStoreDocumentCollection = "AspNetIdentity";
        options.Database = "RNPbooking";
    })
    .AddDefaultTokenProviders();
void InitializeDocumentClient(DocumentClient client)
{
    try
    {
        var db = client.ReadDatabaseAsync(UriFactory.CreateDatabaseUri("RNPbooking")).Result;
    }
    catch (AggregateException ae)
    {
        ae.Handle(ex =>
        {
            if (ex.GetType() == typeof(DocumentClientException) && ((DocumentClientException)ex).StatusCode == HttpStatusCode.NotFound)
            {
                var db = client.CreateDatabaseAsync(new Microsoft.Azure.Documents.Database() { Id = "RNPbooking" }).Result;
                return true;
            }
            return false;
        });
    }
    try
    {
        var collection = client.ReadDocumentCollectionAsync(UriFactory.CreateDocumentCollectionUri("RNPbooking", "AspNetIdentity")).Result;
    }
    catch (AggregateException ae)
    {
        ae.Handle(ex =>
        {
            if (ex.GetType() == typeof(DocumentClientException) && ((DocumentClientException)ex).StatusCode == HttpStatusCode.NotFound)
            {
                DocumentCollection collection = new DocumentCollection()
                {
                    Id = "AspNetIdentity"
                };
                collection = client.CreateDocumentCollectionAsync(UriFactory.CreateDatabaseUri("RNPbooking"), collection).Result;
                return true;
            }
            return false;
        });
    }
}
Controller
[Authorize(Roles = "Admin")]
public class AdminController : Controller
{
    private readonly UserManager<ApplicationUser> _userManager;
    private readonly SignInManager<ApplicationUser> _signInManager;
    public CosmosClient _client;

    public AdminController(
        UserManager<ApplicationUser> userManager,
        SignInManager<ApplicationUser> signInManager)
    {
        _userManager = userManager;
        _signInManager = signInManager;
    }
You need to pass a FeedOptions object as an additional parameter after the CreateDocumentCollectionUri argument in the query call, with cross-partition queries enabled:
UriFactory.CreateDocumentCollectionUri(DatabaseId, CollectionId), new FeedOptions { EnableCrossPartitionQuery = true })
UPDATED: From the code examples, you seem to be using this library: https://github.com/codekoenig/AspNetCore.Identity.DocumentDb (AspNetCore.Identity.DocumentDb).
This error means the library you are using is performing a document query somewhere in its code; it is not related to the creation of the database or collection.
The library code must be calling CreateDocumentQuery somewhere without passing:
new FeedOptions { EnableCrossPartitionQuery = true };
If you search their code base, you will see multiple scenarios like that: https://github.com/codekoenig/AspNetCore.Identity.DocumentDb/search?q=CreateDocumentQuery
Because this code is out of your control, you should try to contact the owner to see if this is a fix they can do on their end. The code for the library doesn't seem to have been updated in several years, so maybe it is no longer maintained.
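For reference, this is roughly what a cross-partition-enabled query looks like with that SDK (the document type and the Where clause below are made up for illustration):
var feedOptions = new FeedOptions { EnableCrossPartitionQuery = true };
var user = client.CreateDocumentQuery<ApplicationUser>(
        UriFactory.CreateDocumentCollectionUri("RNPbooking", "AspNetIdentity"),
        feedOptions) // without this, the query fails against a partitioned collection
    .Where(u => u.NormalizedUserName == "ADMIN")
    .AsEnumerable()
    .FirstOrDefault();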

Headers are read-only, response has already started

I am trying to catch and format the exception thrown by a resource filter, but I am getting this error. The middleware works for exceptions thrown at the controller level, but I get "System.InvalidOperationException: Headers are read-only, response has already started" while trying to write to the response for resource-level errors.
Code of my Resource Filter:
public class TestingAsyncResourceFilter : IAsyncResourceFilter
{
    public async Task OnResourceExecutionAsync(ResourceExecutingContext context, ResourceExecutionDelegate next)
    {
        Console.WriteLine("Resource filter executing");
        var resourceExecutedContext = await next();
        Console.WriteLine("Resource filter executed");
        if (!resourceExecutedContext.ModelState.IsValid)
        {
            throw new CustomUPException();
        }
    }
}
Code of middleware:
public class ResponseFormatterMiddleware : IMiddleware
{
    private readonly ILogger<ResponseFormatterMiddleware> _logger;

    public ResponseFormatterMiddleware(ILogger<ResponseFormatterMiddleware> logger)
    {
        _logger = logger;
    }

    public async Task InvokeAsync(HttpContext context, RequestDelegate next)
    {
        try
        {
            Console.WriteLine("Before execution");
            await next(context);
            Console.WriteLine("After Execution");
        }
        catch (CustomUPException e)
        {
            Console.WriteLine("Here we are");
            await context.Response.WriteAsJsonAsync(
                new ResponseDto()
                {
                    statusCode = e.statusCode,
                    message = e.message
                }); // getting error
        }
        catch (Exception e)
        {
            _logger.LogError(e.Message);
            context.Response.StatusCode = (int)HttpStatusCode.InternalServerError;
            await context.Response.WriteAsJsonAsync(
                new ResponseDto()
                {
                    success = false,
                    message = "Request failed"
                });
        }
    }
}
Code of my controller:
[Route("api/[controller]")]
[ApiController]
public class TestingController : ControllerBase
{
    [HttpPost("/resource")]
    public async Task<UserDto> testingResource([FromBody] UserDto dto)
    {
        if (dto.email.Contains("hell"))
        {
            throw new CustomUPException(); // working
        }
        return dto;
    }
}
Instead of using the resource filter, I have used the strategy below for formatting model validation errors, as the documentation suggests.
I'm still curious to know more about the raised issue, though. Thanks in advance.
// Add services to the container.
builder.Services.AddControllers().ConfigureApiBehaviorOptions(
    options =>
    {
        options.InvalidModelStateResponseFactory = context =>
        {
            if (!context.ModelState.IsValid)
            {
                var data = new Dictionary<string, string?>();
                // My response formatter
                var modelStateDictionary = context.ModelState;
                foreach (var key in modelStateDictionary.Keys)
                {
                    var errors = modelStateDictionary[key]?.Errors;
                    data.TryAdd(key, errors?[0].ErrorMessage);
                }
                return new ObjectResult(new UniversalResponseDto()
                {
                    data = data,
                    statusCode = (int)HttpStatusCode.UnprocessableEntity,
                    sucess = false,
                    message = "One or more validation errors occurred"
                })
                {
                    StatusCode = (int)HttpStatusCode.UnprocessableEntity,
                };
            }
            return new ObjectResult(context.HttpContext.Response);
        };
    });
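Regarding the original error: by the time the exception escapes the resource filter (it is thrown after await next()), the action result has normally already been executed, so the response has started and its status line and headers are committed. A defensive sketch for the middleware's catch block (reusing the ResponseDto and CustomUPException types from the question) is to check HttpResponse.HasStarted before writing:
catch (CustomUPException e)
{
    // The status code and body can only be rewritten while the response has not started yet.
    if (!context.Response.HasStarted)
    {
        context.Response.StatusCode = e.statusCode;
        await context.Response.WriteAsJsonAsync(new ResponseDto
        {
            statusCode = e.statusCode,
            message = e.message
        });
    }
    else
    {
        _logger.LogWarning("Response has already started; skipping body rewrite.");
    }
}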

release_mode, Pooling, Max Pool size for InMemory SQLite with FluentNHibernate

I'm having some trouble with SQLite in memory.
I have a class that has a CPF field, similar to the US SSN. As a business rule, the CPF must be unique in the system.
So I've decided to add a check to the class that has this field. Maybe there's a code smell here: I check with the ORM whether this is a conflicting CPF.
private CPF cpf;
public virtual CPF CPF
{
    get { return cpf; }
    set
    {
        if (this.ormCreated) // Do not check if it is loaded from the DB. Otherwise it loops, generating a StackOverflowException
        {
            cpf = value;
        }
        else
        {
            this.setNewCpf(value);
        }
    }
}

private void setNewCpf(CPF newCpf)
{
    if (this.cpf == newCpf)
    {
        return;
    }
    if (Helper.Orm.IsConflictingCpf(newCpf))
    {
        throw new ConflictingCpfException();
    }
    else
    {
        cpf = newCpf;
    }
}
And here is the implementation, on the ORM Helper class.
bool OrmHelper.IsConflictingCpf(CPF cpf)
{
    int? cpfNumber = cpf.NumeroSemDV;
    if (cpfNumber.HasValue)
    {
        var teste = findByCpfNumber<Client>(cpf);
        return
        (
            findByCpfNumber<Client>(cpf) != null ||
            findByCpfNumber<Adversary>(cpf) != null
        );
    }
    else
    {
        // CPFSemDV = Nullable
        return false;
    }
}

private PersonType findByCpfNumber<PersonType>(CPF cpf) where PersonType : PessoaFisica
{
    int? cpfNumber = cpf.NumeroSemDV;
    using (var session = this.NewSession())
    using (var transaction = session.BeginTransaction())
    {
        try
        {
            var person = session.Query<PersonType>()
                .Where(c => c.CPF.NumeroSemDV == cpfNumber)
                .FirstOrDefault<PersonType>();
            return person;
        }
        catch (Exception) { transaction.Rollback(); }
        finally
        {
            session.Close();
        }
    }
    return null;
}
The problem happens in my tests. I'm using FluentNHibernate and in-memory SQLite.
protected override FluentConfiguration PersistenceProvider
{
    get
    {
        return Fluently
            .Configure()
            .Database(
                SQLiteConfiguration
                    .Standard
                    .InMemory()
                    .ShowSql()
            );
    }
}
Here is the failing test.
protected override void Given()
{
    base.Given();
    var clients = new List<Client>();
    Client client1 = new Client("Luiz Angelo Heinzen")
    {
        Capaz = true,
        CPF = new CPF(18743509),
        eMail = "lah#furb.br"
    };
    session.Save(client1);
    session.Evict(client1);
}

[Then]
public void Motherfaker()
{
    Client fromDb;
    var clientsFromDb = session.Query<Client>()
        .Where(c => c.eMail == "lah#furb.br");
    fromDb = clientsFromDb.FirstOrDefault<Client>();
    Assert.AreEqual(fromDb.FullName, "Luiz Angelo Heinzen");
}
The reason it fails? In the beginning it was failing because the table didn't exist: in-memory SQLite destroys the schema on each new session. So I changed the code to return the same session from NewSession(). But now it fails with an NHibernate exception: Session is closed. I've tested, and if I change findByCpfNumber from this
private PersonType findByCpfNumber<PersonType>(CPF cpf) where PersonType : PessoaFisica
{
    int? cpfNumber = cpf.NumeroSemDV;
    using (var session = this.NewSession())
    using (var transaction = session.BeginTransaction())
    {
        try
        {
            var person = session.Query<PersonType>()
                .Where(c => c.CPF.NumeroSemDV == cpfNumber)
                .FirstOrDefault<PersonType>();
            return person;
        }
        catch (Exception) { transaction.Rollback(); }
        finally
        {
            session.Close();
        }
    }
    return null;
}
to this
private PersonType findByCpfNumber<PersonType>(CPF cpf) where PersonType : PessoaFisica
{
    int? cpfNumber = cpf.NumeroSemDV;
    //using (var session = this.NewSession())
    var session = this.NewSession();
    using (var transaction = session.BeginTransaction())
    {
        try
        {
            var person = session.Query<PersonType>()
                .Where(c => c.CPF.NumeroSemDV == cpfNumber)
                .FirstOrDefault<PersonType>();
            return person;
        }
        catch (Exception) { transaction.Rollback(); }
        finally
        {
            //session.Close();
            this.CloseSession(session);
        }
    }
    this.CloseSession(session);
    return null;
}
the error doesn't happen anymore. Obviously, I'd have to implement the CloseSession method: it would close the session on the production database and do nothing if SQLite is being used.
But I'd rather configure SQLite in some way so that it doesn't dispose the session. I've read here about the release_mode, Pooling and Max Pool Size attributes, but I can't seem to find them in FluentNHibernate, so I can't even test whether they would work. I have FluentNHibernate cloned, and it seems to set release_mode to on_close, but that doesn't help.
I've even tried:
public override ISession NewSession()
{
    if (this.session == null)
    {
        if (sessionFactory == null)
        {
            CreateSessionFactory();
        }
        this.session = sessionFactory.OpenSession();
    }
    if (!session.IsOpen)
    {
        sessionFactory.OpenSession(session.Connection);
        session.Connection.Open();
    }
    return session;
}
But it keeps telling me that the session is closed. So, does anyone have any suggestions on how to approach this?
Or is this so smelly that it's beyond salvation?
I hope this is clear enough. And forgive my mistakes: I'm from Brazil and not a native English speaker.
Thanks,
Luiz Angelo.
I would check for uniqueness when creating CPFs in the system and have an additional unique constraint in the database to enforce that. Then, if you set cascading to none for each reference to CPF (the default is none), it is not possible to assign a newly created duplicate CPF to an entity and save it without an exception, so it can't happen accidentally.
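If CPF is mapped as a component, one way to express that database-side constraint in FluentNHibernate is a unique column in the class map (the class, property and column names below just mirror the question and are otherwise assumptions):
public class ClientMap : ClassMap<Client>
{
    public ClientMap()
    {
        Table("Client");
        Id(x => x.Id);
        Map(x => x.FullName);
        // Unique constraint on the CPF number column, so duplicates are rejected
        // by the database even if the in-code check is bypassed.
        Component(x => x.CPF, c =>
            c.Map(p => p.NumeroSemDV, "CpfNumeroSemDV").Unique());
    }
}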
I had the same problem. What's happening is that in-memory SQLite drops the entire schema when the connection is closed. If you create a session that you hold on to for all tests, it will retain the structure for all other sessions.
For code and a fuller explanation, check out this answer: Random error when testing with NHibernate on an in-memory SQLite db.
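A minimal sketch of that approach, assuming a test base class that owns the session factory (the mapping assembly and variable names are placeholders):
// Build the factory once, keep one session/connection alive for the whole test run,
// and export the schema onto that connection so later sessions can see it.
Configuration cfg = null;
var sessionFactory = Fluently.Configure()
    .Database(SQLiteConfiguration.Standard.InMemory().ShowSql())
    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Client>())
    .ExposeConfiguration(c => cfg = c)
    .BuildSessionFactory();

var keepAliveSession = sessionFactory.OpenSession();
new SchemaExport(cfg).Execute(false, true, false, keepAliveSession.Connection, null);

// Sessions opened on the same connection share the in-memory schema:
var session = sessionFactory.OpenSession(keepAliveSession.Connection);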

How can I use the Engine object in my console application

"How can i use engine in my console application"
I shouldn't use the ITemplate-interface and Transform-Method.
I am using Tridion 2011
Could anyone please suggest me.
You can't. The Engine class is part of the TOM.NET and that API is explicitly reserved for use in:
Template Building Blocks
Event Handlers
For all other cases (such as console applications) you should use the Core Service.
There are many good questions (and articles on other web sites) already:
https://stackoverflow.com/search?q=%5Btridion%5D+core+service
http://www.google.com/#q=tridion+core+service
If you get stuck along the way, show us the relevant code and configuration you have and what error message you get (or at what step you are stuck) and we'll try to help from there.
From a console application you should use the Core Service. I wrote a small example using the Core Service to search for items in the content manager.
while (true)
{
    Console.WriteLine("FullTextQuery:");
    var fullTextQuery = Console.ReadLine();
    if (String.IsNullOrWhiteSpace(fullTextQuery) || fullTextQuery.Equals(":q", StringComparison.OrdinalIgnoreCase))
    {
        break;
    }
    Console.WriteLine("SearchIn IdRef:");
    var searchInIdRef = Console.ReadLine();
    var queryData = new SearchQueryData
    {
        FullTextQuery = fullTextQuery,
        SearchIn = new LinkToIdentifiableObjectData
        {
            IdRef = searchInIdRef
        }
    };
    var results = coreServiceClient.GetSearchResults(queryData);
    results.ToList().ForEach(result => Console.WriteLine("{0} ({1})", result.Title, result.Id));
}
Add a reference to Tridion.ContentManager.CoreService.Client to your Visual Studio Project.
Code of the Core Service Client Provider:
public interface ICoreServiceProvider
{
    CoreServiceClient GetCoreServiceClient();
}

public class CoreServiceDefaultProvider : ICoreServiceProvider
{
    private CoreServiceClient _client;

    public CoreServiceClient GetCoreServiceClient()
    {
        return _client ?? (_client = new CoreServiceClient());
    }
}
And the client itself:
public class CoreServiceClient : IDisposable
{
    public SessionAwareCoreServiceClient ProxyClient;

    private const string DefaultEndpointName = "netTcp_2011";

    public CoreServiceClient(string endPointName)
    {
        if (string.IsNullOrWhiteSpace(endPointName))
        {
            throw new ArgumentNullException("endPointName", "EndPointName is not specified.");
        }
        ProxyClient = new SessionAwareCoreServiceClient(endPointName);
    }

    public CoreServiceClient() : this(DefaultEndpointName) { }

    public string GetApiVersionNumber()
    {
        return ProxyClient.GetApiVersion();
    }

    public IdentifiableObjectData[] GetSearchResults(SearchQueryData filter)
    {
        return ProxyClient.GetSearchResults(filter);
    }

    public IdentifiableObjectData Read(string id)
    {
        return ProxyClient.Read(id, new ReadOptions());
    }

    public ApplicationData ReadApplicationData(string subjectId, string applicationId)
    {
        return ProxyClient.ReadApplicationData(subjectId, applicationId);
    }

    public void Dispose()
    {
        if (ProxyClient.State == CommunicationState.Faulted)
        {
            ProxyClient.Abort();
        }
        else
        {
            ProxyClient.Close();
        }
    }
}
When you want to perform CRUD actions through the core service you can implement the following methods in the client:
public IdentifiableObjectData CreateItem(IdentifiableObjectData data)
{
    data = ProxyClient.Create(data, new ReadOptions());
    return data;
}

public IdentifiableObjectData UpdateItem(IdentifiableObjectData data)
{
    data = ProxyClient.Update(data, new ReadOptions());
    return data;
}

public IdentifiableObjectData ReadItem(string id)
{
    return ProxyClient.Read(id, new ReadOptions());
}
To construct a data object for, e.g., a Component, you can implement a ComponentBuilder class with a Create method that does this for you:
public ComponentData Create(string folderUri, string title, string content)
{
    var data = new ComponentData()
    {
        Id = "tcm:0-0-0",
        Title = title,
        Content = content,
        LocationInfo = new LocationInfo()
    };
    data.LocationInfo.OrganizationalItem = new LinkToOrganizationalItemData
    {
        IdRef = folderUri
    };
    using (CoreServiceClient client = provider.GetCoreServiceClient())
    {
        data = (ComponentData)client.CreateItem(data);
    }
    return data;
}
Hope this gets you started.
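For example, a hypothetical call site (the folder URI, the schema-specific XML and the ComponentBuilder class name are placeholders):
// Assumes the Create method above lives in a ComponentBuilder that takes an ICoreServiceProvider.
var builder = new ComponentBuilder(new CoreServiceDefaultProvider());
var component = builder.Create(
    "tcm:1-23-2",                 // target Folder URI (placeholder)
    "My first Component",
    "<Content xmlns=\"uuid:placeholder-schema-namespace\"><title>Hello</title></Content>");
Console.WriteLine("Created {0} ({1})", component.Title, component.Id);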

Storage for DataMappers in ASP.NET WebApplication

Martin Fowler's "Patterns of Enterprise Application Architecture" describes an approach to organizing the DAL as a set of mappers for entities, each with its own IdentityMap storing a specific entity type.
For example, in my ASP.NET web application:
// AbstractMapper - superclass for all mappers in the DAL
public abstract class AbstractMapper
{
    private readonly string _connectionString;
    protected string ConnectionString
    {
        get { return _connectionString; }
    }

    private readonly DbProviderFactory _dbFactory;
    protected DbProviderFactory DBFactory
    {
        get { return _dbFactory; }
    }

    #region LoadedObjects (IdentityMap)
    protected Hashtable LoadedObjects = new Hashtable();

    public void RegisterObject(long id, DomainObject obj)
    {
        LoadedObjects[id] = obj;
    }

    public void UnregisterObject(long id)
    {
        LoadedObjects.Remove(id);
    }
    #endregion

    public AbstractMapper(string connectionString, DbProviderFactory dbFactory)
    {
        _connectionString = connectionString;
        _dbFactory = dbFactory;
    }

    protected virtual string DBTable
    {
        get
        {
            throw new NotImplementedException("database table is not defined in class " + this.GetType());
        }
    }

    protected virtual T Find<T>(long id, IDbTransaction tr = null) where T : DomainObject
    {
        if (id == 0)
            return null;
        T result = (T)LoadedObjects[id];
        if (result != null)
            return result;
        IDbConnection cn = GetConnection(tr);
        IDbCommand cmd = CreateCommand(GetFindStatement(id), cn, tr);
        IDataReader rs = null;
        try
        {
            OpenConnection(cn, tr);
            rs = cmd.ExecuteReader(CommandBehavior.SingleRow);
            result = (rs.Read()) ? Load<T>(rs) : null;
        }
        catch (DbException ex)
        {
            throw new DALException("Error while loading an object by id in class " + this.GetType(), ex);
        }
        finally
        {
            CleanUpDBResources(cmd, cn, tr, rs);
        }
        return result;
    }

    protected virtual T Load<T>(IDataReader rs) where T : DomainObject
    {
        long id = GetReaderLong(rs["ID"]);
        T result = (T)LoadedObjects[id];
        if (result != null)
            return result;
        result = (T)DoLoad(id, rs);
        RegisterObject(id, result);
        return result;
    }

    // another CRUD here ...
}
// Specific mapper for the Account entity
public class AccountMapper : AbstractMapper
{
    protected override string DBTable
    {
        get { return "Account"; }
    }

    public AccountMapper(string connectionString, DbProviderFactory dbFactory) : base(connectionString, dbFactory) { }

    public Account Find(long id)
    {
        return Find<Account>(id);
    }

    public override DomainObject DoLoad(long id, IDataReader rs)
    {
        Account account = new Account(id);
        account.Name = GetReaderString(rs["Name"]);
        account.Value = GetReaderDecimal(rs["Value"]);
        account.CurrencyID = GetReaderLong(rs["CurrencyID"]);
        return account;
    }

    // ...
}
The question is: where should these mappers be stored? And how should system services (entities) call the mappers?
I decided to create a MapperRegistry containing all the mappers, so services can call mappers like this:
public class AccountService : DomainService
{
    public static Account FindAccount(long accountID)
    {
        if (accountID > 0)
            return MapperRegistry.AccountMapper.Find(accountID);
        return null;
    }
    ...
}
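For reference, the MapperRegistry itself is not shown above; a minimal sketch (written as a static facade only to match the MapperRegistry.AccountMapper call, with its lifetime left open) could look like this:
// Hypothetical sketch; where and when Configure is called is exactly the open question below.
public static class MapperRegistry
{
    public static AccountMapper AccountMapper { get; private set; }
    // ... one property per entity mapper

    public static void Configure(string connectionString, DbProviderFactory dbFactory)
    {
        AccountMapper = new AccountMapper(connectionString, dbFactory);
    }
}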
But where can I store the MapperRegistry instance? I see the following variants, but I don't like any of them:
MapperRegistry global for the application (Singleton)
Not applicable because of the need for synchronization in a multi-threaded ASP.NET application (at least Martin says that only a madman would choose this variant).
MapperRegistry per Session
This doesn't seem good either. All the ORM experts (NHibernate, LINQ to SQL, Entity Framework) advise using a DataContext (NHibernate session, ObjectContext) per request and not storing the context in the session.
Also, in my web app almost all requests are AJAX requests to EntityController.asmx (with the ScriptService attribute) returning JSON, and session state is not available there.
MapperRegistry per Request
There are a lot of separate AJAX calls, so the life cycle of the MapperRegistry would be too short. The data would almost always be retrieved from the database, resulting in low performance.
Dear experts, please help me with an architectural solution.
