WCF Transaction with multiple inserts - asp.net

When creating a user, entries are required in multiple tables. I am trying to create a transaction that creates a new entry in one table and then passes the new entity id into the parent table, and so on. The error I am getting is:
The transaction manager has disabled its support for remote/network transactions. (Exception from HRESULT: 0x8004D024)
I believe this is caused by creating multiple connections within a single TransactionScope, but I am unsure what the best/most efficient way of doing this is.
[OperationBehavior(TransactionScopeRequired = true)]
public int CreateUser(CreateUserData createData)
{
    // Create a new family group and get the ID
    var familyGroupId = createData.FamilyGroupId ?? CreateFamilyGroup();
    // Create the APUser and get the Id
    var apUserId = CreateAPUser(createData.UserId, familyGroupId);
    // Create the institution user and get the Id
    var institutionUserId = CreateInsUser(apUserId, createData.AlternateId, createData.InstitutionId);
    // Create the investigator group user and return the Id
    return AddUserToGroup(createData.InvestigatorGroupId, institutionUserId);
}
This is an example of one of the function calls; all the others follow the same format:
public int CreateFamilyGroup(string familyGroupName)
{
    var familyRepo = _FamilyRepo ?? new FamilyGroupRepository();
    var familyGroup = new FamilyGroup() { CreationDate = DateTime.Now };
    return familyRepo.AddFamilyGroup(familyGroup);
}
And the repository call for this is as follows:
public int AddFamilyGroup(FamilyGroup familyGroup)
{
    using (var context = new GameDbContext())
    {
        var newGroup = context.FamilyGroups.Add(familyGroup);
        context.SaveChanges();
        return newGroup.FamilyGroupId;
    }
}

I believe this is caused by creating multiple connections within a single TransactionScope
Yes, that is the problem. It does not really matter how you avoid it, as long as you avoid it. A common approach is to have one connection and one EF context per WCF request; you need to find a way to pass that EF context along.
The AddFamilyGroup method illustrates a common anti-pattern with EF: you are using EF as a CRUD facility, when it is supposed to be more like a live object graph connected to the database. The entire WCF request should share the same EF context. If you move in that direction, the problem goes away.
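For example, here is a minimal sketch of that direction, assuming the repositories are changed to accept a shared GameDbContext (the repository constructor shown here is hypothetical):
[OperationBehavior(TransactionScopeRequired = true)]
public int CreateUser(CreateUserData createData)
{
    // One context (and therefore one connection) for the whole request,
    // so the ambient transaction never has to be promoted to a
    // distributed (MSDTC) transaction across multiple connections.
    using (var context = new GameDbContext())
    {
        var familyGroupId = createData.FamilyGroupId ?? CreateFamilyGroup(context);
        var apUserId = CreateAPUser(context, createData.UserId, familyGroupId);
        var institutionUserId = CreateInsUser(context, apUserId, createData.AlternateId, createData.InstitutionId);
        return AddUserToGroup(context, createData.InvestigatorGroupId, institutionUserId);
    }
}

private int CreateFamilyGroup(GameDbContext context)
{
    // The repository receives the shared context instead of creating its own.
    var familyRepo = new FamilyGroupRepository(context);
    var familyGroup = new FamilyGroup { CreationDate = DateTime.Now };
    return familyRepo.AddFamilyGroup(familyGroup);
}
With a single context there is only one connection inside the TransactionScope, so the transaction stays local.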

Related

How can I optimize this function that gets all values from a Redis JSON database?

My function
public IQueryable<T> getAllPositions<T>(RedisDbs redisDbKey)
{
    List<T> positions = new List<T>();
    List<string> keys = new List<string>();
    foreach (var key in _redisServer.Keys((int)redisDbKey))
    {
        keys.Add(key.ToString());
    }
    var sportEventRet = _redis.GetDatabase((int)redisDbKey).JsonMultiGetAsync(keys.ToArray());
    foreach (var sportEvent in sportEventRet.Result)
    {
        var redisValue = (RedisValue)sportEvent;
        if (!redisValue.IsNull)
        {
            var positionEntity = JsonConvert.DeserializeObject<T>(redisValue, jsonSerializerSettings);
            positions.Add(positionEntity);
        }
    }
    return positions.AsQueryable();
}
Called as
IQueryable<IPosition> union = redisClient.getAllPositions<Position>(RedisDbs.POSITIONDB);
Where Position is a simple model with just a few simple properties, and RedisDbs is just an enum mapping to the int of a specific database. With both this application and the RedisJSON instance running locally on a high-performance server, it takes two seconds for this function to return from a database with 20k JSON values in it. This is unacceptable for my use case; I need this to finish in at most 1 second, preferably under 600 ms. Are there any optimizations I could make?
I'm convinced the problem is with the KEYS command.
Here is what redis.io says about the KEYS command:
Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code.
You can save the list of your JSON keys and then use that list in your function instead of calling KEYS.
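For example, a hedged sketch of that idea using StackExchange.Redis (the index set name and the write-side helper are assumptions, not from the question's code):
// Write side: record each key in a Redis set as the value is stored,
// so reads never need to scan the keyspace.
private const string KeyIndex = "positions:keys"; // assumed name of the index set

public async Task SetPositionAsync(string key, string json)
{
    var db = _redis.GetDatabase((int)RedisDbs.POSITIONDB);
    await db.JsonSetAsync(key, json); // NReJSON-style extension, as with JsonMultiGetAsync above
    await db.SetAddAsync(KeyIndex, key); // SADD is O(1) per key
}

// Read side: SMEMBERS returns the saved keys directly, avoiding KEYS entirely.
public async Task<RedisKey[]> GetAllPositionKeysAsync()
{
    var db = _redis.GetDatabase((int)RedisDbs.POSITIONDB);
    RedisValue[] members = await db.SetMembersAsync(KeyIndex);
    return Array.ConvertAll(members, m => (RedisKey)(string)m);
}
The returned keys can then be passed straight to JsonMultiGetAsync, removing the per-request keyspace scan.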

Handling reads of Cosmos DB container with multiple types?

I'd like to store several different object types in a single Cosmos DB container, as they are all logically grouped and make sense to read together by timestamp to avoid extra HTTP calls.
However, the Cosmos DB client API doesn't seem to provide an easy way of doing the reads with multiple types. The best solution I've found so far is to write your own CosmosSerializer and JsonConverter, but that feels clunky: https://thomaslevesque.com/2019/10/15/handling-type-hierarchies-in-cosmos-db-part-2/
Is there a more graceful way to read items of different types to a shared base class so I can cast them later, or do I have to take the hit?
Thanks!
The way I do this is to create the ItemQueryIterator and FeedResponse objects as dynamic and initially read them untyped so I can inspect a "type" property that tells me what type of object to deserialize into.
In this example I have a single container that contains both my customer data and all their sales orders. The code looks like this:
string sql = "SELECT * FROM c WHERE c.customerId = @customerId";
FeedIterator<dynamic> resultSet = container.GetItemQueryIterator<dynamic>(
    new QueryDefinition(sql)
        .WithParameter("@customerId", customerId),
    requestOptions: new QueryRequestOptions
    {
        PartitionKey = new PartitionKey(customerId)
    });
CustomerV4 customer = new CustomerV4();
List<SalesOrder> orders = new List<SalesOrder>();
while (resultSet.HasMoreResults)
{
    // Dynamic response. Deserialize into POCOs based upon the "type" property.
    FeedResponse<dynamic> response = await resultSet.ReadNextAsync();
    foreach (var item in response)
    {
        if (item.type == "customer")
        {
            customer = JsonConvert.DeserializeObject<CustomerV4>(item.ToString());
        }
        else if (item.type == "salesOrder")
        {
            orders.Add(JsonConvert.DeserializeObject<SalesOrder>(item.ToString()));
        }
    }
}
Update:
You do not have to use dynamic types if you want to create a "base document" class and then derive from that. Deserialize into the base document class, then check its type property to decide which class to deserialize the payload into.
You can also extend this pattern when you evolve your data models over time with a docVersion property.
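A minimal sketch of that base-class variant (the class and property names are illustrative assumptions, not from the Cosmos SDK):
// Assumed base class: every document carries a discriminator and a version.
public class DocumentBase
{
    [JsonProperty("type")]
    public string Type { get; set; }

    [JsonProperty("docVersion")]
    public int DocVersion { get; set; }
}

public class CustomerV4 : DocumentBase { /* customer properties */ }
public class SalesOrder : DocumentBase { /* sales order properties */ }

static DocumentBase DeserializeDocument(string json)
{
    // Read only the envelope first to inspect the discriminator,
    // then deserialize the full payload into the concrete type.
    var envelope = JsonConvert.DeserializeObject<DocumentBase>(json);
    switch (envelope.Type)
    {
        case "customer":
            return JsonConvert.DeserializeObject<CustomerV4>(json);
        case "salesOrder":
            return JsonConvert.DeserializeObject<SalesOrder>(json);
        default:
            return envelope; // unknown type: keep the envelope only
    }
}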

Exchange Web Services find item by unique id

I just started using Microsoft Exchange Web Services for the first time. What I want to be able to do is the following:
Create Meeting
Update Meeting
Cancel/Delete Meeting
These meetings are created in an ASP.NET MVC application and saved into a SQL Server database. I simply wish to integrate this with the on-site Exchange Server. So far, I'm able to create my meeting with the following code:
public static Task<string> CreateMeetingAsync(string from, List<string> to, string subject, string body, string location, DateTime begin, DateTime end)
{
    var tcs = new TaskCompletionSource<string>();
    try
    {
        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2013);
        service.Credentials = CredentialCache.DefaultNetworkCredentials;
        //service.UseDefaultCredentials = true;

        // I suspect the Service URL needs to be set from the user email address because this is then used to set the organiser
        // of the appointment constructed below. The Organizer is a read-only field that cannot be manually set. (security measure)
        service.AutodiscoverUrl(from);
        //service.Url = new Uri(WebConfigurationManager.AppSettings["ExchangeServer"]);

        Appointment meeting = new Appointment(service);
        meeting.Subject = subject;
        meeting.Body = "<span style=\"font-family:'Century Gothic'\" >" + body + "</span><br/><br/><br/>";
        meeting.Body.BodyType = BodyType.HTML;
        meeting.Start = begin;
        meeting.End = end;
        meeting.Location = location;
        meeting.ReminderMinutesBeforeStart = 60;
        foreach (string attendee in to)
        {
            meeting.RequiredAttendees.Add(attendee);
        }
        meeting.Save(SendInvitationsMode.SendToAllAndSaveCopy);
        tcs.TrySetResult(meeting.Id.UniqueId);
    }
    catch (Exception ex)
    {
        tcs.TrySetException(ex);
    }
    return tcs.Task;
}
This successfully creates my meeting, places it into the user's calendar in outlook and sends a meeting request to all attendees. I noticed the following exception when attempting to call meeting.Save(SendInvitationsMode.SendToAllAndSaveCopy); twice:
This operation can't be performed because this service object already has an ID. To update this service object, use the Update() method instead.
I thought: Great, it saves the item in Exchange with a unique id. I'll save this ID in my application's database and use it later to edit/cancel the meeting. That is why I return the id: tcs.TrySetResult(meeting.Id.UniqueId);
This is saved nicely into my application's database.
Now, I am attempting to do the next part where I update the meeting, but I cannot find a way to search for the item based on the unique identifier that I'm saving. An example I found on code.msdn uses the service.FindItems() method with a query that searches the subject:
string querystring = "Subject:Lunch";
FindItemsResults<Item> results = service.FindItems(WellKnownFolderName.Calendar, querystring, view);
I don't like this. There is a chance that the user created a meeting outside of my application that coincidentally has the same subject, and here comes my application and cancels it. I tried to determine whether it's possible to use the unique id in the query string, but this does not seem possible.
I did notice on the above query-string page that the last property you can search on is (property is not specified), which searches in "all word phase properties". I thus tried simply putting the id into the query, but this returns no results:
FindItemsResults<Item> results = service.FindItems(WellKnownFolderName.Calendar, "AAMkADJhZDQzZWFmLWYxYTktNGI1Yi1iZTA5LWVmOTE3MmJiMGIxZgBGAAAAAAAqhOzrXRdvRoA6yPu9S/XnBwDXvBDBMebkTqUWix3HxZrnAAAA2LKLAAB5iS34oLxkSJIUht/+h9J1AAFlEVLAAAA=", view);
Use the Appointment.Bind static method, providing a service object and the ItemId saved in your database. Be aware that the meeting workflow (invite, accept, reject) can re-create a meeting on the same calendar with a new ItemId. But if you are just looking at the meetings you create on your own calendar, you should be OK.
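A minimal sketch of that, using the EWS Managed API (storedUniqueId stands for the UniqueId string returned by CreateMeetingAsync above):
// Rehydrate the appointment from the id stored in the database.
Appointment meeting = Appointment.Bind(service, new ItemId(storedUniqueId));

// Apply the changes and push the update to all attendees.
meeting.Subject = newSubject;
meeting.Start = newBegin;
meeting.End = newEnd;
meeting.Update(ConflictResolutionMode.AlwaysOverwrite,
    SendInvitationsOrCancellationsMode.SendToAllAndSaveCopy);

// Or cancel the meeting entirely:
// meeting.CancelMeeting("Cancelled from the MVC application");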

Can this code be made more generic?

I'm creating website functionality in C#/ASP.NET 4.0.
I have a function that queries my database and returns a list of loan providers ("lenders").
Once I have built my list, I then iterate through it and perform an HttpPost for each lender. Each lender has very specific and unique data requirements, so I have a class for each lender that implements an interface:
MrLenderRequest : IPingtreeRequest
When I loop through the list, I need to map the data to the class. Currently I do it in this very ungeneric way, calling this code from within the loop:
IPingtreeRequest GetLenderRequest(string lender)
{
    IPingtreeRequest lenderRequest = null;
    switch (lender)
    {
        case "MrLender":
            lenderRequest = new MrLenderRequest(_data);
            break;
        default:
            lender.ThrowCaseNotHandled();
            break;
    }
    return lenderRequest;
}
This is ok when you have 4 or 5 lenders, but not if there are 50 or more. I wondered if there was a more elegant/generic way of mapping the class.
You can use Type.GetType and then use Activator to instantiate the object:
string lender = "MrLender";
var lenderType = Type.GetType(lender + "Request"); // Include your namespace
IPingtreeRequest lenderRequest = (IPingtreeRequest)Activator.CreateInstance(lenderType, _data); // Pass the constructor arguments the class expects
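A slightly fuller sketch of the same idea as a drop-in replacement for GetLenderRequest (the namespace is an assumption; adjust it to wherever the request classes live):
IPingtreeRequest GetLenderRequest(string lender)
{
    // Type.GetType needs the namespace-qualified name (and the assembly
    // name as well if the type lives in another assembly).
    var lenderType = Type.GetType("MyCompany.Pingtree." + lender + "Request");
    if (lenderType == null || !typeof(IPingtreeRequest).IsAssignableFrom(lenderType))
    {
        lender.ThrowCaseNotHandled(); // same guard as the original switch
        return null;
    }
    // Forward the same constructor argument the switch version used.
    return (IPingtreeRequest)Activator.CreateInstance(lenderType, _data);
}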

How do I create a unit test that updates a record in the database in ASP.NET?

While technically we would not call this a 'unit test' but an 'integration test' (as Oded explained), you can do this by using a unit testing framework such as MSTest (part of Visual Studio 2008/2010 Professional) or one of the freely available unit testing frameworks, such as NUnit.
However, testing an ASP.NET web project is usually pretty hard, especially when you've put all your logic inside web pages. The best thing to do is to extract all your business logic into a separate layer (usually a separate project within your solution) and call that logic from within your web pages. But perhaps you've already got this separation, which would be great.
This way you can also call that logic from within your tests. For integration tests, it is best to have a separate test database that contains a known (and stable) set of data (or is completely empty). Do not use a copy of your production database, because when its data changes, your tests might suddenly fail. You should also make sure that all changes made to the database by an integration test are rolled back; otherwise the data in your test database is constantly changing, which could cause your tests to fail unexpectedly.
I always use a TransactionScope in my integration tests (and never in my production code). This ensures that all data changes are rolled back. Here is an example of what such an integration test might look like using MSTest:
[TestClass]
public class CustomerMovedCommandTests
{
    // This test verifies whether the Execute method of the
    // CustomerMovedCommand class in the business layer
    // makes the expected changes in the database.
    [TestMethod]
    public void Execute_WithValidAddress_Succeeds()
    {
        using (new TransactionScope())
        {
            // Arrange
            int custId = 100;
            using (var db = ContextFactory.CreateContext())
            {
                // Insert customer 100 into the test database.
                db.Customers.InsertOnSubmit(new Customer()
                {
                    Id = custId, City = "London", Country = "UK"
                });
                db.SubmitChanges();
            }

            string expectedCity = "New York";
            string expectedCountry = "USA";
            var command = new CustomerMovedCommand();
            command.CustomerId = custId;
            command.NewAddress = new Address()
            {
                City = expectedCity, Country = expectedCountry
            };

            // Act
            command.Execute();

            // Assert
            using (var db = ContextFactory.CreateContext())
            {
                var customer = db.Customers.Single(c => c.Id == custId);
                Assert.AreEqual(expectedCity, customer.City);
                Assert.AreEqual(expectedCountry, customer.Country);
            }
        } // Dispose (without Complete) rolls back everything.
    }
}
I hope this helps, but next time, please be a little more specific in your question.
