Dynamically adding key/values as children to a JSON.NET JObject - json.net

I am currently using this ugly code to add a bunch of key/value pairs (kept in a Dictionary) to a JObject. It does not add them as children, but as siblings. The code works, but it would be cleaner if they were added as child nodes. (The dynamic object e holds the JObject.)
public void trigger(dynamic e, Pairs extras)
{
    if (Post != null)
    {
        foreach (KeyValuePair<string, object> entry in extras)
        {
            Newtonsoft.Json.Linq.JValue val = new Newtonsoft.Json.Linq.JValue(entry.Value);
            e.Add(entry.Key, val);
        }
        Post(this, e);
    }
}

For my particular needs (sending a dynamic object via SignalR, which uses JSON.NET to serialize), I have found a simpler solution.
My issue was that I had a dynamic sealed class that JSON.NET was creating a JToken for, and then I was trying to add more data. When I did e.Add(string, JToken), it created a second child tree. That was ugly.
My solution can now be found at:
can one convert a dynamic object to an ExpandoObject (c#)
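The direction of that linked answer, roughly sketched: convert the JObject to an ExpandoObject so that the extra pairs become ordinary members (children) before SignalR serializes it. The extension method and names below are illustrative assumptions, not the linked code:

using System.Collections.Generic;
using System.Dynamic;
using Newtonsoft.Json;
using Newtonsoft.Json.Converters;
using Newtonsoft.Json.Linq;

public static class JObjectExtensions
{
    public static ExpandoObject ToExpandoWith(this JObject source, IDictionary<string, object> extras)
    {
        // Round-trip through JSON text; ExpandoObjectConverter makes nested
        // objects come back as ExpandoObjects rather than JObjects.
        var expando = JsonConvert.DeserializeObject<ExpandoObject>(
            source.ToString(), new ExpandoObjectConverter());

        // ExpandoObject implements IDictionary<string, object>, so the extra
        // pairs are added directly as members (children, not siblings).
        var map = (IDictionary<string, object>)expando;
        foreach (var entry in extras)
            map[entry.Key] = entry.Value;

        return expando;
    }
}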

Related

How do I pass object (ObjectProxy) from Flex back to .NET WebService?

So, there is a wealth of Flex articles online about how to handle a .NET WebMethod that returns a DataSet or DataTable. Here is an example:
Handling web service results that contain .NET DataSets or DataTables
So, I know how to use result.Tables.<tablename>.Rows and the like. But what I cannot seem to figure out or find online is how to go the other direction: a method to pass objects or tables back to the .NET web service from Flex, without stooping to passing XML as a string, or making huge web service methods that have one parameter for each property/column of the object being stored. Surely others, smarter than I, have tackled this issue.
I am using ASP.NET 2.0 Typed DataSets, and it would be really nice if I could just pass one object or array of objects from Flex to the web service, populate my Typed DataTable, and do an Update() through the corresponding typed TableAdapter. My dream would be a [WebMethod] something like one of these:
public void SaveObject(TypedDataTable objToSave) { ... }
public void SaveObject(TypedDataSet objToSave) { ... }
I've had the typed DataTables saving to the database; I know how to do that part and even a few tricks. But we had XML being passed back and forth as a string, which was ugly. I'm trying to get to a more object-based approach.
The best object-based approach is AMF. I assume it's probably a bit late in your development cycle to change your integration layer, but otherwise I don't know of a way to get around marshalling your object(s) back into XML or separating them out into their primitive components.
For .NET implementations of AMF check out:
FluorineFX (FOSS)
WebORB for .NET
It's amazing how easy things become once AMF is used. For example, using the Mate MVC framework, an AMF call passing a complex object to the server looks something like this:
<mate:RemoteObjectInvoker instance="yourWebservice" method="saveComplexObject" showBusyCursor="true">
    <mate:resultHandlers>
        <mate:CallBack method="saveComplexObjectSuccess" arguments="{[resultObject]}" />
    </mate:resultHandlers>
    <mate:faultHandlers>
        <mate:MethodInvoker generator="{DataManager}" method="presentFault" arguments="{fault}" />
    </mate:faultHandlers>
</mate:RemoteObjectInvoker>
With result and fault handlers being optional.
The direction I ended up going was close to what I hoped was possible, but is "hack-ish" enough that I would consider SuperSaiyen's suggestion to use AMF/ORM a better solution for new/greenfield projects.
For sake of example/discussion, let's say I am working with a Person table in a database, and have a typed DataSet called PeopleDataSet that has PersonTableAdapter and PersonDataTable with it.
READ would look like this in the .NET web service:
[WebMethod]
public PeopleDataSet.PersonDataTable GetAllPeople() {
    var adapter = new PersonTableAdapter();
    return adapter.GetData();
}
... which in Flex would give you a result Object that you can use like this:
// FLEX (AS3)
something.dataProvider = result.Tables.Person.Rows;
Check out the link I put in the question for more details on how Flex handles that.
CREATE/UPDATE - This is the part I had to figure out, and why I asked this question. Flex first this time:
// FLEX (AS3)
var person:Object = {
    PersonID: -1, // -1 for CREATE, actual ID for UPDATE
    FirstName: "John",
    LastName: "Smith",
    BirthDate: "07/19/1983",
    CreationDate: "1997-07-16T19:20+01:00" // need W3C DTF for Date WITH Time
};
_pplWebSvcInstance.SavePerson(person); // do the web method call
(For handling those W3C datetimes, see How to parse an ISO formatted date in Flex (AS3)?)
On the .NET web service side, the trick was figuring out the correct type for the web method's parameter. If you go with just Object and then step into a call with the debugger, you'll see .NET figures it is an XmlNode[]. Here is what I figured out to do:
[WebMethod]
public int SavePerson(PeopleDataSet p) {
    // Now 'p' will be a PeopleDataSet with a Table called 'p' that has our data
    // row(s) (just row, in this case) as string columns in random order.
    // It WILL NOT WORK to use PeopleDataSet.PersonDataTable as the type for the
    // parameter, that will always result in an empty table. That is why the
    // LoadFlexDataTable utility method below is necessary.
    var adapter = new PersonTableAdapter();
    var tbl = new PeopleDataSet.PersonDataTable();
    tbl.LoadFlexDataTable(p.Tables[0]); // see below

    // the rest of this might be familiar territory for working with DataSets
    PeopleDataSet.PersonRow row = tbl.FirstOrDefault();
    if (row != null) {
        if (row.PersonID > 0) { // doing UPDATE
            row.AcceptChanges();
            row.SetModified();
        }
        else { // doing CREATE
            row.CreationDate = DateTime.UtcNow; // set defaults here
            row.IsDeleted = false;
        }
        adapter.Update(row); // database call
        return row.PersonID;
    }
    return -1;
}
Now, the kluge utility method you saw called above. I did it as an extension method; that is optional:
// for getting the un-typed DataTable Flex gives us into our typed DataTable
public static void LoadFlexDataTable(this DataTable tbl, DataTable flexDataTable)
{
    tbl.BeginLoadData();
    tbl.Load(flexDataTable.CreateDataReader(), LoadOption.OverwriteChanges);
    tbl.EndLoadData();

    // Probably a bug, but all of the ampersands (&) in string columns will be
    // unnecessarily escaped (&amp;) - kluge to fix it.
    foreach (DataRow row in tbl.Rows)
    {
        row.SetAdded(); // default to "Added" state for each row
        foreach (DataColumn col in tbl.Columns) // fix &amp; to & on string columns
        {
            if (col.DataType == typeof(String) && !row.IsNull(col))
                row[col] = (row[col] as string).Replace("&amp;", "&");
        }
    }
}

Entity Framework and MVC 3: The relationship could not be changed because one or more of the foreign-key properties is non-nullable

I have been trying to use one View for updating an object and all its child collections (based on one-to-many relationships in an SQL Server database with an Entity Framework model).
It was suggested I should use AutoMapper, and I tried that and got it to work. (see Trying to use AutoMapper for model with child collections, getting null error in Asp.Net MVC 3 ).
But that solution is really hard to maintain. When I go back to the simple approach I had to begin with, using an entity object directly as the model (a "Consultant" object, the parent of all the child collections), I get all the correct changed data back in the POST, and I can use UpdateModel to apply it, including the child collections. Simple. Granted, UpdateModel only worked after creating a custom model binder from a tip here on SO:
From my custom model binder:
public override object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
{
    bindingContext.ModelMetadata.ConvertEmptyStringToNull = false;
    return base.BindModel(controllerContext, bindingContext);
}

protected override void SetProperty(ControllerContext controllerContext, ModelBindingContext bindingContext, PropertyDescriptor propertyDescriptor, object value)
{
    ModelMetadata propertyMetadata = bindingContext.PropertyMetadata[propertyDescriptor.Name];
    propertyMetadata.Model = value;
    string modelStateKey = CreateSubPropertyName(bindingContext.ModelName, propertyMetadata.PropertyName);

    // Try to set a value into the property unless we know it will fail (read-only
    // properties and null values with non-nullable types)
    if (!propertyDescriptor.IsReadOnly)
    {
        try
        {
            if (value == null)
            {
                propertyDescriptor.SetValue(bindingContext.Model, value);
            }
            else
            {
                Type valueType = value.GetType();
                if (valueType.IsGenericType && valueType.GetGenericTypeDefinition() == typeof(EntityCollection<>))
                {
                    IListSource ls = (IListSource)propertyDescriptor.GetValue(bindingContext.Model);
                    IList list = ls.GetList();
                    foreach (var item in (IEnumerable)value)
                    {
                        list.Add(item);
                    }
                }
                else
                {
                    propertyDescriptor.SetValue(bindingContext.Model, value);
                }
            }
        }
        catch (Exception ex)
        {
            // Only add if we're not already invalid
            if (bindingContext.ModelState.IsValidField(modelStateKey))
            {
                bindingContext.ModelState.AddModelError(modelStateKey, ex);
            }
        }
    }
}
Here's my simple Edit POST method:
[HttpPost]
[ValidateInput(false)] // To allow HTML in description box
public ActionResult Edit(int id, FormCollection collection)
{
    Consultant consultant = _repository.GetConsultant(id);
    UpdateModel(consultant);
    _repository.Save();
    return RedirectToAction("Index");
}
With that binder in place, UpdateModel works. The problem is at the next stage: calling SaveChanges on the context fails with this error:
The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted.
I don't understand what is wrong. I'm seeing all the correct values in the posted Consultant object; I just can't save it to the database. The AutoMapper route (although an interesting tool) is not working well in this case: it's complicating my code immensely and making an application that should be rather simple a nightmare to maintain.
Can anyone offer any insight into why I'm getting this error and how to overcome it?
UPDATE:
Reading some posts here, I found one that seemed slightly related: How to update model in the database, from asp.net MVC2, using Entity Framework? I don't know if it relates to this, but when I inspected the Consultant object after the POST, it seems the object itself has an EntityKey, but the individual items in a collection do not (EntityKeySet = null). Each item, however, does have the correct id. I don't pretend to understand any of this with the EntityKey, so please explain if it has any bearing on my issue, and if so, how to resolve it...
UPDATE 2:
I thought of something that might be related to my problem: the View uses a technique described by Steven Sanderson (see http://blog.stevensanderson.com/2010/01/28/editing-a-variable-length-list-aspnet-mvc-2-style/ ), and when debugging it seems as if UpdateModel has trouble matching the items in a collection in the View with the ones in the actual Consultant object. I'm wondering if this has to do with the indexing in this technique. Here's the helper from that code (I can't follow it very well myself, but it uses a Guid to create indexes, which might be the problem):
public static class HtmlPrefixScopeExtensions
{
    private const string idsToReuseKey = "__htmlPrefixScopeExtensions_IdsToReuse_";

    public static IDisposable BeginCollectionItem(this HtmlHelper html, string collectionName)
    {
        var idsToReuse = GetIdsToReuse(html.ViewContext.HttpContext, collectionName);
        string itemIndex = idsToReuse.Count > 0 ? idsToReuse.Dequeue() : Guid.NewGuid().ToString();

        // autocomplete="off" is needed to work around a very annoying Chrome behaviour whereby it reuses old values after the user clicks "Back", which causes the xyz.index and xyz[...] values to get out of sync.
        html.ViewContext.Writer.WriteLine(string.Format("<input type=\"hidden\" name=\"{0}.index\" autocomplete=\"off\" value=\"{1}\" />", collectionName, html.Encode(itemIndex)));

        return BeginHtmlFieldPrefixScope(html, string.Format("{0}[{1}]", collectionName, itemIndex));
    }

    public static IDisposable BeginHtmlFieldPrefixScope(this HtmlHelper html, string htmlFieldPrefix)
    {
        return new HtmlFieldPrefixScope(html.ViewData.TemplateInfo, htmlFieldPrefix);
    }

    private static Queue<string> GetIdsToReuse(HttpContextBase httpContext, string collectionName)
    {
        // We need to use the same sequence of IDs following a server-side validation failure,
        // otherwise the framework won't render the validation error messages next to each item.
        string key = idsToReuseKey + collectionName;
        var queue = (Queue<string>)httpContext.Items[key];
        if (queue == null)
        {
            httpContext.Items[key] = queue = new Queue<string>();
            var previouslyUsedIds = httpContext.Request[collectionName + ".index"];
            if (!string.IsNullOrEmpty(previouslyUsedIds))
                foreach (string previouslyUsedId in previouslyUsedIds.Split(','))
                    queue.Enqueue(previouslyUsedId);
        }
        return queue;
    }

    private class HtmlFieldPrefixScope : IDisposable
    {
        private readonly TemplateInfo templateInfo;
        private readonly string previousHtmlFieldPrefix;

        public HtmlFieldPrefixScope(TemplateInfo templateInfo, string htmlFieldPrefix)
        {
            this.templateInfo = templateInfo;
            previousHtmlFieldPrefix = templateInfo.HtmlFieldPrefix;
            templateInfo.HtmlFieldPrefix = htmlFieldPrefix;
        }

        public void Dispose()
        {
            templateInfo.HtmlFieldPrefix = previousHtmlFieldPrefix;
        }
    }
}
But then again, I wouldn't have thought this should be the problem, since the hidden input contains the id in the value attribute, and I thought UpdateModel just looked at the name of the field to get Programs (the collection) and Name (the property), and then at the value to get the id...? And yet there seems to be some mismatch during the update. Anyway, here's the generated HTML from Firebug as well:
<td>
    <input type="hidden" value="1" name="Programs[cabac7d3-855f-45d8-81b8-c31fcaa8bd3d].Id" id="Programs_cabac7d3-855f-45d8-81b8-c31fcaa8bd3d__Id" data-val-required="The Id field is required." data-val-number="The field Id must be a number." data-val="true">
    <input type="text" value="Visual Studio" name="Programs[cabac7d3-855f-45d8-81b8-c31fcaa8bd3d].Name" id="Programs_cabac7d3-855f-45d8-81b8-c31fcaa8bd3d__Name">
    <span data-valmsg-replace="true" data-valmsg-for="Programs[cabac7d3-855f-45d8-81b8-c31fcaa8bd3d].Name" class="field-validation-valid"></span>
</td>
Anyone know if this is the problem? And if so, how can I work around it to be able to easily update the collections with UpdateModel? (While still being able to add or remove items in the View before POST, which was the purpose of this technique to begin with).
It looks like there is a Parent entity that has a one-to-many relationship with your Consultant entity. When you change an attribute of the Consultant entity that is used as the foreign key for that relationship, Entity Framework sets the relevant field in the Parent entity to null to decouple the relationship. When that field is not nullable, you'll get this error. Actually, that error message is surprisingly good; I've seen this problem produce far more cryptic errors.
So, I recommend that you check the parent entity in the database and proceed to a remedy from there (if you can change the field to nullable, all is well; if it is part of a different constraint, such as a primary key, you'll have to fiddle with your object models). I'd ask you to post your entity models, but the chunk of text is intimidating as it is.
I think the error you are getting is related to: EF 4: Removing child object from collection does not delete it - why? You have created an orphan somewhere.
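If an orphaned child row is indeed the cause, the usual remedy is to delete the child through the context rather than only removing it from the parent's collection. A minimal sketch, assuming an EF 4 ObjectContext and illustrative entity names (not the poster's actual model):

public void RemoveProgram(MyEntities db, Consultant consultant, int programId)
{
    var program = consultant.Programs.First(p => p.Id == programId);

    // Removing the item from consultant.Programs alone only severs the
    // relationship, which EF translates into "set the FK to null" and fails
    // for a non-nullable column. Deleting through the context removes the row.
    db.DeleteObject(program);
    db.SaveChanges();
}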
Yes, it is related to HtmlPrefixScopeExtensions, but only because you are using the MVC Futures model binders.
In Global.asax.cs, comment out the line
Microsoft.Web.Mvc.ModelBinding.ModelBinderConfig.Initialize();
and retry: it will work OK!
The problem happens because the MVC Futures model binder does not handle this case correctly. It converts the form data into your model correctly when you submit the form, but it has a problem filling the ModelState object when you use HtmlPrefixScopeExtensions to generate non-incremental ids.
The model itself is correctly created from the form data. The problem lies inside ModelState, which contains only the last value of the collection instead of all elements of the collection.
The strongly typed helper method that renders the list only selects items which are in your Model property list and in the matching ModelState entry (which is converted into a list). So, because there is only one item in the matching ModelState entry, the other list items get deselected.
This method, called by the strongly typed helper code:
htmlHelper.GetModelStateValue(fullName, typeof(string[]))
returns only the last element of the list, because ModelState["Programs[cabac7d3-855f-45d8-81b8-c31fcaa8bd3d].List"].Value contains only the last element of the list.
This is a bug (or non supported scenario) in MVC3 Futures extensible model binders.

linq-to-sql "an attempt has been made to attach or add an entity that is not new"?

I've been getting several errors:
cannot add an entity with a key that is already in use
An attempt has been made to attach or add an entity that is not new, perhaps having been loaded from another datacontext
In case 1, this stems from trying to set the key for an entity rather than the entity itself. In case 2, I'm not attaching an entity, but I am doing this:
MyParent.Child = EntityFromOtherDataContext;
I've been using the pattern of wrapping everything in a using block around the DataContext. In my case, I am using this in a Web Forms scenario, and obviously moving the DataContext object to a class-wide member variable solves this.
My questions are thus 2 fold:
How can I get rid of these errors and not have to structure my program in an odd way or pass the datacontext around while keeping the local-wrap pattern? I assume I could make another hit to the database but that seems very inefficient.
Would most people recommend that moving the datacontext to the class wide scope is desirable for web pages?
LINQ to SQL is not suited to disconnected scenarios. You can copy your entity to a DTO having a similar structure as the entity and then pass it around, copying the properties back to an entity when it's time to attach it to a new data context. You can also deserialize/reserialize the entity before attaching it to a new data context to get a clean state. The first workaround clearly violates the DRY principle, whereas the second is just ugly. If you don't want to use either of these solutions, the only option left is to retrieve the entity you're about to modify by its PK, hitting the DB. That means an extra query before every update. Or use another ORM, if that's an option for you. Entity Framework 4 (included with .NET 4) with self-tracking entities is what I'm currently using on a Web Forms project, and everything is great so far.
DataContext is not thread-safe and should only be used with using at the method level, as you already do. You could consider adding a lock to a static data context, but that means no concurrent access to the database. Plus, you'll get entities accumulated in memory inside the context that will turn into potential problems.
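A rough sketch of the fetch-by-PK workaround mentioned above, with purely illustrative names (MyDataContext, Customer, and CustomerDto are assumptions, not code from this thread):

public void UpdateCustomer(CustomerDto dto)
{
    using (var dc = new MyDataContext())
    {
        // the "extra query": re-fetch the row by primary key inside this DataContext
        var customer = dc.Customers.Single(c => c.Id == dto.Id);

        // copy the edited values onto the attached entity, then submit
        customer.Name = dto.Name;
        customer.Email = dto.Email;
        dc.SubmitChanges();
    }
}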
For those that came after me, I'll provide my own take:
The error "an attempt has been made to add or attach an entity that is not new" stems from this operation:
Child.Parent = ParentEntityFromOtherDataContext
We can reload the object using the current datacontext to avoid the problem in this way:
Child.Parent = dc.Entries.Select(t => t).Where(t => t.ID == parentEntry.ID).SingleOrDefault();
Or one could do this
MySubroutine(DataContext previousDataContext)
{
work...
}
Or in a web forms scenario, I am leaning to making the DataContext a class member such as this:
DataContext _dc = new DataContext();
Yes, the DataContext is supposed to represent a unit of work. But it is a lightweight object, and in a Web Forms scenario where a page is fairly transient, the pattern can be changed from the using (dc = new DataContext()) block to simply using the member variable _dc. I am leaning toward this last solution because it will hit the database less and require less code.
But, are there gotchas to even this solution? I'm thinking along the lines of some stale data being cached.
What I usually do is this
public abstract class BaseRepository : IDisposable
{
    public BaseRepository() :
        this(new MyDataContext(ConfigurationManager.ConnectionStrings["myConnection"].ConnectionString))
    {
    }

    public BaseRepository(MyDataContext dataContext)
    {
        this.DataContext = dataContext;
    }

    public MyDataContext DataContext { get; set; }

    public void Dispose()
    {
        this.DataContext.Dispose();
    }
}
Then imagine I have the following repository
public class EmployeeRepository : BaseRepository
{
    public EmployeeRepository() : base()
    {
    }

    public EmployeeRepository(MyDataContext dataContext) : base(dataContext)
    {
    }

    public Employee SelectById(Guid id)
    {
        return this.DataContext.Employees.FirstOrDefault(e => e.Id == id);
    }

    public void Update(Employee employee)
    {
        // re-fetch the attached entity by its key, then copy the new values onto it
        Employee original = this.SelectById(employee.Id);
        if (original != null)
        {
            original.Name = employee.Name;
            //others
            this.DataContext.SubmitChanges();
        }
    }
}
And in my controllers (I am using ASP.NET MVC):
public ActionResult Update(Employee employee)
{
    using (EmployeeRepository employeeRepository = new EmployeeRepository())
    {
        if (ModelState.IsValid)
        {
            employeeRepository.Update(employee);
        }
    }
    //other treatment
}
So the DataContext is properly disposed of, and I can use it across the same instance of my EmployeeRepository.
Now imagine that for a specific action I want the employee's company to be loaded (in order to be displayed in my view later); I can do this:
public ActionResult Select(Guid id)
{
    using (EmployeeRepository employeeRepository = new EmployeeRepository())
    {
        // Specifying special load options for this specific action:
        DataLoadOptions options = new DataLoadOptions();
        options.LoadWith<Employee>(e => e.Company);
        employeeRepository.DataContext.LoadOptions = options;
        return View(employeeRepository.SelectById(id));
    }
}

How do I create a shallow copy of an object so that it may be serialized and sent via a web method call?

I would like to serialize the properties of the HttpBrowserCapibilities object so that it may be returned via a web method call. Currently the object cannot be serialized:
Cannot serialize member System.Web.Configuration.HttpCapabilitiesBase.Capabilities of type System.Collections.IDictionary, because it implements IDictionary.
...which is understandable. However, I would like to simply copy out the properties and their values to a hierarchy, i.e.
<HttpBrowserCapabilities>
    <IsMobile>true</IsMobile>
</HttpBrowserCapabilities>
I'm starting to think I would need to use reflection to copy this object, but I haven't reached a conclusion. Does anyone have any suggestions to keep this simple?
Thanks,
George
Originally I posted an answer using XmlDocument, but I glossed over some of the web method stuff and didn't realize you were really trying to map a DTO.
Reflection sounds complicated but it really isn't. The following snippet will do what you want:
public static void Populate(object dest, IDictionary dictionary)
{
    Type t = dest.GetType();
    foreach (object key in dictionary.Keys) // iterate the keys, not the DictionaryEntry pairs
    {
        PropertyInfo prop = t.GetProperty(key.ToString(),
            BindingFlags.Instance | BindingFlags.Public);
        if ((prop != null) && prop.CanWrite)
        {
            object value = dictionary[key];
            prop.SetValue(dest, value, null);
        }
    }
}
Then invoke this as:
BrowserCapsDto dto = new BrowserCapsDto();
Populate(dto, Capabilities); // Capabilities is the real BrowserCaps
It's pretty easy because you already have an IDictionary and thus you already know all of the possible names you can map; you don't actually need to use any reflection on the source, just the destination.

strongly typed sessions in asp.net

Pardon me if this question has already been asked. HttpContext.Current.Session["key"] returns an object and we would have to cast it to that particular Type before we could use it. I was looking at various implementations of typed sessions
http://www.codeproject.com/KB/aspnet/typedsessionstate.aspx
http://weblogs.asp.net/cstewart/archive/2008/01/09/strongly-typed-session-in-asp-net.aspx
http://geekswithblogs.net/dlussier/archive/2007/12/24/117961.aspx
and I felt that we needed to add some more code (correct me if I'm wrong) to the SessionManager if we wanted to add a new type of object into session, either as a method or as a separate wrapper. I thought we could use generics:
public static class SessionManager<T> where T : class
{
    public static void SetSession(string key, object objToStore)
    {
        HttpContext.Current.Session[key] = objToStore;
    }

    public static T GetSession(string key)
    {
        return HttpContext.Current.Session[key] as T;
    }
}
Is there any inherent advantage in using
SessionManager<ClassType>.GetSession("sessionString")
over using
HttpContext.Current.Session["sessionString"] as ClassType?
I was also thinking it would be nice to have something like
SessionManager["sessionString"] = objToStoreInSession,
but found that a static class cannot have an indexer. Is there any other way to achieve this?
My thought was to create a SessionObject which would store the Type and the object, then add this object to Session (using a SessionManager) with the key. When retrieving, cast all objects to SessionObject, get the type (say t) and the object (say obj), cast obj as t, and return it.
public class SessionObject { public Type type {get;set;} public Object obj{get;set;} }
This would not work either (the return signature would be the same, but the return types would be different).
Is there any other elegant way of saving/retrieving objects in session in a more type-safe way?
For a very clean, maintainable, and slick way of dealing with Session, look at this post. You'll be surprised how simple it can be.
A downside of the technique is that consuming code needs to be aware of what keys to use for storage and retrieval. This can be error prone, as the key needs to be exactly correct, or else you risk storing in the wrong place, or getting a null value back.
I actually use the strong-typed variation, since I know what I need to have in the session, and can thus set up the wrapping class to suit. I'd rather have the extra code in the session class, and not have to worry about the key strings anywhere else.
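For illustration, the strong-typed variation amounts to something like the sketch below; the property names and the UserProfile type are assumptions, not code from the linked post:

using System.Web;

public static class MySession
{
    // Each property hides its key string, so callers never touch Session["..."] directly.
    public static UserProfile CurrentUser
    {
        get { return HttpContext.Current.Session["CurrentUser"] as UserProfile; }
        set { HttpContext.Current.Session["CurrentUser"] = value; }
    }

    public static int? SelectedProjectId
    {
        get { return (int?)HttpContext.Current.Session["SelectedProjectId"]; }
        set { HttpContext.Current.Session["SelectedProjectId"] = value; }
    }
}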
You can simply use a singleton pattern for your session object. That way you can model your entire session from a single composite structure object. This post refers to what I'm talking about and discusses the Session object as a weakly typed object: http://allthingscs.blogspot.com/2011/03/documenting-software-architectural.html
Actually, if you are looking to strongly type the objects, place the type parameter at the method level, like:
public T GetValue<T>(string sessionKey)
{
    // cast the stored object to the requested type on the way out
    return (T)HttpContext.Current.Session[sessionKey];
}
Class level is more for when you have the same object in session, but a session can expand to multiple types. I don't know that I would worry about controlling the session; I would just let it do what it's done for a while, and simply provide a means to extract and save information in a more strongly typed fashion (at least for the consumer).
Yes, indexers wouldn't work on a static class; you could create it as an instance instead and expose that instance through a static factory:
public class SessionManager
{
    private static readonly object _lock = new object();
    private static SessionManager _instance = null;

    public static SessionManager Create()
    {
        if (_instance != null)
            return _instance;
        // use a lock so concurrent requests cannot create two instances
        lock (_lock)
        {
            if (_instance == null)
                _instance = new SessionManager();
        }
        return _instance;
    }

    // the indexer simply wraps the underlying ASP.NET session
    public object this[string key]
    {
        get { return HttpContext.Current.Session[key]; }
        set { HttpContext.Current.Session[key] = value; }
    }
}
And so this is the static factory implementation, but it also maintains a single point of contact via a static reference to the SessionManager class internally. Each member of SessionManager could wrap the existing ASP.NET session, or use your own internal storage.
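Illustrative usage of that manager might then look like this (the "cart" key and the ShoppingCart type are just examples):

SessionManager.Create()["cart"] = currentCart;              // store
var cart = SessionManager.Create()["cart"] as ShoppingCart; // retrieve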
I posted a solution on the Stack Overflow question "Is it a good idea to create an enum for the key names of session values?"
I think it is really slick and contains very little code to make it happen. It needs .NET 4.5 to be the slickest, but is still possible with older versions.
It allows:
int myInt = SessionVars.MyInt;
SessionVars.MyInt = 3;
to work exactly like:
int myInt = (int)Session["MyInt"];
Session["MyInt"] = 3;
