Best way to persist data locally for a user on a webpage? - asp.net

I have a search box on the page (webservice-fed results) and I'd like to save the search TERMS for the user in a UL/LI list on the page, so the next time they come back to the page the terms are still there... but if they clear their cache, the list gets reset.
What's the best way to go about that? I can persist between postbacks easily, but this is a new one for me.
Thanks,
Steve

Probably use a cookie, but remember you're limited to 4 KB of data.
Otherwise, store the data in a database and save that record's ID to a cookie. That way you can load the data from the database based on the ID in the cookie, then just flush any entries in the DB older than, say, 30 days.
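A minimal sketch of that database-backed approach (Web Forms code-behind; the SearchHistory table, its columns, and the "Default" connection-string name are placeholders of mine, not anything from the question):
// Assumes: using System; using System.Web; using System.Data.SqlClient; using System.Configuration;
// The cookie stores only an opaque key; the terms themselves live server-side.
private string GetOrCreateHistoryKey()
{
    HttpCookie cookie = Request.Cookies["searchHistoryKey"];
    if (cookie != null && !String.IsNullOrEmpty(cookie.Value))
        return cookie.Value;

    string key = Guid.NewGuid().ToString("N");
    HttpCookie newCookie = new HttpCookie("searchHistoryKey", key);
    newCookie.Expires = DateTime.Now.AddDays(30);
    Response.Cookies.Add(newCookie);
    return key;
}

private void SaveTerm(string historyKey, string term)
{
    string cs = ConfigurationManager.ConnectionStrings["Default"].ConnectionString;
    using (SqlConnection conn = new SqlConnection(cs))
    using (SqlCommand cmd = new SqlCommand(
        "INSERT INTO SearchHistory (HistoryKey, Term, CreatedOn) VALUES (@key, @term, GETDATE())", conn))
    {
        cmd.Parameters.AddWithValue("@key", historyKey);
        cmd.Parameters.AddWithValue("@term", term);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}
A scheduled job can then delete rows whose CreatedOn is older than 30 days, which keeps the table from growing without bound.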

Due to lack of sleep, I have given you an answer in PHP. Sorry about that; I'll leave it because the information is still correct, just the syntax will be slightly different in ASP.NET.
You have two options for data persistence in PHP: cookies and sessions.
Sessions are server-side, and last as long as the browser window stays open.
Cookies are client-side, and last until the user clears their cache.
So it sounds like you want the cookie option. In your search query processor, add the line:
setcookie('search_' . time(), $_POST['search_query'], (time() + 10368000));
This will create a cookie on the client machine with the name search_xxxx, where xxxx is a timestamp (each cookie has to have a unique name, otherwise they will overwrite each other).
The weird-looking calculation at the end is an expiry time, which is set to 120 days in the future.
Then in the PHP document that displays your search page, you need to spit out all these cookie values:
foreach ($_COOKIE as $k => $v) {
    if (substr($k, 0, 7) == 'search_') echo($v . '<br />');
}
This will spit out each of the search terms found on the client's machine. The if statement makes sure it only displays search-term cookies and no others.

Use a cookie. Assuming you start with a List<string> of search terms called terms, do:
var sb = new StringBuilder();
foreach (var t in terms) sb.Append(t).Append(";");
var c = new HttpCookie("terms");
c.Value = sb.ToString().TrimEnd(';');
c.Expires = DateTime.Now.AddDays(30);
Response.Cookies.Add(c);
Then when you need to access those terms again (to databind to a Repeater, or process in some other way for display on your page):
if (Request.Cookies["terms"] != null) {
    var terms = new List<string>();
    foreach (var t in Request.Cookies["terms"].Value.Split(';')) terms.Add(t);
}
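One caveat with packing the terms behind a ';' delimiter: a term that itself contains a semicolon will split incorrectly on the way back out. A small sketch of a fix, using System.Web's HttpUtility to encode each term when writing and decode when reading:
// when writing the cookie:
foreach (var t in terms) sb.Append(HttpUtility.UrlEncode(t)).Append(";");
// when reading it back:
foreach (var t in Request.Cookies["terms"].Value.Split(';')) terms.Add(HttpUtility.UrlDecode(t));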

A cookie is probably your solution for today, but HTML5 localStorage will eventually be the better bet. It's only supported by modern browser versions right now, so whether it's viable depends on your users.

Related

Paypal Processing - Need to grab TransactionId, CorrelationId and TimeStamp

Current Project:
ASP.NET 4.5.2
MVC 5
PayPal API
I am using this example to build myself a PayPal transaction (and yes, my code is virtually identical), as I do not know of any other method that will return the three values in the title.
My main problem is that the example I am utilizing is much more concise and compact than the one I used for a much older Web Forms application, and as such, I am unsure as to where or even how to grab the three values I need.
My initial thought was to do so right after the ACK, and indeed I was able to obtain the CorrelationId as well as the TimeStamp. But because this was prior to the user being carted off to PayPal's site (sandbox in this case -- see the return new PayPalRedirect contained within the if), the TransactionId was blank. In this example, PayPal explicitly redirects the user to a Success page without returning to the Action that sent the user to PayPal in the first place, and I am not seeing any GET values in the URL aside from the Token and the PayerId, much less ones that could provide me with the TransactionId.
Suggestions?
I have also looked at the following examples:
For ASP.NET Core: I was unsure how to adapt it to my current project, particularly due to appsettings.json, but it looked quite well done. I really liked how the values were rolled up in lists.
For MVC 4: I couldn't find where ACK was being used to determine success or successwithwarning, so I couldn't hook into that.
I have also found the PayPal content to be like trying to drink from a fire hose at full blast: not only was the content hopelessly outdated (Web Forms code, FTW!), but there were also so many different examples that it would have taken me days to determine which one was most appropriate to use.
Any assistance would be greatly appreciated.
Edit: my initial attempt at modifying the linked code has this portion:
values = Submit(values);
var ack = values["ACK"].ToLower();
if (ack == "success" || ack == "successwithwarning") {
    using (_db = new ApplicationDbContext()) {
        var updateOrder = await _db.Orders.FirstOrDefaultAsync(x => x.OrderId == order.OrderId);
        if (updateOrder != null) {
            updateOrder.OrderProcessed = false;
            updateOrder.PayPalCorrelationId = values["CORRELATIONID"];
            updateOrder.PayPalTransactionId = values["TRANSACTIONID"];
            updateOrder.PayPalTimeStamp = values["TIMESTAMP"];
            updateOrder.IPAddress = HttpContext.Current.Request.UserHostAddress;
            _db.Entry(updateOrder).State = EntityState.Modified;
            await _db.SaveChangesAsync();
        }
    }
    return new PayPalRedirect {
        Token = values["TOKEN"],
        Url = $"https://{PayPalSettings.CgiDomain}/cgi-bin/webscr?cmd=_express-checkout&token={values["TOKEN"]}"
    };
}
Everything within and including the using() is my added content. As I mentioned, the CorrelationId and the TimeStamp come through just fine, but I have yet to successfully obtain the TransactionId.
Edit 2:
More problems -- the transactions that are "successful" through the sandbox site (the ReturnUrl is getting called) aren't reflected properly in my Facilitator and Buyer accounts, even when I pay straight from the Buyer's PayPal account (not using the credit card). I know I am supposed to see transactions in the Buyer's account, either through the overall Dev account (Accounts -> Profile -> balance, or Accounts -> Notifications) or through the Buyer's account in the sandbox front end. And yet: multiple transactions return me to the ReturnUrl path, while no transactions appear in either.
Edit 3:
Okay, this is really, really weird. I have gone over all settings with a fine-toothed comb, and intentionally introduced errors to see where things should crap out. It turns out that the entire process goes swimmingly - except nothing shows up in my notifications and no amounts get moved between my different accounts (Facilitator and Buyer). It’s like all my transactions are going into /dev/null, yet the process is successful.
Edit 4: A hint!
In the sandbox, where the Buyer accepts the transaction, there is a small note, "You will be able to review the transaction before completing it" or something like that -- suggesting that an additional page is not coming up and that the user is being unceremoniously dumped back to the success page. Why the success page? No clue. But it's happening.
It sounds like you are only doing the first part of the process.
Express Checkout consists of 3 API calls:
SetExpressCheckout
GetExpressCheckoutDetails
DoExpressCheckoutPayment
SEC generates a token, and then you redirect to PayPal, where the user signs in and reviews the transaction before agreeing to pay.
They are then sent to the ReturnURL included in your SEC request, and this is where you'll call GECD in order to obtain all the buyer details that are now available since they signed in.
Using that data you can complete the final DECP request, which is what finalizes the procedure. No money is actually processed until this final call is completed successfully.
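As a rough sketch of the missing two calls, reusing the Submit() NVP helper and dictionary style from the question's own snippet (the helper's exact signature, and variable names like token and payerId, are assumptions):
// In the action behind your ReturnUrl. PayPal appends the token and
// PayerID to the query string when it sends the buyer back.
string token = Request.QueryString["token"];
string payerId = Request.QueryString["PayerID"];

var details = Submit(new Dictionary<string, string> {
    { "METHOD", "GetExpressCheckoutDetails" },
    { "TOKEN", token }
});

if (details["ACK"].ToLower() == "success") {
    var payment = Submit(new Dictionary<string, string> {
        { "METHOD", "DoExpressCheckoutPayment" },
        { "TOKEN", token },
        { "PAYERID", payerId },
        { "PAYMENTREQUEST_0_AMT", details["PAYMENTREQUEST_0_AMT"] },
        { "PAYMENTREQUEST_0_CURRENCYCODE", details["PAYMENTREQUEST_0_CURRENCYCODE"] },
        { "PAYMENTREQUEST_0_PAYMENTACTION", "Sale" }
    });
    var ack = payment["ACK"].ToLower();
    if (ack == "success" || ack == "successwithwarning") {
        // A transaction id only exists once DoExpressCheckoutPayment succeeds.
        string transactionId = payment["PAYMENTINFO_0_TRANSACTIONID"];
    }
}
This also explains Edits 2-4: since DoExpressCheckoutPayment was never called, no money moved between the sandbox Facilitator and Buyer accounts, and since SetExpressCheckout's response only carries TOKEN, CORRELATIONID and TIMESTAMP, the TRANSACTIONID field was blank.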

asp.net multiple users?

I have an ASP.NET C# web application where a user can log in and see his schedule for the last week, which is stored in a remote database. Logging into the site consists of just a SQL check to see if the username and password match the uname & pass records in the database. Once logged in, they can manipulate the time entries for the schedule, etc. While debugging, I logged in as two different users: User B had all of User A's stuff on his form. I wrote the program like I was writing a regular C# app, and didn't really give any thought to multiple people using the website at the same time. I guess I thought that instance handling would be automatic? I've only been working with ASP.NET for a week or so, and don't have much support.
My main question is, if I have multiple users on my site at the same time, how do I keep their sessions separate?
update - after adding Session variables
This is my SQL statement to get user information, now using session variables:
string sqlquery = "SELECT FirstName, LastName, OperatorID FROM operators WHERE EmpID = @empID";
using (MySqlConnection conn = new MySqlConnection(ConfigurationManager.ConnectionStrings["EobrConnectionString"].ConnectionString))
{
    conn.Open();
    using (MySqlCommand comm = new MySqlCommand(sqlquery, conn))
    {
        // 'sql' holds the employee id entered at login; parameterized to avoid SQL injection
        comm.Parameters.AddWithValue("@empID", sql);
        using (MySqlDataAdapter adapter = new MySqlDataAdapter(comm))
        {
            DataTable dt = new DataTable();
            adapter.Fill(dt);
            foreach (DataRow dr in dt.Rows)
            {
                // Columns are zero-indexed: 0 = FirstName, 1 = LastName, 2 = OperatorID
                Session["Fname"] = dr[0].ToString();
                Session["Lname"] = dr[1].ToString();
                Session["opID"] = dr[2].ToString();
            }
        }
    }
}
On the main menu page I have this:
protected void Page_Load(object sender, EventArgs e)
{
    firstname = Session["Fname"].ToString();
    lastname = Session["Lname"].ToString();
    lblwelcome.Text = "Welcome, " + firstname + " " + lastname + ", make your selection below.";
}
When User A logs in, they see their name, "Frank Drebbin". When User B logs in, they see their name, "Jake Gaston". But now, if I reload the first user's page, they see the name "Jake Gaston".
You need to use separate browsers. You're using sessions to identify users, and sessions are matched to a cookie in your browser. From your comments above, here's what happens:
log in as steve -> session created -> steve stored as username -> cookie returned to browser
in a new window, log in as frank -> session already exists, username renamed to frank
in 1st window, refresh -> session username is now frank -> data returned based on that username
New IE windows all share the same context: if one window gets a cookie, they all do. There are three ways around this:
Use physically different machines
Use two browsers, e.g. IE and Firefox
In IE, press Alt to get the menu, then click New Session on the File menu. This makes a physically separate window that won't share cookies (although there's nothing to indicate this to you)
Having done the above, try logging in as Steve and Frank again, and you should see they don't interfere with each other.
I would guess the problem is that, while the sessions are kept separate (you are storing things in Session, right?), the queries that load data from the database are not taking the username into account.
Have a look at the SQL query that loads tasks and see if you include the username (or a user ID) in that query. Feel free to post the query if you need help looking at it.
So to directly answer your main question, if you store things in Session, they will be properly isolated from other sessions. However, wrong data in, wrong data out.
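For example, the schedule query should filter on the operator id stashed in Session at login (the table and column names here are illustrative, not from the question):
// Only return rows belonging to the logged-in user.
string opID = Session["opID"].ToString();
string query = "SELECT EntryDate, Hours FROM schedule WHERE OperatorID = @opID";
using (MySqlConnection conn = new MySqlConnection(ConfigurationManager.ConnectionStrings["EobrConnectionString"].ConnectionString))
using (MySqlCommand comm = new MySqlCommand(query, conn))
{
    comm.Parameters.AddWithValue("@opID", opID);
    conn.Open();
    // ...fill a DataTable and bind it, as in the login code above.
}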
EDIT:
Use another browser to test this; you can't use the same browser in another tab or in another window, as it will be the same session.
If you are using Session to save each user's data, then the users are kept separate; just make sure you store all per-user data in Session or in cookies.
NOTE:
You should also handle concurrency: http://msdn.microsoft.com/en-us/library/cs6hb8k4(v=vs.80).aspx
The solution is very simple: use different browsers simultaneously (IE, Firefox, Chrome, etc.).
You are not using Transact-SQL, so you don't need to worry about concurrency.
When you log in, the current browser creates a session, and if you use the same browser again it will still have that session.
If you use another browser, that browser gets its own session, with its own values.
e.g.
Mobile phones each have their own browser, so with 100 phones you could store 100 sessions for the same user, each holding different values depending on the data provided or the operations performed.
Hope this helps you understand.

Create a timed cache in Drupal

I am looking for more detailed information on how I can get the following caching behavior in Drupal 7.
I want a block that renders information retrieved from an external service. As the block is rendered for many users, I do not want to continually request data from that service, but instead cache the result. However, this data changes relatively frequently, so I'd like to retrieve the latest data every 5 or 10 minutes and then cache it again.
Does anyone know how to achieve such caching behavior without writing too much of the code oneself? I also haven't found much in the way of good documentation on how to use caching in Drupal 7, so any pointers on that are appreciated as well.
Keep in mind that cache_get() does not actually check whether an item is expired, so you need to use:
if (($cache = cache_get('your_cache_key')) && $cache->expire >= REQUEST_TIME) {
    return $cache->data;
}
Also make sure to use the REQUEST_TIME constant rather than time() in D7.
The functions cache_set() and cache_get() are what you are looking for. cache_set() has an expire argument.
You can use them basically like this:
<?php
if ($cached_data = cache_get('your_cache_key')) {
    // Return from cache.
    return $cached_data->data;
}

// No or outdated cache entry; refresh the data.
$data = _your_module_get_data_from_external_service();

// Save data in cache with a 5-minute expiration time.
cache_set('your_cache_key', $data, 'cache', time() + 60 * 5);
return $data;
?>
Note: You can also use a different cache bin (see documentation links) but you need to create a corresponding cache table yourself as part of your schema.
I think this should be $cache->expire, not $cache->expires. I didn't have luck with this example when setting REQUEST_TIME + 300 in cache_set(), since $cache->expires will always be less than REQUEST_TIME. This works for me:
if (($cache = cache_get('your_cache_key', 'cache')) && (REQUEST_TIME < $cache->expire)) {
    return $cache->data;
}

WordPress Write Cache Issue with Multiple Sessions

I'm working on a content-dripper custom plugin in WordPress that my client asked me to build. He wants it to catch a page-view event and, if it's the right time of day (24 hours since the last post), pull from a resource file and output another post. He needed it to also raise a flag and prevent other sessions from firing that same snippet of code: raise some kind of flag saying, "I'm posting that post, go away other process," then make the post and release the flag again.
However, the strangest thing occurs when the site is placed under load with multiple sessions hitting it with page views. Instead of firing one post, it randomly does 1, 2, or 3 extra posts, with each one thinking it was the right time to post because it was 24 hours past the time of the last post. Because it's somewhat random, I'm guessing that the problem is some kind of write caching where the other sessions don't see the raised flag until a few microseconds pass.
The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal.
But what it's doing is letting those other sessions in.
To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that function though is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even still, even if I find an event to hook that only runs once on a page view, I still have multiple sessions to contend with (different users who may view the page at the same time) and I want only one given session to trigger the content post when the post is due on the schedule.
I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and to figure out another way to raise a flag that all sessions would see. I mean, I could use the shared memory API in PHP, but many hosting plans have that disabled. Can't use a cookie or session var because that's only one single session. About the only thing that might work across hosting plans would be to drop a file as a flag, instead. If the file is present, then one session has the flag. If the file is not present, then other sessions can attempt to get the flag. Sure, I could use the file route, but it's kind of immature in my opinion and I was wondering if there's something in WordPress I could do.
The key may be to create a semaphore record in the database for the "drip" event.
Warning - consider the following pseudocode - I'm not looking up the functions.
When the post is queried, use a SQL statement like
$ts = get_time_now(); // or whatever the function is
$sid = session_id();

INSERT INTO table (postcategory, timestamp, sessionid)
SELECT "$category", $ts, "$sid"
FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM table
                  WHERE postcategory = "$category"
                    AND timestamp > $ts - 86400) -- i.e. no drip in the last 24 hours
Database integrity will make this atomic, so only one record can be inserted, and the insertion will only take place if the timespan has been exceeded.
Then immediately check to see if the current session_id() and timestamp are yours. If they are, drip.
SELECT sessionid FROM table
WHERE postcategory = "$postcategory"
AND timestamp = $ts
AND sessionid = "$sid"
The problem occurs with page requests even from the same session (same visitor), but it can also occur with page requests from separate visitors. It works like this:
If you are doing content dripping, then a page request is probably what you intercept with add_action('wp','myPageRequest'). From there, if a scheduled post is due, then you create the new post.
The post takes a little bit of time to write to the database. In that time, a query via get_posts() may not see the new record yet, and may actually trigger your piece of code to create a new post when one has already been placed.
The fix, forcing WordPress to flush the write cache, appears to be this:
try {
    // Grab any one recent post, just to get a post ID to write against.
    $asPosts = array();
    $asPosts = @wp_get_recent_posts(1);
    foreach ($asPosts as $asPost) { break; }
    // Touch a hidden custom field to force a write to the database.
    @delete_post_meta($asPost['ID'], '_thwart');
    @add_post_meta($asPost['ID'], '_thwart', '' . date('Y-m-d H:i:s'));
} catch (Exception $e) {}

$asPosts = array();
$asPosts = @wp_get_recent_posts(1);
foreach ($asPosts as $asPost) { break; }
$sLastPostDate = '';
@$sLastPostDate = $asPost['post_date'];
$sLastPostDate = substr($sLastPostDate, 0, strpos($sLastPostDate, ' '));
$sNow = date('Y-m-d H:i:s');
$sNow = substr($sNow, 0, strpos($sNow, ' '));
if ($sLastPostDate != $sNow) {
    // No post today, so go ahead and post your new blog post.
    // Place that code here.
}
The first thing we do is get the most recent post, but we don't really care whether it's actually the most recent one or not. All we're getting it for is a single post ID, and then we add a hidden custom field (thus the underscore it begins with) called
_thwart
...as in, thwart the write cache by posting some data to the database that's not too CPU heavy.
Once that is in place, we use wp_get_recent_posts(1) yet again so that we can see whether the most recent post is from today. If not, then we are clear to drip some content in. (Or, if you want to drip only every 72 hours or so, you can adjust that here.)

Programmatically visit (all) ASP.NET page(s) in a website?

In the security model for our ASP.NET website (.NET 3.5) we store the page name:
page.GetType().Name
as the primary key in a database table, to be able to look up whether a user has access to a certain page. The first time a page is visited, this record is created automatically in the database.
We have exported these database statements to insert scripts, but each time a new page gets created we have to update the scripts. Not a huge issue, but I would like to find an automated way to do this.
I created an attribute that I tagged a few pages with, then wrote a small process to get all the types that carry this attribute, create an instance of each through reflection, and insert the record using the same code as for the page records mentioned above:
IEnumerable<Type> viewsecurityPages = Assembly.GetExecutingAssembly().GetTypes().Where(t => t.IsDefined(typeof(ViewSecurityAttribute), false));
foreach (Type t in viewsecurityPages)
{
    object obj = Activator.CreateInstance(t, false);
    //clip..(This code just checks if the record already exists in the DB)
    if (feature == null)
    {
        Attribute attb = Attribute.GetCustomAttribute(t, typeof(ViewSecurityAttribute));
        if (attb != null)
        {
            CreateSecurableFeatureForPage((Page)obj, uow, attb.ToString());
        }
    }
}
The issue is that when the page goes through the actual page lifecycle, page.GetType().Name is something like this:
search_accounts_aspx
but when I use the Activator method above it returns:
Accounts
So the records don't match in the security table. Is there any way to programmatically "visit" a webpage so that it goes through the actual page lifecycle and I get back the correct value from the Name property?
Any help/reference will be greatly appreciated.
Interesting problem...
Of course there's a (too obvious?) way to programmatically visit the page: use System.Net.HttpWebRequest. That requires the URI, though, and not just a handle to the object; this is a "how do we get there from here?" problem.
My suggestion would be to simply create another attribute (or reuse that same one) which stores the identifier you need. Then it will be the same either way you access it, right?
Alternatively... why not just use a 3rd party web spider/crawler to crawl your site and hit all the pages? There are several free options. Or am I missing something?
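For completeness, a minimal sketch of the HttpWebRequest route (the URLs are placeholders; you'd enumerate your site's .aspx paths from disk or a site map):
// Assumes: using System; using System.Net;
// Requesting each page over HTTP runs the full page lifecycle,
// so the existing first-visit code creates the security record itself.
string[] urls = {
    "http://localhost/search/accounts.aspx",
    "http://localhost/reports/summary.aspx"
};
foreach (string url in urls)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine("{0} -> {1}", url, response.StatusCode);
    }
}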
