How to keep an AutoIncrement/Identity value without a database? - asp.net

I've received a rush project (asp.net c# framework v2), of course due on Monday morning. It's a very simple project -- add a "Request Quote" page to an existing site. Basically, collect some info and then email someone at the company the contents of the form, and show the user a "Thank You" page. Simple as pie...until I just read the last requirement.
Every form submission is supposed to have a "Confirmation Number", starting with 10000, and each one successively increments the number by one.
Sounds easy, right? Well, I don't have a database available in this site. No idea why; I don't know anything else about the site other than the info I need to fulfill the requirements.
So, with this in mind, and realizing that this is less than optimal, I guess my only solution is to make a text file (yuck), read it in, get the last number used, add one, and write that back to the text file. That of course leaves a lot to be desired with respect to "locking" the file so I don't get duplicate numbers.
Anyone have any suggestions for me? I'm open to anything at this point...

First, I'm assuming each process has some sort of unique identifier available. This could be a process ID, or anything else that is guaranteed to be unique without race conditions. The protocol then goes:
- Write this value to a file named after the identifier itself, so that you don't have a race creating that file.
- Move (rename) that file to a file called "lock" if no such file exists.
- Check that the move succeeded by reading the "lock" file: if it contains your unique identifier, you hold the lock. This rules out a race between two processes moving their files to "lock" at the same time.
- Read the value from the confirmation-number file, then write back the incremented number and your unique ID.
- Check once more that you still hold the lock and that the confirmation-number file has your ID, then remove the lock file.
Should this fail, just sit around waiting for the lock file to disappear, then try again.
This should allow you to write to the file in a race-free manner, and ensures that the sequence number always increases by one.
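In .NET specifically, you can get the same guarantee with less ceremony by opening the counter file itself with an exclusive lock, so the OS does the mutual exclusion for you. A minimal sketch of that variant, framework v2 friendly (the path and retry counts are just placeholders):

using System;
using System.IO;
using System.Text;
using System.Threading;

public static class ConfirmationNumber
{
    // Reads the last number from the file, increments it, writes it back.
    // FileShare.None makes the open exclusive, so a concurrent request
    // gets an IOException and retries instead of reading a stale value.
    public static int Next(string path)
    {
        for (int attempt = 0; attempt < 50; attempt++)
        {
            try
            {
                using (FileStream fs = new FileStream(path,
                    FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None))
                {
                    string text = new StreamReader(fs, Encoding.ASCII).ReadToEnd().Trim();
                    int next = (text.Length == 0 ? 9999 : int.Parse(text)) + 1;

                    fs.SetLength(0);               // truncate the old value
                    fs.Seek(0, SeekOrigin.Begin);
                    StreamWriter writer = new StreamWriter(fs, Encoding.ASCII);
                    writer.Write(next);
                    writer.Flush();
                    return next;                   // first caller gets 10000
                }
            }
            catch (IOException)
            {
                Thread.Sleep(100);                 // someone else holds the file
            }
        }
        throw new IOException("Could not get exclusive access to the counter file.");
    }
}

From the page's submit handler you'd call something like ConfirmationNumber.Next(Server.MapPath("~/App_Data/confirmation.txt")).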

You could keep track of the confirmation number in the Windows Registry, but given the option it's probably easier to use a simple text file as you suggest. Make sure you clean up (i.e. Dispose) the file handles each time to release any file locks.
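If you do go the registry route, a rough sketch might look like the following. The key path is hypothetical and must already exist, the worker process account needs write access to it, and the named Mutex guards against two requests incrementing at once, since GetValue and SetValue are not atomic as a pair:

using Microsoft.Win32;
using System.Threading;

public static class RegistryCounter
{
    public static int Next()
    {
        // Hypothetical key; create it beforehand and grant the app
        // pool identity write permission on it.
        const string keyPath = @"HKEY_LOCAL_MACHINE\SOFTWARE\MySite";
        using (Mutex mutex = new Mutex(false, "MySiteConfirmationNumber"))
        {
            mutex.WaitOne();
            try
            {
                int next = (int)Registry.GetValue(keyPath, "ConfirmationNumber", 9999) + 1;
                Registry.SetValue(keyPath, "ConfirmationNumber", next);
                return next;
            }
            finally
            {
                mutex.ReleaseMutex();
            }
        }
    }
}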

I would consider XML a good solution.
This post might make XML even easier for you.

Related

Can you get a log file of 'reads' on specific RECID(Tablename) in Progress-4GL/Openedge at RunTime without access to Source Code?

I want to know which tables are being read by a query.
for each Customer where CustomerID = 12345.
Eventually this customer will be found in the following example, but Progress must 'read' many tables before getting to customer 12345.
How do I know exactly which tables are read (By CustomerID), prior to getting to customer 12345?
*NOTE: I do not have access to modify the code being run for this selection. Ideally I would run a separate set of code that is executed at the same time as the customer query above to track the reads.
EDIT: More clearly - Can you track reads from a given program (.p) OR ProcessID and output either a RECID or the PrimaryKey to a file?
I understand the information is being read off the Disk and probably stored in a database buffer. So how would I get at the information in the database buffer?
You seem to be mixing up a few different things.
In a situation like your example, where you FIND a specific record in one, and only one, table, there is just a single record read. Progress will find that record by first scanning a relevant index. That might be 2 or 3 "logical reads" of the b-tree to get to the proper node. The record block and index blocks may, or may not, be read from disk - that depends on what has happened previously.
There are "Virtual System Tables" available that can tell you how many READ operations take place against a particular table or index. But they do not trace the specific ROWID or other identifying data. _TableStat and _IndexStat are aggregates for all users on the system, _UserTableStat and _UserIndexStat are specific to a particular user's activity. You do need to set the -tablerangesize and -indexrangesize parameters adequately to take advantage of these.
If you have enabled the table and index statistics then you can use a tool like ProTop - http://protop.wss.com to get insight into this activity. Or you can write your own code.
OpenEdge Auditing does not track reads. That would be prohibitively expensive.
It's probably not really a good idea but, in theory, you could write FIND triggers for the tables you are interested in. That doesn't require access to the application source but you would need a development license. It will probably kill performance to do this though - so unless this is a non-production test environment that you just want to fiddle with I wouldn't really do that.
You mention wanting to know how you got to that point. That sounds more like you might need to have a "4gl trace". One easy way to get the stack trace of a running process is to execute:
$DLC/bin/proGetStack PID (UNIX)
or
%DLC%\bin\proGetStack PID (Windows)
This command will generate a "protrace.pid" file containing a 4gl stack trace and other interesting information.
There are also more complicated ways to get that info like using PROMON and the "client statement cache" or setting various log entry types at session startup. But proGetStack is pretty convenient and requires no code or scripting changes.
Some great options from Tom above. And all of them may be relevant to you. The one option he only skirts around is logging. I feel obliged to expand on this because I'm giving a talk on it in a couple of weeks!
Assuming you are running a modern version of Progress, or even 10.2B08, you have client logging available to you. Start your session with these additional options:
-clientlog "\somefolder\somefile.txt"
-logentrytypes "QryInfo:3"
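For example, a full client startup might look something like this (prowin32 and the paths are placeholders for however your sessions normally start):

prowin32 -p start.p -clientlog "c:\temp\session.log" -logentrytypes "QryInfo:3"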
This will log all the info of all the queries in your session to the file specified above. If you navigate to the point in the system where you want to analyse your query, then empty and save the logfile, you can run the offending query and see all the detail you need.
The output tells you all sorts of useful info, including the number of reads on each table, compared with the number returned to the user. You also get the index selected.
Using Tom's advice and/or this will get you what you need.

xProc - Pausing a pipeline and continue it when certain event occurs

I'm fairly new to xProc and xPath, but I've been asked to solve the following problem:
Step 2 receives data via the secondary port from step 1. Step 2 contains a p:for-each, which saves a document into a folder for each element that passes through it. (Part A)
These documents (let's say I receive 6 documents from the for-each) lie in the same directory and get filtered by p:directory-list, and the results are eventually stored in one single document containing the whole path of every document the for-each created. (Part B)
So far, so good.
The problem is that Part A seems to be too slow. Part B already tries to read the data Part A stores while the directory is still empty. Meaning, I'm having a performance / synchronization problem.
And now comes the question:
Is it possible to let the pipeline wait and to let it continue as soon as a certain event occurs?
That's what I'm imagining:
Part B waits as long as necessary until the directory in which Part A stores the data is no longer empty. I read something about dbxml:breakpoint, but unfortunately I couldn't find more information than the name and a short description of what it seems to do:
Set a breakpoint, optionally based upon a condition, that will cause pipeline operation to pause at the breakpoint, possibly requiring user intervention to continue and/or issuing a message.
It would be awesome if you know more about it and could give an example of how it's used. It would also help if you know a workaround or another way to solve this problem.
UPDATE:
After searching Google for half an eternity, I found SMIL, whose timesheets seem to do the trick. Has anyone experience with throwing XML / XProc and SMIL together?
Back towards the end of 2009 I proposed the concept of 'Orchestrating XProc with SMIL' http://broadcast.oreilly.com/2009/09/xproc-and-smil-orchestrating-p.html in a blog post on the O'Reilly Network.
However, I'm not sure that this (XProc + Time) is the solution to your problem. It's not entirely clear to me, from your description, what's happening. Are you implying that you're trying to write something to disk and then read it in a subsequent step? You need to keep stuff in the pipeline in order to ensure you can connect outputs to subsequent inputs.

Using VB.NET to Detect Changes in a Web Page

Again I come to you guys for your expertise and advice on an issue that I am having. I was wondering if any of you would know how to detect if a web page has been modified using VB.NET. I need to be able to set up a task which periodically (like once a week) scans the user-inputted web pages, and if the web page content has changed, I need to fire off an email to an individual that it has changed (not the exact location on the page itself). I'll be storing the HTTP status and of course the page data itself, as well as the date when it was last modified. Of course this needs to be very fault tolerant since it could be another week before the check runs again. Any help would be great. Thank you.
EDIT
New twist on this question sorry. I had more time to think about what we wanted. So... Detecting ANY change on a web page would be kind of silly since time dependent elements of the page would change every so often. Instead, what I would like to do is be able to detect the documents in the page. For instance if there are excel, word docs, or pdfs that get changed on that page. So, I'd run the hash on these documents then on some sort of schedule do a check to see if new documents have been added or if the old documents have been modified. Any suggestions on how to detect the documents embedded on the page and running the hash? Thanks again!
As I mentioned in a comment, this sort of job is what checksums (also known as hash functions) were designed for.
Your code will look something like this:
- for each webpage of interest
    - pull webpage
    - calculate checksum of contents
    - is current checksum different to last checksum?
        - if yes, send email
    - store new checksum and other appropriate data
The .NET framework has a number of checksum algorithms available. The two most popular are MD5 and SHA1.
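A minimal sketch of the core of that loop (shown in C# for brevity; the VB.NET translation is mechanical, and scheduling and email are left out):

using System;
using System.Net;
using System.Security.Cryptography;

public static class PageWatcher
{
    // Downloads the page and returns a hex fingerprint of its bytes.
    // Compare this against the checksum stored from the previous run;
    // if they differ, the page changed and you send the email.
    public static string Checksum(string url)
    {
        using (WebClient client = new WebClient())
        {
            byte[] data = client.DownloadData(url);
            using (MD5 md5 = MD5.Create())
            {
                return BitConverter.ToString(md5.ComputeHash(data));
            }
        }
    }
}

The same Checksum call works for the embedded documents in your edit: download each linked .xls/.doc/.pdf and hash its bytes the same way.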
In addition to the checksum option, there are also various diff functions that achieve this and provide much more information than changed=true/false. This question has more info:
How to tell when a web page has changed by x% in VB.net?

I have multiple users, can I lock the web page so that only one user at a time can update a record?

Can anyone help or provide me with some suggestions for the query below?
I have a web form (Minutes of Meeting) and 8 users that need to access this web page and update their area. A user may have more than one area to update, and essentially I would like to somehow lock down the web page, if possible, while a user is using it, so that no other user can update it till Joe Bloggs has finished with it.
I have an Active Directory security group set up to restrict the site to that group of users only, but I need to think of a solution to the above.
Is there a way I can do this via a web control or via SQL?
There must be better ways to do it. However, is it possible for you to introduce a SQL table column like "UpdateInProgress" (bit)? Any update process checks that column: if it is 0, the process sets it to 1, and after saving its changes sets it back to 0 so that the form is available for others to update. If the update process sees 1, it can't update the web form because an update is in progress.
I also suggest introducing another column named "UpdateInProgressBy" to track who has opened it for editing.
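One subtlety: the check and the set have to happen in a single statement, otherwise two users can both read 0 and both claim the form. A conditional UPDATE does it atomically. A sketch using the hypothetical columns above (the table name is assumed too):

using System.Data.SqlClient;

public static class FormLock
{
    // Returns true if this user claimed the form; false if someone
    // else already has it (the WHERE clause filtered the row out).
    public static bool TryClaim(SqlConnection conn, int formId, string userName)
    {
        const string sql =
            "UPDATE MeetingMinutes " +
            "SET UpdateInProgress = 1, UpdateInProgressBy = @user " +
            "WHERE FormId = @id AND UpdateInProgress = 0";
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@user", userName);
            cmd.Parameters.AddWithValue("@id", formId);
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}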
First of all, note that a long time passes from the moment a user reads the data, gets it on a page, changes it, and then tries to write it back. So we are not talking about the SQL lock command, nor any other lock that lasts milliseconds and helps synchronize threads; here we must synchronize people and what they write.
There is also a problem if the user leaves the page for any reason, which can leave the data locked forever.
This problem can be solved with two approaches.
The easy one: when a user tries to save data, check whether the same data has been changed in the meantime, and warn them, show a merge dialog, merge programmatically, or something similar - whatever you want.
The difficult way is to constantly monitor the page that reads and changes the data, keep the monitoring results in a common table in the database, and, while one user is on the page, give the rest of the users a warning and read-only data until that user leaves.
This monitoring must be done with JavaScript and must detect even when a user abandons the page.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
for more information check this link:
http://msdn.microsoft.com/en-us/library/ms173763.aspx
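From ADO.NET that looks something like the sketch below (connectionString is assumed). Bear in mind that a serializable transaction held open for a whole human edit session will block everyone else, so it really only fits the save step itself:

using System.Data;
using System.Data.SqlClient;

public static class SerializableSave
{
    public static void Save(string connectionString)
    {
        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction(IsolationLevel.Serializable))
            {
                // ... read and update the record here, passing tx to each SqlCommand ...
                tx.Commit();
            }
        }
    }
}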

Preventing Users from Working on the Same Row

I have a web application at work that is similar to a ticket working system. Some users enter new issues. Other workers choose and resolve issues. All of the data is maintained in MS SQL server 2005.
The users working to resolve issues go to a page where they can view open issues. Because up to twenty people can be looking at this page at the same time, one potential problem I had to address was what happens if someone picks an issue that someone else picked just after their page loaded.
To address this, I did two things. First, the gridview displaying the issues to select uses an AJAX timer to update every second. Once an issue has been selected, it disappears one second later at most. In case they select one within this second, they get a message asking them to choose another.
The problem is that the AJAX part of this is sending too many updates (this is what I am assuming) and it is affecting the performance of the page and database. In addition, the updates are not performing every second. I find the timer to be unreliable when working to trigger stored procedures.
There has to be a better way, but I can't seem to find one. Does anyone have experience with a situation like this or have suggestions to keep multiple users from selecting the same record to maintain? I really do not want to disable the AJAX part entirely because I feel the message alone would make the application frustrating to use.
Thanks,
Put a lock timestamp field on the row in the database. Write a stored proc that returns true or false depending on whether the expiration timestamp is older than a specific time. Set the sessions in your web app to expire in the same time, a minute or two. When a user selects a row, they hit the stored proc, which helps the app decide whether it should let the user modify it.
Hope that makes sense....
Two things can help mitigate your problem.
First, after-selection notification that the case has been taken is needed regardless of your ajax update time frame. Even checking every second doesn't mean two people cannot click the same case at what they perceive to be the same time. In such cases, one of the users needs to be notified that their selection is invalid even though it appeared valid when selected. This notification doesn't need to be elaborate; keeping a light, helpful tone can improve user perception even in the light of disappointment. And if you identify the user who selected that record already, that will not only help your users coordinate in future but also divert attention from your program to the user who snaked the juicy case. (indeed, management may like giving your users the occasional collision as it will motivate them to select cases faster)
Second, a small tweak to how you display your cases can reduce selection collisions. Adding a random element to display order and/or filtering out every other case on display will help your users select different cases naturally. Human pattern recognition and task selection isn't really random so small changes to presentation can equal big changes to selection behavior. Reductions in collision chance keeps your collision notifications rare (and thus less frustrating to your users). This is even better if your users can be separated into classifications that can help determine useful case ordering/filtering.
Okay, a third thing that will help you over time is if you keep a log of when collisions occur (with helpful meta data about the collision—like who was involved and selection timing). Armed with solid collision data, you can find what works and what doesn't. Over time, you can hone your application to your actual use cases as well as identify potential problems early. Nothing reassures your users more than being on top of a problem (and able to explain your plans to solve it) before they're even aware it exists.
With these mitigating patterns, you'll probably find you can safely reduce your ajax query timeframe without affecting user experience. And with useful logging, you'll have the assurance that any tweaks you put in place are actually working (or not—which is maybe even more useful to know).
I did something similar where once a user opened a ticket (row) it assigned that ticket to that user and set a value on that record, like an FK to that particular user, so if anyone else tried to open that ticket (row) it would let them know it had already been assigned to someone else.
If possible, limit the system so that users just get the next open issue off the work queue, as opposed to having them choose from all open issues.
If that isn't possible, I suppose you could check upon the choosing of an issue to see if it is still available. If it's not available, then make it disappear after the user clicks on it. This way you are only requesting when they actually click on something as opposed to constant polling of the data.
Have you tried increasing the time between refreshes? I would expect that once per 30 seconds would be sufficient. 40 requests/minute is a lot less load than 1200/minute. Your users may not even notice the difference.
If they do, how about providing a refresh button on the page so the users can manually refresh the list just prior to selecting an item to avoid the annoying message if they choose.
I'm failing to see the issue, especially since you mentioned you are already flagging tickets as in progress/being maintained and have a timestamp/version on the item.
Isn't the following enough?
1. User browses the tickets and sees a list of available tickets, i.e. this excludes ones that are in the db as in progress. If you want the users to also see tickets in progress, you indicate it clearly in the ticket status and disable the option to take it.
2. User either flags a ticket as in progress explicitly, or implicitly by opening the ticket (depends on the user experience / how it's presented to the users).
3. User explicitly moves the ticket to a different status, i.e. completed, invalid, awaiting feedback, etc.
When the items are retrieved at step 1, you include a timestamp/version. When step 2 happens, you use an optimistic concurrency approach to make sure that if two people try to take the ticket at the same time, only the first one will be successful.
What will happen is that for the second person, the update ... where ... timestamp = #timestamp will not find any records to update, and you will report back that the ticket was already taken.
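Concretely, the take-the-ticket update at step 2 could look like this sketch (Tickets and its columns are hypothetical; rowVersion is the value read along with the ticket at step 1):

using System.Data.SqlClient;

public static class TicketQueue
{
    // Optimistic take: the UPDATE matches only if the row still has the
    // version we read, so the second taker updates zero rows and loses.
    public static bool TryTake(SqlConnection conn, int ticketId, int userId, byte[] rowVersion)
    {
        const string sql =
            "UPDATE Tickets SET AssignedTo = @user, Status = 'InProgress' " +
            "WHERE TicketId = @id AND RowVersion = @version";
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            cmd.Parameters.AddWithValue("@user", userId);
            cmd.Parameters.AddWithValue("@id", ticketId);
            cmd.Parameters.AddWithValue("@version", rowVersion);
            return cmd.ExecuteNonQuery() == 1;
        }
    }
}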
If you want, you can build on top of the above to update the UI as tickets are grabbed. This could be by just doing a full refresh of the current page of tickets after x time (maybe alerting/prompting the user), or even by retrieving, with ajax, a list of changed tickets for the page being shown. You still have the earlier steps in place, as this modification is just a convenience for the users.
