I need to add language support to an existing classic ASP website.
The "best" solution I found is to wrap every piece of text in a function,
create a database table that stores the translations for every page, and use a Dictionary object to retrieve the right value.
Example:
<div>Welcome to xy website</div>
<button class="btn green">Login</button>
becomes
<div><%=TL("Welcome to xy website")%></div>
<button class="btn" ><%=TL("Login")%></button>
Then the TL function looks something like this:
Function TL(strInput)
    Dim strTargetLanguage, strPageURL, objDict, cmd, rst

    If strInput <> "" Then
        ' First check whether the visitor has chosen a language, else use the browser language
        If Request.Cookies("culture") = "" Then
            strTargetLanguage = LCase(Left(Request.ServerVariables("HTTP_ACCEPT_LANGUAGE"), 2))
        Else
            strTargetLanguage = LCase(Left(Request.Cookies("culture"), 2))
        End If

        ' If the user's language is not supported, fall back to English
        ' (strAcceptedLanguages, e.g. "en,it,de", is assumed to be defined in a shared include)
        If InStr(strAcceptedLanguages, strTargetLanguage) = 0 Then
            strTargetLanguage = "en"
        End If

        strPageURL = Request.ServerVariables("URL")
        Set objDict = Server.CreateObject("Scripting.Dictionary")

        ' cnn is assumed to be an open ADODB.Connection created elsewhere
        Set cmd = Server.CreateObject("ADODB.Command")
        Set cmd.ActiveConnection = cnn

        ' Stored procedure that loads every translation for this page in the target language
        cmd.CommandText = "spDictionaryRead"
        cmd.CommandType = 4 ' adCmdStoredProc
        cmd.Parameters("@LanguageID") = strTargetLanguage
        cmd.Parameters("@PageUrl") = strPageURL
        Set rst = cmd.Execute()

        While Not rst.EOF
            ' Use .Value explicitly: a Field object is not a usable Dictionary key
            objDict.Add rst("txt").Value, rst(strTargetLanguage).Value
            rst.MoveNext
        Wend
        rst.Close
        Set rst = Nothing

        If objDict.Exists(strInput) Then
            TL = objDict.Item(strInput)
        Else
            ' Custom function that translates using Google
            TL = Translate(strInput, "en", strTargetLanguage)

            ' Add the new sentence to the dictionary table. No quote doubling is
            ' needed here: the values travel as stored-procedure parameters
            cmd.CommandText = "spDictionaryWrite"
            cmd.CommandType = 4 ' adCmdStoredProc
            cmd.Parameters("@PageUrl") = strPageURL
            cmd.Parameters("@TXT") = strInput
            cmd.Parameters("@TargetLanguage") = strTargetLanguage
            cmd.Parameters("@TargetText") = TL
            cmd.Execute()
        End If

        Set objDict = Nothing
    Else
        TL = ""
    End If
End Function
The function is not finished: at present, every time it is called it accesses the DB, loads all the translations for the page, and rebuilds the Dictionary.
In this situation it would be better to skip the Dictionary and query the DB directly for just the sentence required.
I need "ONLY" to find a wise way to store the dictionary "somewhere" so to avoid to rebuild it
But which to choose? Application, Session, objVariable into the page, ???
googling a little I realize that Application is not a wise solution for many reasons,
Session: I try to keep session very slim: I would never save an object with some 30-50 Keys if I can avoid.... unless I remove it at the end of the page itself (if it worth)?
Someone suggest to load translations into Application as "plain array" and then build Dictionary every time it is required, but while loading the sentences into the Dictionary I can test if current sentence is target sentence and extract the translation without using Dictionary..
therefore neither this is a wise solution
I also read about the
Lookup Component from Microsoft,
but couldn't find any docs.
Perhaps I can use some .NET components, like Hashtable?
Since I imagine that translations are a common issue, I expect there to be a better solution and that my approach is wrong:
can you please suggest a better approach or some hints?
I use Application for caching some objects in Classic ASP, usually as an array of values retrieved from the database with GetRows() (see the sketch below).
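A minimal sketch of that GetRows() pattern, in case it's unfamiliar; rst and the "translations" key are hypothetical:
<%
' rst is assumed to be an open ADODB.Recordset
Dim arrCache
arrCache = rst.GetRows() ' 2-D array: arrCache(column, row)
rst.Close

Application.Lock
Application("translations") = arrCache ' arrays (unlike COM objects) are safe to store here
Application.Unlock

' Later, on any page: copy to a local variable before indexing
Dim arr, i
arr = Application("translations")
For i = 0 To UBound(arr, 2)
    Response.Write arr(0, i) & " = " & arr(1, i) & "<br>"
Next
%>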
Session isn't suitable as it is only available to one user, not all users like Application.
For your situation, where you probably want to cache a LOT of data, I have a suggestion.
When you retrieve your values from the database you could use the FileSystemObject to generate an ASP script containing the VBScript code that creates your dictionary and populates it with values. Then you could include this generated ASP page in all of your files.
For example, to build your cache file...
<%
datestamp = Year(Now()) & Month(Now()) & Day(Now()) & Hour(Now()) & Minute(Now()) & Second(Now())
Set fs = Server.CreateObject("Scripting.FileSystemObject")
Set tfile = fs.CreateTextFile(Server.MapPath("\cache\language_" & datestamp & ".asp"))
tfile.WriteLine("<%")
tfile.WriteLine("Set objDict=Server.CreateObject(""Scripting.Dictionary"")")
'...your database code here....
While Not rst.EOF
    ' Write the *values* into the generated script, doubling any embedded quotes
    tfile.WriteLine("objDict.Add """ & Replace(rst("txt"), """", """""") & """,""" & Replace(rst(strTargetLanguage), """", """""") & """")
    rst.MoveNext
Wend
'...etc etc...
tfile.WriteLine("%" & ">") ' split so this literal doesn't close the enclosing ASP block
tfile.Close
Set tfile = Nothing
Set fs = Nothing
Application.Lock
Application("languagecache") = datestamp
Application.Unlock
%>
NB. The datestamp in the filename is there so that there is no issue when the cache is being built.
Then, in your ASP pages you could include the latest cache file using Server.Execute...
Server.Execute("\cache\language_" & Application("languagecache") & ".asp")
This is all just an example. You should also add code so that, if a page is accessed before the cache file has been built for the first time, it gets content from an include file that is always going to be there. You would also add some code to check when the cache file was last generated, and generate a new one after a set time. You might do this as a scheduled task so that some poor user doesn't have to wait while the cache file is built (or just start it asynchronously). A sketch of the fallback guard follows.
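A minimal sketch of that guard, assuming a hypothetical always-present fallback include named language_default.asp:
<%
If Application("languagecache") = "" Then
    ' Cache not generated yet: fall back to a static file that always exists
    Server.Execute("\cache\language_default.asp")
Else
    Server.Execute("\cache\language_" & Application("languagecache") & ".asp")
End If
%>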
A while ago I inherited a project from another developer and to date in Classic ASP I haven't found a more efficient way of handling localisation.
The basic premise is:
Store key-value pairs in a database. The structure we used is a keys table containing the keys and a "section" (which denotes a particular grouping of keys). We then have a values table which contains the language-specific localisations, associated to the key by an FK (1:M) relationship.
╔══════════════════════════════════════════════════════════════════╗
║ Keys Table ║
╠══════════════╦══════════════╦═════════════════╦══════════════════╣
║ id (int, PK) ║ key (string) ║ section (string)║ editpanel (bit) ║
╚══════════════╩══════════════╩═════════════════╩══════════════════╝
╔═════════════════════════════════════════════════════════════════════╗
║ Values Table ║
╠══════════════╦═════════════════╦═════════════════╦══════════════════╣
║ id (int, PK) ║ key_id (int, FK)║ lang (string) ║ value (string) ║
╚══════════════╩═════════════════╩═════════════════╩══════════════════╝
Build an ASP application to create the XML from the key-value pairs in the database. The application is basically two pages:
The first iterates through the supported languages and sections (defined in the keys table) in a nested loop; this is how we decide on the logical separation of the cache files. Within each iteration we pass the work to another page via a WinHttpRequest object. That page returns an XML structure it has built by looking in the database and pulling out all the key-value pairs related to the specific section being iterated.
The second, as already mentioned, is a page specifically built to be called by the WinHttpRequest object; after querying the database for the specific section's key-value pairs, it returns them in a bespoke XML structure. The calling side is sketched below.
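A minimal sketch of that calling side, assuming langcode and section come from the nested loop and that /admin/build_section.asp is a hypothetical name for the builder page:
<%
Dim http
Set http = Server.CreateObject("WinHttp.WinHttpRequest.5.1")
http.Open "GET", "http://localhost/admin/build_section.asp?lang=" & langcode & "&section=" & section, False
http.Send
If http.Status = 200 Then
    ' http.ResponseText now holds the bespoke XML for this language/section,
    ' ready to be written out under \packs\<langcode>\
    WriteXmlToPack langcode, section, http.ResponseText ' hypothetical helper
End If
Set http = Nothing
%>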
Store a file structure something like
\packs\ ┐
├ \de\
├ \es\
├ \fr\
... etc
that includes a sub-directory for each language supported in the database. The directories need to be accessible by the web application identity (whether that's IUSR or a pre-configured account) with at least Modify permission, to allow the creation and modification of the cached XML files.
The bespoke XML files look something like this
<?xml version="1.0" encoding="utf-8" ?>
<language_pack>
<strings>
<string id="391" key="connect" editpanel="0" section="email">
<![CDATA[Connect]]>
</string>
<string id="9" key="uploadimage" editpanel="0" section="common">
<![CDATA[Upload Photo]]>
</string>
<string id="12" key="notes" editpanel="0" section="common">
<![CDATA[Notes]]>
</string>
</strings>
<error_messages>
<error id="1" key="pleasebepatient">
<![CDATA[\nThis action may take a little time!\nPlease be patient.\n]]>
</error>
</error_messages>
<info>
<langcode>gb</langcode>
<langname>English</langname>
<strings>194</strings>
<time>0</time>
</info>
</language_pack>
XML file truncated to keep things simple; the actual files contain a lot more values.
The main localisation is then powered by an #include file that gets added to each page that needs to support localisation (it just becomes part of the build process when working on localised web applications).
The #include, which we call locale.asp, does something similar to what you describe in the question. It is made up of various functions, the main ones being:
init_langpack(langcode, admin, section) - Handles any initialisation of the XML cache file. langcode is just the string representation of the language you want to load (it corresponds to the directory names: de, es etc.), admin determines whether we are in "Admin Mode" (in which case it behaves slightly differently, setting page headers etc.), and section is the XML file (we separate the pack into sections) to load into an XMLDOM object. It should always be called before attempting to access ml_string(), as it loads the XML.
ml_string(id, showcontrol) - Used in the ASP pages where we want to output a localisation. The id is a key from the XML / keys table, while the showcontrol Boolean decides, when the page is displayed in "Admin Mode", whether to show the localisation in an editable field or not. You might not always want to, because of how localisations are placed in the page (allowing you to handle them differently, display a panel underneath etc.). A short usage sketch follows.
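A sketch of how these two functions might be called from a page; the function signatures come from the description above, while the include path and markup are hypothetical:
<!--#include virtual="/includes/locale.asp" -->
<%
' Load the "common" section of the German pack in normal (non-admin) mode
init_langpack "de", False, "common"
%>
<!-- key 9 is "uploadimage" ("Upload Photo") in the sample XML above -->
<button class="btn"><%=ml_string(9, False)%></button>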
Related
I need to create and read a user preferences XML file with Adobe AIR. It will contain around 30 nodes.
<id>18981</id>
<firstrun>false</firstrun>
<background>green</background>
<username>stacker</username>
...
What's a good method to do this?
Write up an "XML parser" that reads the values and is aware of the data types to convert to based on the "save preferences model." So basically you write a method/class for writing the data from the "save preferences model" to XML then write a method/class for reading from the XML into the "save preferences model", you can use describeType for both. Describe type will return an XML description of the model classes properties and the types of those properties and accessibility (read/write, readonly, write only). For all properties that are read/write you would store them into the XML output, when reading them back in you would do the same thing except you could use the type property from the describeType output to determine if you need to do a string to boolean conversion (if(boolValue == "true")) and string to number conversions, parseInt or parseFloat. You could ultimately store the XML in a local SQL database if you want to keep history, or else just store the current preferences in flat file (using FileReference, or in AIR you can use FileStream to write directly to a location).
Edit:
Agree with Joshua's comment below: local shared objects were the first thing I thought of when seeing this. You can eliminate the need to write the XML parser/reader, since they handle serializing/de-serializing the objects for you (though manually inspecting the LSO is probably ugly). Anyhow, I had done something similar for another project of mine and have tried stripping out the relevant code. Note that in my example here I didn't use describeType, but the general concept is the same:
http://shaunhusain.com/OnePageSaverLoader/index.php
I am using a file upload mechanism to upload a file for an employee, converting it into a byte[] and passing it to a VarBinary(Max) column for storage in the database.
Now what I have to do is: if a file has already been uploaded for an employee, simply read it from the table and show the file name. I have only one column to store the file, and it is of type VarBinary.
Is it possible to get all the file information from the VarBinary field?
If there is another way around this, please let me know.
If you're not storing the filename, you can't retrieve it.
(Unless the file itself contains its filename in which case you'd need to parse the blob's contents.)
If the name of the file (and any other data about the file that's not part of the file's byte data) needs to be used later, then you need to save that data as well. I'd recommend adding a column for the file name, perhaps one for its type (mime type or something like that for properly sending it back to the client's browser, etc.) and maybe even one for size so you don't have to calculate that on the fly for each file (useful when displaying a grid of files and not wanting to touch the large blob field in the query that populates the grid).
Try to stay away from using the file name for system-internal identity purposes. It's fine for allowing the users to search for a file by name, select it, etc. But when actually making the request to the server to display the file it's better to use a simple integer primary key from the table to actually identify it. (On a side note, it's probably a good idea to put a unique constraint on the file name column.)
If you also need help displaying the file to the user, you'll probably want to take the approach that's tried and true for displaying images from a database. Basically it involves having a resource (generally an .aspx page, but could just as well be an HttpHandler instead) which accepts the file ID as a query string parameter and outputs the file.
This resource would have no UI (remove everything from the .aspx except the Page directive) and would manually manipulate the response headers (this is where you'd set the content type from the file's type), write the byte stream to the client, and end the response. From the client's perspective, something like ~/MyContent/MyFile.aspx?fileID=123 would be the file. (You can suggest a file name to the browser for saving purposes in the response headers, which you'd probably want to do with the file's stored name.)
There's no shortage of quick tutorials (some several years old, it's been around for a while) on how to do this with images. Just remember that there's essentially no difference from the server's perspective if it's an image or any other kind of file. All the server needs to do is send the type in the response headers and write the file's bytes to the client. How the client handles the file is up to the browser. In the vast majority of cases, the browser will know what to do (display an image, display via a plugin a PDF, save a .doc, etc.).
I am going to add a file upload control to my ASP.NET 2.0 web page so that users can upload files. Files will be stored on the server in a folder named after the user. I want to know the best option for naming the files when saving them to the server, considering security, performance, and flexibility in handling the files.
Options I am considering now :
Upload with the same name as the input file
Add user ID + random number + the input file name
Create a random number + the current time in seconds and save the file under that number, with one table mapping this number to the user's upload
Anything else? What is the best way?
NEVER EVER use user input for filenames. Don't use the username; use the user ID instead (I assume your users have a unique ID).
NEVER use the original filename. Use your solution number 3, plus the user ID instead of the username.
For your information, PHP had a vulnerability a few years ago: one could forge an HTTP POST request with a file upload and a file name like "../../anything.php", and the PHP $_FILES array, supposed to contain sanitized values, didn't detect these kinds of file names, so one could write files anywhere in the filesystem.
I'd use a combination of
User ID
A randomly generated string (e.g. a GUID)
Example PDF file name: 23212-dd503cf8-a548-4584-a0a3-39dc8be618df.pdf
This way, the user can upload as many files as they want without file name conflicts, and you are also able to tell which files belong to which users just by looking at the file names.
I don't see the need to include any other information in the file name, since the upload time/date and such can be retrieved from the file's attributes.
Also, you should store the files in a safe location which external users, such as visitors to your website, cannot access. Instead, you deliver the file to them through a proxy web page (you read the file from the safe location and pass the data on to the user). For this solution, a database is needed to keep track of the files, their locations, etc.
This also makes you able to control which users have access to which files through your code.
Update: Here's a description of how the solution with the proxy web page could be implemented.
Create a Web Form with the name GetFile.aspx
GetFile.aspx takes one query parameter named fileid, which is used to identify the file to get. E.g.: http://www.mypage.com/GetFile.aspx?fileid=100
Use the fileid parameter to lookup the file location in the database, so that it can be read and sent to the user. In the Web Form you use Request.QueryString("fileid") to get the file ID and use it in a query that will look something like this (SQL): SELECT FileLocation FROM UserFiles WHERE FileID = 100
Read the file using a System.IO.FileStream and output its contents through Response.Write. Remember to set the appropriate content type using Response.ContentType first, so that the client browser handles the requested file correctly (see this post on asp.forums.net and the MSDN article which is also referred to in the post, both of which discuss a method of determining the appropriate content type automatically).
If you choose this approach, it's easy to implement your own simple security or custom actions later on, such as making sure a user is logged into your web site before you send the file, or that users can only access files they uploaded themselves, or logging which users download which files, etc. The possibilities are endless ;-)
Take a look at the System.IO.Path class as it has lots of useful functions you can utilise, such as:
Check which characters are invalid in a file name:
System.IO.Path.GetInvalidFileNameChars();
Get a random file name:
System.IO.Path.GetRandomFileName();
Get a unique, random file name in the temporary directory:
System.IO.Path.GetTempFileName();
I would go with option #3. A table mapping these files to users will provide other uses down the road; it always does. If you use the mapping, the only advantage of appending the user name or ID to the file is when you are trying to debug a problem.
I'd probably use a GUID instead of a random number, but either would work. The important things, in my opinion, are:
Never use the username as any part of the stored file's name
Never use the original file name as any part of the stored file's name
Use a random number or GUID to ensure there are no duplicate files
Adding a user ID to the file name will help with manual debugging
There is more to this than meets the eye... which I am thinking you already knew!
What sort of files are you talking about? If they are even remotely big, or in such quantity that the group of files could become big, I would immediately suggest adding some flexibility to your approach:
Create a table that stores the root paths to various file stores (these could be drives, UNC paths, whatever your environment supports). It will initially have one entry, which will be your first storage location. A nice attribute to maintain with this data is how much room is available there.
Maintain a table of file-related data (id {GUID}, create date, foreign key to the path data, file size).
Write the file to a root that still has room on it (sum the file sizes stored in a root location and compare with that root's capacity).
Write the file using a GUID for the name (this obfuscates the file on the file system); it can be written without the file extension if security requires it (sensitive files).
Write the file under its create date, starting from the root: root/year{number}/month{number}/day{number}/file.extension.
With a system of this nature in place - even though you won't/don't need it up front - you can more easily relocate the files, and you can better manage files and collections of files. I have used this system before and found it to be quite flexible. Dealing with files that are stored on a file system but managed from a database can get a bit out of control once the file store becomes large and things need to be moved around. Also, at least in the case of Windows, storing zillions of files in one directory is usually not a good idea (hence breaking things up by create date).
This complexity is only really needed when you have high volumes and large footprints.
I was wondering what the best practice is for serving a generated big file in classic ASP.
We have an application with an "export to Excel" function that produces 10MB files. The Excel files are created by just calling an .asp page that has Response.ContentType set to the Excel type and contains an HTML table with the data.
This gives us the problem that it takes 4 minutes before the user sees the "Save as..." dialog.
My current solution is to use AJAX to call an .asp page that creates the Excel file on the server and returns the URL of the generated document; then I can use JavaScript to display the link on the original page.
Is this easy to do with classic ASP (creating files on the server with some kind of stream) while keeping security in mind? (The URL should not let people guess the location of other files.)
How would I go about deleting the generated files over time? They have to be deleted periodically, as the data changes in real time.
Thanks.
edit: I realized now that creating the file on the server will probably also take 4 minutes...
I think you are choosing a complex route when the solution is simple enough (though I may be missing some requirements).
If you want to generate an Excel file, just call an ASP page that does the following:
Response.Clear
Response.AddHeader "Content-Disposition", "attachment; filename=myexcel.xls"
Response.ContentType = "application/vnd.ms-excel" ' the registered Excel MIME type
'//write the content of the file
Response.Write "...."
Response.End
This will start a download in the browser without needing an extra call, JavaScript or anything else.
See this question for more info on the format you choose for generating the Excel file.
Edit
Since Thomas updated the question and the real problem is that the file takes 4 minutes to generate, the solution could be:
Offer to send the user the file by email (if this is workable on your server or hosting).
Generate the file asynchronously and let the user know when the file generation is done (with an AJAX call, like SO does when another user has added an answer); a sketch of such a status endpoint follows.
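A minimal sketch of a status page the AJAX call could poll, assuming the generating page writes the finished file to a known folder (/exports/ and the page name are hypothetical):
<%
' status.asp?file=<GUID>.xls - polled by the browser until the export exists
Dim fso, strFile
strFile = Request.QueryString("file")
' Accept only a bare file name, never a path, so callers can't probe the disk
If InStr(strFile, "/") = 0 And InStr(strFile, "\") = 0 Then
    Set fso = Server.CreateObject("Scripting.FileSystemObject")
    If fso.FileExists(Server.MapPath("/exports/" & strFile)) Then
        Response.Write "ready"
    Else
        Response.Write "pending"
    End If
    Set fso = Nothing
Else
    Response.Write "invalid"
End If
%>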
To generate the file on the server
'//You should change this to a random name or something that makes sense
'//(VBScript has no Open/Print statements, so the FileSystemObject is used instead)
Dim fso, tfile, FileName
FileName = "C:\temp\myexcel.xls"
Set fso = Server.CreateObject("Scripting.FileSystemObject")
Set tfile = fso.OpenTextFile(FileName, 8, True) ' 8 = ForAppending, True = create if missing
'//generate the content
tfile.WriteLine "...."
tfile.Close
Set tfile = Nothing
Set fso = Nothing
To delete the generated temp files
I use Empty Temp Folders, a freeware app that I run daily on the server to take care of the generated temp files. (Again, it depends on your server or hosting.)
About security
Generate the file names using random numbers or GUIDs for light protection. If the data is sensitive, you will need to serve the file from an ASP page, but I think you will then hit the same problem again... (waiting 4 minutes for the download).
Read the file using the FSO.
Set headers for the Excel file type, the name according to the file read, and for download (attachment).
Flush the response after the headers are set; the client should display the "Save as" dialogue.
Write the file contents to the response; the client will download the file and see a progress bar. The sequence is sketched below.
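A minimal sketch of that sequence, assuming the path of the generated file is already known (the path and file name are illustrative):
<%
Dim fso, stm, strPath
strPath = Server.MapPath("/exports/report.xls") ' hypothetical generated file

Response.Buffer = True
Response.ContentType = "application/vnd.ms-excel"
Response.AddHeader "Content-Disposition", "attachment; filename=report.xls"
Response.Flush ' headers go out now, so the browser shows the save dialogue early

Set fso = Server.CreateObject("Scripting.FileSystemObject")
Set stm = fso.OpenTextFile(strPath, 1) ' 1 = ForReading
Do While Not stm.AtEndOfStream
    Response.Write stm.ReadLine & vbCrLf
Loop
stm.Close
Set stm = Nothing
Set fso = Nothing
Response.End
%>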
How do you plan to generate the Excel file? I hope you don't plan to call Excel itself to do it, as that is unsupported and generally won't work well.
You should check whether there are COM components for generating Excel files that you can call from Classic ASP. Alternatively, add one ASP.NET page for the purpose; I know for a fact that there are components that can be called from ASP.NET pages to do this. Worst comes to worst, there's an Excel exporter component from Infragistics that works with their UltraWebGrid control to export. The grid need not be visible in order to accomplish this, but styles in the grid translate to styles in the spreadsheet. They also allow you to manipulate the spreadsheet programmatically.
I'm building a listing/grid control in a Flex application and using it in a .NET web application. To make a really long story short, I am getting XML of serialized objects from a webservice. I have a page limit for how many things can be on a page. I've taken a data grid and made it page, sort across pages, and handle some basic filtering.
For paging I'm using a Dictionary keyed on the page number and storing the XML for that page. This way, whenever a user comes back to a page that I've saved into this dictionary, I can grab the XML from local memory instead of hitting the webservice. Basically, I'm caching the data retrieved from each call to the webservice for a page of data.
There are several things that can expire my cache. Filtering and sorting are the main reasons. However, a user may also edit a row of data in the grid by opening an editor, and the data they edit could make the data displayed in the row stale. I could easily go to the webservice and get the whole page of data, but since the page size is set at runtime I could be looking at a large number of records to retrieve.
So let me now get to the heart of the issue. In order to avoid fetching the whole page of data, I make a call to the webservice asking for the completely updated record (the editor handles saving its data).
Since I'm using custom objects, I need to serialize them to XML on the server (this is handled already for other portions of our software). All data is handled through XML in E4X. The cache in the Dictionary is stored as an XMLList.
Now let me show you my code...
var idOfReplacee:String = this._WebService.GetSingleModelXml.lastResult.*[0].*[0].@Id;
var xmlToReplace:XMLList = this._DataPages[this._Options.PageIndex].Data.(@Id == idOfReplacee);
if(xmlToReplace.length() > 0)
{
    delete (this._DataPages[this._Options.PageIndex].Data.(@Id == idOfReplacee)[0]);
    this._DataPages[this._Options.PageIndex].Data += this._WebService.GetSingleModelXml.lastResult.*[0].*[0];
}
Basically, I get the id of the node I want to replace, then I find it in the cache's Data property (an XMLList). I make sure it exists, since the filter on the second line returns an XMLList.
The problem I have is with the delete line: I cannot make that line delete the node from the list. The line following the delete line works; it adds the node to the list.
How do I replace or delete the node that the filter statement finds in the .Data property of the cache?
Thanks for the answers guys.
#Theo:
I tried the replace several different ways. For some reason it would never error, but it never updated the list.
#Matt:
I figured out a solution. The issue wasn't what you suggested, but how delete works with XMLLists (at least in this instance).
The Data property of the _DataPages dictionary object is a list of the Definition nodes (arrived at by previously filtering another XML document).
<Models>
<Definition Id='1' />
<Definition Id='2' />
</Models>
I ended up doing this little deal:
//gets the index of the node to replace from the same filter
var childIndex:int = (this._DataPages[this._Options.PageIndex].Data.(@Id == idOfReplacee)[0]).childIndex();
//deletes the node from the list
delete this._DataPages[this._Options.PageIndex].Data[childIndex];
//appends the new node from the webservice to the list
this._DataPages[this._Options.PageIndex].Data += this._WebService.GetSingleModelXml.lastResult.*[0].*[0];
So basically I had to get the index of the node within the XMLList that is the Data property. From there I could use the delete keyword to remove it from the list, and += adds my new node to the list.
I'm so used to the ActiveX or Mozilla XmlDocument style, where you call selectSingleNode and then replaceChild to do this kind of thing. Oh well, at least this is in a forum where someone else can find it. I don't know the procedure for what happens when I answer my own question; perhaps this insight will help someone else come along and answer the question better!
Perhaps you could use replace instead?
var oldNode : XML = this._DataPages[this._Options.PageIndex].Data.(@Id == idOfReplacee)[0];
var newNode : XML = this._WebService.GetSingleModelXml.lastResult.*[0].*[0];
oldNode.parent.replace(oldNode, newNode);
I know this is an incredibly old question, but I don't see (what I think is) the simplest solution to this problem.
Theo had the right direction here, but there are a number of errors in the way replace was being used (and in the fact that pretty much everything in E4X is a function).
I believe this will do the trick:
oldNode.parent().replace(oldNode.childIndex(), newNode);
replace() can take a number of different types in the first parameter, but AFAIK, XML objects are not one of them.
I don't immediately see the problem, so I can only venture a guess. Your delete line looks for the first item at the top level of the list that has an attribute "Id" equal to idOfReplacee. Ensure that you don't need to dig deeper into the XML structure to find that matching id.
Try this instead:
delete (this._DataPages[this._Options.PageIndex].Data..(@Id == idOfReplacee)[0]);
(Notice the extra '.' after Data). You could more easily debug this by setting a breakpoint on the second line of the code you posted, and ensure that the XMLList looks like you expect.