I am trying to use Flexigrid (jQuery) to display my data. I have downloaded some examples, but I haven't been able to figure out how to bind to Flexigrid the data I am fetching with AJAX, which is currently in an array. Basically, I don't know how to convert my data into a JSON object, or whether there is another way (I don't want to use a PHP file to return my data). Please also point me to some useful links I can follow to learn Flexigrid in detail, right from the basics; most of the sites I looked at don't go into much detail about Flexigrid. I know this is a very basic question, but I can't help it, as I have only just started using Flexigrid.
I think this explains everything pretty well: Flexigrid ASP.NET MVC
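For the binding itself: Flexigrid expects JSON shaped as { page, total, rows: [ { id, cell: [...] } ] }, and it typically sends page and rp (rows per page) as request parameters. Below is a minimal sketch of an ASP.NET MVC action that returns that shape; the Person class, the GetPeople() helper and the column order are hypothetical placeholders for your own data, not anything Flexigrid prescribes.
// Sketch: an MVC action that wraps your data in the JSON structure Flexigrid reads.
// Person and GetPeople() are hypothetical stand-ins for your own data array.
using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class GridController : Controller
{
    public JsonResult People(int page = 1, int rp = 10)
    {
        List<Person> people = GetPeople();   // your data, however you currently fetch it

        var rows = people
            .Skip((page - 1) * rp)
            .Take(rp)
            .Select(p => new { id = p.Id, cell = new object[] { p.Id, p.Name } });

        return Json(new { page, total = people.Count, rows },
                    JsonRequestBehavior.AllowGet);
    }

    private List<Person> GetPeople()
    {
        // Replace with whatever currently fills your array.
        return new List<Person> { new Person { Id = 1, Name = "Example" } };
    }
}
On the client side you would then point Flexigrid's url option at this action and set dataType to 'json'.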
We have a couple of relatively simple websites running on Adobe CQ 5.5 that were developed by a third party. I'm pretty familiar with how CQ works, but I'm working with somebody else's code here and I need to be able to search through all components in the system for a particular string.
The issue is that I can't seem to find a way to search across all of the various .jsp files stored with the various system components. I would have figured that the query tool in CRXDE Lite would have done the trick with something like this:
/jcr:root//*[jcr:contains(., 'Find this exact string in a JSP')] order by @jcr:score
But I've had no luck.
What I am looking for is some sort of global search that includes JSP files. Is that possible? Were I using a regular Java system, any IDE worth the download would be able to do this.
Thanks.
It might not be the easiest way, but you can use the VLT tool to check out the repository into your filesystem and then search it with whatever tool you prefer. It might even be faster in the long run.
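Roughly, assuming a default local instance (the host, port, credentials and checkout URI are assumptions, so adjust them for your setup):
vlt --credentials admin:admin co http://localhost:4502/crx
grep -r "Find this exact string in a JSP" .
Once the content is checked out, the JSPs are just files on disk, so grep, an IDE, or any other search tool will see them.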
I don't have the actual answer but I suppose the JSPs are indexed via a filter that strips out some of their content.
It should be possible to configure the repository to index them as is instead, based on the info at http://wiki.apache.org/jackrabbit/IndexingConfiguration and http://jackrabbit.apache.org/jackrabbit-text-extractors.html
Sorry about the vagueness of this answer - I know the basic principles but to provide the details I would need more time than I can afford now ;-)
I'm not an ASP.NET programmer, but, as happens in life, I've had to do some minor projects with it. Now another one has come along in which I have to implement some custom solutions, and I haven't figured it out yet - I need a tip, or maybe a piece of advice like "don't go that way" ;)
Previously it was simple - there was a table in the DB, an adequate model, and a view that worked with it - it worked like a charm. Now it's a little bit more complicated.
The "site" is going to contain, in short, a survey - but a fully configurable one, unfortunately. Another product will contain a configuration manager that will allow the user to define pages, block types, questions, steps and so on, and will generate an XML document.
For the time being, in accordance with the specification, the site's database will contain only one table holding just a key and the XML generated by the configurator (and maybe some additional, unimportant information). Now I need to parse this XML and build the site with the pages and other elements it describes.
And that wouldn't be a problem, but I don't really know how to approach it with ASP.NET + MVC and can't find any advice that helps. Should I create an object that somehow fakes being a model and lets me work on, for example, a dataset generated from the XML? Or just create a model of the mentioned table and work with the XML directly in the view (I don't even like that idea itself)? Or - having to do something like this - just give up on MVC and use only "plain" ASP.NET? Or maybe something else?
I'll be very grateful for any help.
And I hope I described what I need understandably ;)
If the XML documents have a schema defined then you can easily generate a class that matches the document using the xsd.exe tool. The document can then be deserialized into an instance of that class using existing functionality in the .Net framework. Just google .Net Xml serialization :-)
Now, if you don't have a schema you could create one if you are sure that you know the format of the Xml. Alternatively you could create a class that matches the format you expect to get and then parse the Xml manually. This last option is much more work, so I wouldn't recommend it.
In any case, the class you end up with should contain all the data you need from the Xml document and can then be used as the Model in your MVC page. As long as you can use the standard Xml deserialization technique then this should be quite easy and painless.
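As a rough illustration, here is a minimal sketch assuming xsd.exe generated (or you hand-wrote) classes like the hypothetical Survey and SurveyPage below - they are examples, not your actual schema:
// Sketch: deserialize the XML stored in the single DB table into a typed object
// that can be handed to the view as its model. Survey and SurveyPage are assumed
// examples of what xsd.exe might generate from your schema.
using System.Collections.Generic;
using System.IO;
using System.Xml.Serialization;

[XmlRoot("survey")]
public class Survey
{
    [XmlElement("page")]
    public List<SurveyPage> Pages { get; set; }
}

public class SurveyPage
{
    [XmlAttribute("title")]
    public string Title { get; set; }
}

public static class SurveyLoader
{
    public static Survey FromXml(string xml)
    {
        var serializer = new XmlSerializer(typeof(Survey));
        using (var reader = new StringReader(xml))
        {
            return (Survey)serializer.Deserialize(reader);
        }
    }
}
A controller action would then read the XML column, call SurveyLoader.FromXml and pass the resulting Survey to the view as its model.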
I'm fairly new to web development and have never done any screen-scraping or web-crawling before, but yesterday a friend of mine asked me if I could grab some data from this website. It isn't mine, nor his, but the data is publicly available, even for download.
The problem with the data is that it's only available as one file per date or company, rather than one file for multiple dates or companies, which means a lot of tedious clicking through the calendar. So he thought it would be nice if I could create some app that could grab all the data with one click and output it in one single file or something similar.
The website uses an ASPX WebForm with __doPostBack to retrieve the data for different dates; even the links to download the data in XSL aren't the usual "href=…" links - they are, I assume, references to some ASP script…
To be honest, the only thing I tried was PHP cURL, which didn't work; but since this was my first time using cURL, I don't even know whether it failed because it isn't possible with cURL or just because I don't know how to work with it.
I am only somewhat proficient in PHP and JavaScript, but not in ASP, though I wouldn't mind learning something new.
So my question is..
Is it at all possible to grab the data from a website like this? and if it is, would you be so kind as to give me some hints on how to approach this kind of problem?
The website, again, is here: http://extranet.net4gas.cz/capacity_ee.aspx
Thanks
C# has a nice WebClient class to do the job:
// Create web client.
WebClient client = new WebClient();
// Download string.
string value = client.DownloadString("http://www.microsoft.com/");
Once you have the page HTML in a string, you can use regular expressions to scrape the content you are looking for.
Here is a very basic regular expression to give you a hint:
Regex regex = new Regex(@"\d+");
Match match = regex.Match("hello here 10 values");
if (match.Success)
{
Console.WriteLine(match.Value);
}
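One caveat for this particular site: because it relies on __doPostBack, a plain GET may not return the data for a chosen date. A rough sketch of replaying the postback is to POST the standard WebForms hidden fields back to the page with WebClient.UploadValues; the __EVENTTARGET value and the __VIEWSTATE/__EVENTVALIDATION contents below are placeholders you would scrape from the page you downloaded first.
// Sketch: simulate an ASP.NET WebForms postback with WebClient.UploadValues.
// Every value marked "placeholder" must be read from the originally downloaded page.
using System.Collections.Specialized;
using System.Net;
using System.Text;

WebClient client = new WebClient();
string firstPage = client.DownloadString("http://extranet.net4gas.cz/capacity_ee.aspx");

NameValueCollection form = new NameValueCollection();
form["__EVENTTARGET"] = "idOfTheControlPassedTo__doPostBack";                 // placeholder
form["__EVENTARGUMENT"] = "";                                                 // placeholder
form["__VIEWSTATE"] = "__VIEWSTATE value scraped from firstPage";             // placeholder
form["__EVENTVALIDATION"] = "__EVENTVALIDATION value scraped from firstPage"; // placeholder

byte[] responseBytes = client.UploadValues(
    "http://extranet.net4gas.cz/capacity_ee.aspx", "POST", form);
string responseHtml = Encoding.UTF8.GetString(responseBytes);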
Marosko, as you said, the data on the website is open to the public, so you can certainly scrape it. The goal, then, is to cut down on the manual clicking through dates while scraping the data out. I personally don't have much idea of how cURL works, but I'm sure it would involve a lot of coding. I would rather suggest you automate the entire process using an automation tool, i.e. a software application. Try Automation Anywhere; I bought it a few months back for some data-extraction purposes and it worked very well. It is automated, and you can check out the screen-scraping capabilities it offers. It's my favorite :)
Charles
I followed this simple tutorial and created a nested repeater.
The tutorial is simple enough that I could easily create something like that.
But I have a different XML structure in my organisation which I can't change. My XML structure is a repeated structure like this:
<pupil>
  <academicYear>2011/2010</academicYear>
  <grade>Kindergarten 1</grade>
  <class>class 1</class>
  <name>emma</name>
  <admissionDate>01/05/2010</admissionDate>
  <language>English</language>
  <CountryofBirth>United Kingdom</CountryofBirth>
  <fullName>emma watson</fullName>
</pupil>
I would like to see academicYear, grade, class, name, admissionDate, etc. as titles.
And below each title, there should be the corresponding data about it.
Eg.
*Academic Year
-2011/2010
-2010/2009
*Grade
-Kindergarten 1
-Kindergarten 2
-Kindergarten 3
I'm not posting all my code again because it's the same as in the tutorial. Please don't tell me to go and ask the person who made that tutorial; I've found that people here are very nice and always helpful.
Thanks so much.
Having looked at the tutorial and your XML, the big difference between your XML and the example given in the tutorial is that yours isn't nested XML.
I'd also dispute your assertion that you cannot change the XML structure. Sure, you might not be able to change what you get from the service that is providing the XML, but there is no reason why you couldn't reorganise the XML you receive into a nested XML document that better fits your intentions.
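For example, LINQ to XML can regroup the flat <pupil> elements into a nested document that the nested-repeater technique from the tutorial can then bind to. This is only a sketch: the grouping by academicYear, the file name and the output element names beyond those in your sample are my own assumptions.
// Sketch: regroup flat <pupil> elements by academicYear into nested XML.
// "pupils.xml", the "years" wrapper and the chosen child elements are assumed.
using System.Linq;
using System.Xml.Linq;

XDocument flat = XDocument.Load("pupils.xml");

XElement nested = new XElement("years",
    from pupil in flat.Descendants("pupil")
    group pupil by (string)pupil.Element("academicYear") into year
    select new XElement("academicYear",
        new XAttribute("value", year.Key),
        year.Select(p => new XElement("pupil",
            new XElement("name", (string)p.Element("name")),
            new XElement("grade", (string)p.Element("grade"))))));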
I am trying out DevExpress XtraReports, but have a simple problem that I am not able to find an answer to.
In an Asp.net website project, I want to add a business object to my report as a datasource and then drag and drop the fields from my new business object data source onto the report designer.
Can someone help explain how I do this?
I assume I have just missed something.
Thanks
Ian
One way to solve this would be to give the designer a DataSet with one or more DataTables representing your business object. You would of course have to write some code to convert your business object to the data table and back.
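A rough sketch of that conversion, where Customer is a hypothetical stand-in for your business object (the class name and columns are assumptions):
// Sketch: flatten a list of business objects into a DataTable inside a DataSet so
// the report designer has concrete columns to offer for drag and drop.
// Customer and its properties are hypothetical placeholders for your own type.
using System.Collections.Generic;
using System.Data;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ReportData
{
    public static DataSet ToDataSet(IEnumerable<Customer> customers)
    {
        var table = new DataTable("Customers");
        table.Columns.Add("Id", typeof(int));
        table.Columns.Add("Name", typeof(string));

        foreach (var c in customers)
        {
            table.Rows.Add(c.Id, c.Name);
        }

        var dataSet = new DataSet("ReportData");
        dataSet.Tables.Add(table);
        return dataSet;
    }
}
The DataSet returned here is what you would hand to the report designer as the data source.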
As a side note, DevExpress have a really good support forum and I have always gotten answers to my questions about their products. I recommend you post your question there if you don't get a really good answer here.