Is sparse field selection support valuable in a REST API? (Symfony)

I've read a few articles about GraphQL, and one of the features I really like is that the client can specify exactly which fields it wants.
I'm thinking of adding the same capability to a REST API. Looking around, I found that a specification for this already exists: fetching-sparse-fieldsets.
So I'm trying to add this feature in Symfony (specifically with FOSRestBundle + JMSSerializer).
But I'm not quite sure whether it's worth it. Can anyone offer advice?
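For context, the sparse fieldsets convention from the JSON:API specification lets the client list the fields it wants in a query parameter; the resource and field names below are only an illustration:
GET /articles?fields[articles]=title,body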

That is a question you should ask your API users and your customer. It might be useful for API consumers, but it has several downsides:
By building it yourself, you spend a lot of time developing, testing and maintaining this 'optional' feature. Is the customer willing to pay for it? (YAGNI)
As mentioned above, you must maintain the code for this feature; you cannot remove it unless you decide to release a new API version. As external packages change, your code might require updates as well.
It can make troubleshooting harder. API users might try to read a field they never requested in the URL (of course, these issues tend to surface only after the project has been handed over to other developers), and questions come up about why data that appears in the API documentation isn't in the response.
The first point in particular is incredibly important: don't build features the customer does not need. Personally, I always return all accessible data, even null values, and let the API users decide what to do with it. Bandwidth isn't much of a problem these days anyway.

Related

Is there a built-in alternative to Google Analytics in Liferay?

I'm looking for a portlet-like solution that would collect and report usage analytics in Liferay, but Google Analytics is unfortunately not an option.
I need stats by community and group, plus session tracking, on top of the usual bounce and exit rates, referrals, origins, etc. I know I'm more or less asking for the wheel to be reinvented, but there is plenty of usage data Liferay can collect that Google can't. I've already looked at Piwik, and it looks very impressive.
Any suggestions? TIA.
As of 2015 there is the Audience Targeting plugin, which (at least for Liferay 6.2) comes bundled with analytics-api / analytics-hook modules that collect some useful analytics data. Keep in mind:
So far there doesn't seem to be any standalone use for them; they were introduced, I believe, to power the 'content visited', 'page visited' and similar rules in Audience Targeting itself, and you can't see the raw events in any of the provided portlets.
The events are stored as rows in a SQL database, so I would be concerned about its performance in the long run (with thousands of clicks every minute, etc.), although I say this purely theoretically, as I haven't done any tests myself nor checked whether any performance-enhancing measures are implemented.
What you can do, however, is put together your own portlet that draws some graphs etc. based on the data stored in the CT_Analytics_AnalyticsEvent table.
Right now I don't think there is any out-of-the-box feature for this; you will probably need to build it yourself. There are two options:
1) Create a JavaScript library if you need real-time/web analysis (much like writing your own Google Analytics snippet); see the sketch after this list.
2) This option is quite easy. Liferay stores everything in the database, so you can build a report portlet that presents reports based on that data. We did this for one project where we tracked session IDs, IP addresses and logged-in user details for portlets.
To achieve point 2) you can create a new Liferay service to store and retrieve this data.
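A minimal sketch of the client-side tracker for option 1, assuming a hypothetical /track endpoint exposed by whatever backend service you build to persist the events:
    // Minimal page-view tracker (sketch). "/track" is a hypothetical endpoint;
    // point it at the service you build to store the events.
    function trackPageView(): void {
      const event = {
        url: window.location.href,
        referrer: document.referrer,
        timestamp: new Date().toISOString(),
      };
      // sendBeacon is fire-and-forget and survives page unloads
      navigator.sendBeacon("/track", JSON.stringify(event));
    }
    document.addEventListener("DOMContentLoaded", trackPageView);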
Hope this helps
You already mention Piwik, which is similar to Google Analytics. You probably have your own theme (almost everybody changes the appearance to look like their own site), and it's quite appropriate to place the relevant Piwik stats snippet there.
You can also, as Felix suggests, mine your log files. Liferay stores some data, and your web server access logs are also well worth mining. And, of course, you can change your theme to log even more on every page access; just take care not to create a performance bottleneck by writing too much during a single page request.
So, coming back to your question: built in like Google Analytics: no. Easily integrable (like Piwik): yes, of course. Completely customizable: yes, of course.
Edit: it just so happens that David has created and documented an integration that makes using Piwik even easier.

Filtering Data in ASP.NET Web Services

I've been using this site for quite a while and have usually been able to answer my questions by browsing existing questions and tags. I've recently run into a problem that is hard to look up, though, and I hope some of you can share your opinion on it.
As my problem is a bit hard to fit into the title, I'll give some more detail here. As the title says, I need to filter, or limit, some of the response data my standard ASP.NET SOAP web service returns from its web methods. The service returns data used by other systems (it is more or less a data repository), and clients can currently specify a few parameters controlling how the data is filtered, then get a full set of data back.
Easy enough, I thought: just add more filtering options to the web methods that need them, adjust the server side, and we're done. Unfortunately, it turned out to be trickier than that.
The problem is that the web service is already running in production and needs to be extended so that additional filters can be applied to existing web methods without affecting the calls already being made by other systems through their existing client stubs. This is where I'm stuck, since I can't find a "right" way to extend the current service.
Today the filter is sent as a custom data structure holding information on which data should be filtered, but I'm not sure whether I can simply add more fields to this structure without breaking client code. One of my co-workers suggested extending web.config on the server side with a section describing which data should be excluded (filtered out), but I don't see that as viable in the long run, and I don't trust customers with such an option since it is likely to go wrong at some point. So what I'm looking for is a way to apply a "second filter" to the data the client requests, so that instead of a full set of data only a subset comes back, implemented so that the filter can be modified easily and existing client calls are unaffected.
Any suggestions on how I should approach this problem?
Thanks!
Kind regards,
E.
A pretty common practice is to create another instance of the application, or to use part of the URL to signify the version of the endpoint being called; the virtual directory could, for example, be a date. That way old calls go to the old API and new calls come in on the new API.
http://api.example.com/dostuff
vs
http://api.example.com/6-7-2011/dostuff

Developing an LMS and SCORM Sequencing Engine

We want an LMS (coded in ASP.NET/VB.NET) that can import SCORM packages and display their content to learners. I am totally new to SCORM and have been moved onto this project. I want to know how I can access a SCORM assessment object's (test's) results, such as learner ID, pass/fail and time.
Can you please guide me on what I need to implement in ASP.NET code to accomplish this?
What I have done so far:
Reading the package zip file, unzipping it and extracting all the information from the manifest (content name, description, items and launch page); when the user clicks a particular course, a pop-up window launches that page.
I eagerly want to know what to do next to communicate with the LMS through the API. Do I need to develop my own LMS run-time to get the results? If a quiz is running, all I need to know is the number of questions the user attempted and whether the user passed or failed, and I need to store all of this per user in the database so that the results can be reviewed afterwards.
So the remaining tasks are:
A tracking mechanism to deliver the content.
A SCORM/LMS sequencing engine that controls navigation between the parts of a SCORM-conformant course.
Please help.
SLK at CodePlex provides a good starting point. However, if you truly want to provide a fully compliant, in-house SCORM player, you have a major task ahead of you. In essence there are three parts you need to develop:
CAM - the content packaging/unzipping process, which it sounds like you have already achieved.
RTE - the JavaScript host for SCORM, providing the 8 specified API methods (see the sketch after this list). Behind this you also need to implement the SCORM data model, which SLK does help with. Once you have implemented all of this, there will be entries in the data model that indicate completion, score and so on.
SN - the sequencing and navigation processing. This is by far the most complex part. I am still trying to implement it, using SLK, and it is hard. Completing this is what will ultimately give you the additional information about what the learner has done.
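To illustrate the RTE part, here is a minimal sketch of the SCORM 2004 run-time API object, with the 8 methods the spec names, that the LMS page must expose to content; the /scorm/commit endpoint and the in-memory cache are assumptions, and a real implementation also needs proper error codes and data-model validation:
    // Sketch of the SCORM 2004 API object; content looks for window.API_1484_11
    // (SCORM 1.2 content looks for window.API with LMS-prefixed method names).
    const cmiCache: Record<string, string> = {};

    function persist(data: Record<string, string>): void {
      // hypothetical endpoint on your ASP.NET backend that writes to the database
      navigator.sendBeacon("/scorm/commit", JSON.stringify(data));
    }

    (window as any).API_1484_11 = {
      Initialize: (_: string) => "true",
      Terminate: (_: string) => { persist(cmiCache); return "true"; },
      GetValue: (element: string) => cmiCache[element] ?? "",   // e.g. "cmi.score.scaled"
      SetValue: (element: string, value: string) => { cmiCache[element] = value; return "true"; },
      Commit: (_: string) => { persist(cmiCache); return "true"; },
      GetLastError: () => "0",
      GetErrorString: (_code: string) => "",
      GetDiagnostic: (_code: string) => "",
    };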
It is also worth looking at scorm.com; they are a consultancy, but they provide a lot of useful information about the SCORM standard.
That is true. SCORM is one of those standards where you can implement as little as you like. But you will need some JavaScript talking to a backend script (JSON to the rescue) so you can track the SCORM data and save it to your database.
But let me tell you this: that is the easy part! Building your own course creator is a whole other beast.

Access to old, no longer available, feed entries

I am working on a project that requires reliable access to historic feed entries which are not necessarily available in the current feed of the website. I have found several ways to access such data, but none of them give me all the characteristics I need.
Treat this as a brainstorm: I'll list what I have found so far, and you can contribute if you have any other ideas.
Google AJAX Feed API - will limit you to 250 items
Unofficial Google Reader API - Perfect but unofficial and therefore unreliable (and perhaps quasi-illegal?). Also, the authentication seems to be tricky.
Spinn3r - Costs a lot of money
Spidering the Internet Archive's copy of the feed's site - lots of complexity, spotty coverage, only useful as a last resort
Yahoo! Feed API or Yahoo! Search BOSS - The first looks more like an aggregator, meaning I'd need a different registration for each feed and the second should give more access to Yahoo's data but I can find no mention of feeds.
(thanks to Lou Franco) Bloglines Sync API - Besides the problem of needing an account and being designed more as an aggregator, it does not have a way to add feeds to the account. So no retrieval of arbitrary feeds. You need to manually add them through the reader first.
Other search engines/blog search/whatever?
This is a really irritating problem as we are talking about semantic information that was once out there, is still (usually) valid, yet is difficult to access reliably, freely and without limits. Anybody know any alternative sources for feed entry goodness?
Bloglines has an API to sync accounts
http://www.bloglines.com/services/api/sync
You have to make an account and subscribe to the feed you want to download, but then you can download entries based on date, which can be far in the past. Not sure of the terms.
The best answer I've found so far is this: Google Reader's unofficial API turns out to have a public access point for feeds, which means no authentication is needed. Usage is as follows:
http://www.google.com/reader/public/atom/feed/{your feed uri here}?n=1000
Replace the text in the curly braces (including the braces themselves) with the feed URI you're interested in. More information about the precise arguments can be found here:
http://blog.martindoms.com/2009/10/16/using-the-google-reader-api-part-2/
but remember to use the /public/ URL if you don't want to deal with authentication.
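For example, for a hypothetical feed at http://example.com/rss.xml, the request would look like this (the n parameter caps the number of entries returned):
http://www.google.com/reader/public/atom/feed/http://example.com/rss.xml?n=1000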

ASP.NET built-in user profile vs. old-style user class/tables

I am looking for guidance regarding the best practice around the use of the Profile feature in ASP.NET.
How do you decide what should be kept in the built-in user profile and when you should instead create your own database table with columns for the desired fields? For example, a user has a zip code: should I save the zip code in my own table, or should I add it to the web.config XML profile and access it via ASP.NET's user profile mechanism?
The pros/cons I can think of right now: since I don't know the profile feature very well (it is a bit of a black box to me right now), I can probably do whatever I want if I go the table route (e.g., a SQL query to get all the users in the same zip code as the current user). I don't know if I can do the same with the ASP.NET profile.
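For reference, the web.config entry being discussed would look roughly like this (inside system.web; the property name is just the zip-code example from the question):
    <profile>
      <properties>
        <add name="ZipCode" type="System.String" />
      </properties>
    </profile>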
I've only built two applications that used the profile provider, and since then I have stayed away from it. In both apps I used it to store information about the user such as their company name, address and phone number.
This worked fine until our client wanted to be able to find a user by one of these fields.
Searching involved looping through every user's profile and comparing the information to the search criteria. As the user base grew, search times became unacceptable to our client. The only solution was to create a table to store the users' information; search speed increased immensely.
I would recommend storing this type of information in its own table.
The user profile is a nice, clean framework for individual customization (a.k.a. profile properties), e.g. iGoogle-style personalization.
The problem is that it's not designed for querying and isn't ideal for sharing data with public users (you would still be able to do it, just with poor performance).
So, if you want to enhance the customized user experience, the user profile is a good way to go; otherwise, using your own class and table is a much better solution.
In my experience it's best to keep the info in the profile to a bare minimum; only put the essentials there that are directly needed for authentication. Other information, such as addresses, should be saved in your own database by your own application logic; this approach is more extensible and maintainable.
I think that depends on how many fields you need. To my knowledge, Profiles are essentially a long string that gets split at the given field sizes, which means that they do not scale very well if you have many fields and users.
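For illustration, the default SqlProfileProvider packs all of a user's properties into a pair of columns in the aspnet_Profile table, roughly like this (format reproduced from memory, so treat it as an approximation), which is why you can't write a sensible WHERE clause against an individual property:
    PropertyNames:        ZipCode:S:0:5:FirstName:S:5:4:
    PropertyValuesString: 90210John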
On the other hand, they are built in, so it's an easy and standardized way, which means there is not a big learning curve and you can use it in future apps as well without needing to tweak it to a new table structure.
Rolling your own thing allows you to put it in a properly normalized database, which drastically improves performance, but you have to write pretty much all the profile managing code yourself.
Edit: also, profiles are not cached, so every access to a profile hits the database first (it is then cached for that request, but the next request will read it from the database again).
If you're thinking about writing your own thing, maybe a custom Profile Provider gives you the best of both worlds - seamless integration, yet the custom stuff you want to do.
I think you're better off using it for supplementary data that is not critical to the user and is normally only relevant when that user is logged in anyway. Think of data that would not break anything important if it were all wiped.
Of course, that's personal preference, but others have raised some other important issues.
It's also very useful given that it can be used for an unauthenticated user whose profile is maintained with an anonymous cookie.
