User Generated Content - dynamically retrieve popular content - Tridion

Looking at the documentation for UGC in Tridion 2011 SP1, is it possible to dynamically query for "popular" content, i.e. return all pages or components ordered by rating or number of comments?
The UGC commands seem to deal with comments/ratings on an individual page/component, but not with querying for content based on that data.
Is something available in the CD Web Service when you install UGC?
Cheers

I can definitely answer this part of your question:
Is something available in the CD Web Service when you install UGC?
Yes. When you install UGC, your CD Web Service will get new collections for these UGC item types:
Comments
ItemStats
Ratings
Users
You can get the most popular items like this:
.../odata.svc/ItemStats?$orderby=AverageRating desc
Filtering first and then limiting the number of results will probably give you a slightly faster query:
.../odata.svc/ItemStats?$orderby=AverageRating desc&$filter=AverageRating gt 0.0&$top=5
I expect that ItemStats are probably also available through the Java and .NET APIs.
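In case it helps, here is a minimal sketch of consuming that feed from code (Python with requests, purely for illustration; the base URL and the JSON envelope shape are assumptions about your particular CD Web Service deployment):

import requests

# Hypothetical base URL of the Content Delivery Web Service; adjust host, port and path.
BASE = "http://localhost:8080/cd_webservice/odata.svc"

params = {
    "$orderby": "AverageRating desc",   # most popular first
    "$filter": "AverageRating gt 0.0",  # skip unrated items
    "$top": "5",                        # limit the result set
    "$format": "json",                  # many OData v2 services accept this; otherwise parse the Atom XML
}
resp = requests.get(BASE + "/ItemStats", params=params, timeout=30)
resp.raise_for_status()

# OData v2 JSON responses are usually wrapped in a "d" envelope; newer services use "value".
payload = resp.json()
items = payload.get("d", {}).get("results", payload.get("value", []))
for item in items:
    print(item)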

Related

Has anyone displayed a Salesforce Dashboard component on a WordPress site? If so, how?

I work for a nonprofit which helps disabled military veterans. We have all our participants register with us, using Salesforce as the repository of their registrations. We have dashboard components in Salesforce Lightning which total up the number of active participants we have. I would like to display that component on our WordPress site, but I have never done anything like that before. I was hoping to find someone who has done something similar and can offer some direction on how to go about it.
I tried looking up WordPress plugins which integrate with Salesforce. Most seem to be geared towards sending registrations back and forth, not displaying information. From a little bit of research, it seems like coding might need to be involved. Maybe using a REST API with a POST request which sends the data through an HTTP URI? But my understanding is that this would require WordPress to act as an API. I am sure there are gaps in my logic.
I don't have an extensive amount of programming experience but am willing to learn. I have taken a few Java and JavaScript classes in school.
I have not attempted this yet. I am just looking for feedback and direction.
A few options here, in no specific order...
Do WordPress users have real Salesforce accounts, or is their data simply stored in SF? Ask your Salesforce admin if there's a "customer community" configured (if your SF org is really old they might refer to it as a customer portal). Communities offer a nice way of exposing SF to people who don't need full SF user licenses. Think of collaborating with real SF users on "My Cases", viewing reports & dashboards... But for this you'd really need people logged in to SF, so it won't work if you want something anonymous. Some more info
Another option might be using Sites (Visualforce pages that expose SF data to guest users). Think of displaying a product catalog, FAQ, web-to-lead form or some other generic "contact us" page that's anonymous. So if you have an SF developer (or an admin with good copy-paste skills) you could use some Visualforce charts. They can be 100% coded (like this) or fed data from a report (like this), so it's simpler for an admin to change the report filters or something without really writing code. I'm not sure if the simple route will work on a Site; there are some old answers that say "no", so you might have to try it out. Worst case you'd need Apex code (or JavaScript) to query SF for results and display them. Then display that SF Site page as an <iframe> in WordPress.
A slight twist on the Sites option: do you use Chatter (a bit like Twitter inside SF)? There's a way to take a snapshot of a report when a milestone has been met and post it to Chatter ("congrats on hitting X participants"). And you can embed feeds on Visualforce pages too. Docs
Which SF edition are you on (Group/Professional/Enterprise...)? If you have API access to Salesforce you could query the info yourself from WordPress and display it using whatever charting library is easiest for you (Google Charts, Flot...). There are tons of examples of how to connect to SF from PHP (or maybe you could cannibalize a WP plugin). Technically it's one POST message to log in to SF and one GET to run a query (something as simple as SELECT COUNT() FROM Contact WHERE isActive__c = true? - there's a rough sketch of this flow below).
That'd be more or less everything in terms of pulling data out of Salesforce. If you have API access enabled you can slice & dice it how you want, extract data with raw PHP code or use some middleware, but the overall idea doesn't change. Write queries yourself or use the "Analytics API" to access report results (so your administrator has the power to change the report without coding)...
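To make the API option above a bit more concrete, here is a rough sketch of that "one POST to log in, one GET to run a query" flow (shown in Python rather than PHP just for illustration; the Connected App credentials and the isActive__c field are placeholders):

import requests

# Assumes a Connected App with the username-password OAuth flow enabled.
# All credentials below are placeholders.
auth = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "password",
        "client_id": "YOUR_CONSUMER_KEY",
        "client_secret": "YOUR_CONSUMER_SECRET",
        "username": "api.user@example.org",
        "password": "password+security_token",
    },
    timeout=30,
).json()

headers = {"Authorization": "Bearer " + auth["access_token"]}
# isActive__c is a hypothetical custom field -- use whatever marks an active participant.
soql = "SELECT COUNT() FROM Contact WHERE isActive__c = true"
result = requests.get(
    auth["instance_url"] + "/services/data/v58.0/query",
    params={"q": soql},
    headers=headers,
    timeout=30,
).json()

# For a COUNT() query the record count comes back in totalSize.
print("Active participants:", result["totalSize"])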
So how about pushing? SF could notify you about the current participant count, at scheduled intervals or even in realtime. That'd be "just" raw data though; you'd have to write the visualisation yourself.
Plenty of options here:
workflow rules (code-free) send an XML message to a specified URL, so you'd need a WP page that can "capture" the result. It could be sent on creation of a new record or update of an existing one. It won't give you totals; it'd be data related to that particular record, so you'd have to build a kind of +1 / -1 counter... Or if you use a report + analytic snapshot (a helper object to store report results) and have a workflow on that - that could be really close to what's needed.
a scheduled Apex job to run some queries and send the results to you. Again, you'd need a WP URL that can be called from SF.
if there's a CometD plugin for WordPress, you should look at the Salesforce Streaming API, Platform Events or (newer and even simpler to configure) Change Data Capture. Basically you "subscribe" to a topic (an SF query), and whenever SF data changes in a way that would change the results of the query, it pushes the results to you. It's almost realtime. There's too much to write about them here; perhaps it's best if you click through some Trailheads - SF's self-paced training courses:
https://trailhead.salesforce.com/en/content/learn/modules/api_basics/api_basics_streaming
https://trailhead.salesforce.com/en/content/learn/modules/change-data-capture
https://trailhead.salesforce.com/en/content/learn/modules/platform_events_basics

What's the best Clockify API endpoint to get the time entries of (grouped by) saved reports?

Asking here after first asking Clockify support.
I'm trying to extend some of Clockify's capabilities to create extra reporting for our clients.
I've been playing with the API, specifically the endpoint /reports/{reportsId}.
• My goal:
Get all the time entries of a specific "saved report" (usually saved by our Project Managers)
• What I EXPECT from "/reports/{reportsId}":
To get all the info and entities (users, time entries, projects, etc.) regarding only that particular reportId
• What I GET from "/reports/{reportsId}":
Lots of info regarding the whole workspace; I only see summaryReport as more "specific to the saved report itself"...
• Questions:
1- Is this the correct behavior?
2- How do you filter down the time entries of specific reports in URLs like https://clockify.me/bookmarks/BOOKMARK_HASH_HERE ?
Do you only call "/reports/{reportsId}" and filter down on the client side? (It seems that way to me, from exploring the Network tab.)
If that's the way, what's the point of calling the report endpoint? Only for the summaryReport object?
3- Is "/reports/{reportsId}" the best endpoint I can use to reach my goal? ...or which approach would you recommend?
summaryReport.timeEntries will contain all the individual time entries from that particular report. Each entry has a user, project, client, time, etc. Grouping by project is done on the client side.
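For illustration, a minimal sketch of pulling those entries out (Python with requests; the base URL and report id are placeholders you'd take from your own workspace, and Clockify expects the API key in an X-Api-Key header):

import requests

BASE = "https://api.clockify.me/api"    # assumed base URL; check the docs for your workspace
REPORT_ID = "YOUR_SAVED_REPORT_ID"
headers = {"X-Api-Key": "YOUR_API_KEY"}

resp = requests.get(BASE + "/reports/" + REPORT_ID, headers=headers, timeout=30)
resp.raise_for_status()
report = resp.json()

# summaryReport.timeEntries holds the individual entries for this saved report.
entries = report.get("summaryReport", {}).get("timeEntries", [])
for entry in entries:
    # Field names below are illustrative; inspect one entry to see the exact shape.
    print(entry.get("project"), entry.get("user"), entry.get("timeInterval"))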
I'm not sure I fully understand your specific problem though. Are you suggesting the entries you get from the report endpoint do not belong to the given report?

How to integrate Recombee with ASP.NET

How do I integrate Recombee with ASP.NET to display results in a list or grid view, as well as connect the Book-Crossing dataset to it?
Recombee provides a .NET SDK (https://github.com/recombee/net-api-client).
The GitHub page contains an example of how to upload an item catalog (your dataset) to Recombee via the SDK. It also shows how recommendations are requested.
By default the Recombee API returns the IDs of the recommended items, but if you specify the parameter returnProperties=true in the recommendation requests, it will also return the items' properties, which you can use to fill your list/grid view.
Another source of information is the Getting Started section of the Recombee docs.
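The question is about ASP.NET, but just to illustrate the flow described above, here is a minimal sketch using Recombee's Python client (the .NET SDK mirrors these calls almost one-to-one; the database ID, token and sample book are placeholders):

from recombee_api_client.api_client import RecombeeClient
from recombee_api_client.api_requests import AddItemProperty, SetItemValues, RecommendItemsToUser

# Placeholders: use your own database ID and private token from the Recombee admin UI.
# Depending on the client version you may also need to pass a region argument.
client = RecombeeClient("your-database-id", "your-private-token")

# Define a catalog property and upload one item (values taken from the Book-Crossing
# dataset purely as an example).
client.send(AddItemProperty("title", "string"))
client.send(SetItemValues(
    "isbn-0195153448",
    {"title": "Classical Mythology"},
    cascade_create=True,  # create the item if it does not exist yet
))

# Request 5 recommendations; return_properties=True also returns item properties
# so the response can populate a list/grid view directly.
response = client.send(RecommendItemsToUser("user-42", 5, return_properties=True))
for recomm in response["recomms"]:
    print(recomm["id"], recomm["values"].get("title"))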

Best approach for fetching news from websites?

I have a function which web-scrapes all the latest news from a website (approximately 10 news items; the exact number is up to the website). Note that the news items are in chronological order.
For example, yesterday I got 10 news items and stored them in the database. Today I get 10 items, but 3 of them were not there yesterday (7 items stayed the same, 3 are new).
My current approach is to extract each news item until I find an old one (the first of the 7 unchanged items), then stop extracting, update the field "lastUpdateDate" of the old items, and add the new items to the database. I think this approach is somewhat complicated and it takes time.
Actually I'm getting news from 20 websites with the same content structure (Moodle), so each run takes about 2 minutes, which my free host doesn't support.
Is it better to delete all the news and then extract everything from scratch (this actually burns through a huge number of ID values in the database)?
First, check to see if the website has a published API. If it has one, use it.
Second, check the website's terms of service, which may specifically and explicitly disallow scraping the website.
Third, look at a module in your programming language of choice that handles both the fetching of the pages and the extraction of the content from the pages. In Perl, you would start with WWW::Mechanize or Web::Scraper.
Whatever you do, don't fall into the trap that so many who post to Stack Overflow fall into: fetching the web page and then trying to parse the content themselves, most often with regular expressions, which are an inadequate tool for the job. Browse the SO tag html-parsing for tales of sorrow from those who have tried to roll their own HTML parsing instead of using existing tools.
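To make that concrete, here is a minimal sketch using existing tools (Python with requests and BeautifulSoup; the CSS selectors are hypothetical and need adjusting to the actual Moodle markup, and the loop stops at the first already-seen item, as in the approach described in the question):

import requests
from bs4 import BeautifulSoup

def fetch_new_items(url, known_urls):
    """Fetch the news list and stop at the first item we already know about."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    new_items = []
    for node in soup.select("div.forumpost"):          # assumed item container
        link = node.select_one("a")["href"]
        title = node.select_one(".subject").get_text(strip=True)
        if link in known_urls:                          # items are newest-first,
            break                                       # so stop at the first old one
        new_items.append({"url": link, "title": title})
    return new_items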
It depends on your requirements: whether you want to show old news to the users or not.
For scraping, you can create a custom local script run as a cron job, which will grab the data from those news websites and store it in the database.
You can also check by subject whether an item already exists or not (a rough sketch follows below).
Finally, make a custom news block which shows the feed from the database.
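A rough sketch of that existence check (SQLite and the column names are only placeholders for whatever database your host provides):

import sqlite3

conn = sqlite3.connect("news.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS news (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        subject TEXT UNIQUE,
        url TEXT,
        last_update_date TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")

def upsert_item(subject, url):
    # If the subject already exists, just refresh last_update_date;
    # otherwise insert a new row. This avoids deleting and re-importing
    # everything (and burning through IDs).
    updated = conn.execute(
        "UPDATE news SET last_update_date = CURRENT_TIMESTAMP WHERE subject = ?",
        (subject,),
    ).rowcount
    if not updated:
        conn.execute("INSERT INTO news (subject, url) VALUES (?, ?)", (subject, url))
    conn.commit()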

Searching and indexing weighted tags / category ratings, ideally in the Plone catalog

My use case is that content objects can have weighted tags:
ArticleA: java 5, MySQL 3, php 1
ArticleB: crypto 3, MySQL 2
ArticleC: plone 5, networking 1, security 4, agile 1
ArticleD: plone 4, MySQL 3, php 3
...
The list of tags is user-extendable; the range of values is fixed, e.g. 1 - 5.
Now: how can I answer the following questions (ideally using portal_catalog)?
show all Articles that are tagged with java>2 and MySQL>3
what's the average value for MySQL?
what's the highest-rated plone Article?
show all Articles that contain 'foobar' and are about plone
Possible solutions that come to my mind or were suggested so far are:
go SQL
create extra 'rating' content types that are indexed in a separate catalog (pretty much like references)
encode the rating into 'java3', 'java4', 'java5', stick them into a KeywordIndex and check if AdvancedQuery can search them [update: yes, Between('Subject','plone3','plone5') works - see the sketch after this question]
write a custom PluginIndex
I guess, though, that others have had the same task before. Any ideas on how to (best) move forward on this?
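A minimal sketch of the KeywordIndex + AdvancedQuery route from the update above (assuming Products.AdvancedQuery is installed, single-digit ratings so the lexicographic range works, and the stock Subject index as the KeywordIndex):

from Products.AdvancedQuery import Between, Eq
from Products.CMFCore.utils import getToolByName

def encode_tags(weighted_tags):
    # {'java': 5, 'MySQL': 3} -> ['java5', 'MySQL3'], ready for the KeywordIndex
    return ['%s%d' % (tag, value) for tag, value in weighted_tags.items()]

catalog = getToolByName(context, 'portal_catalog')  # 'context' is any content object in the site

# "Show all Articles tagged with java > 2 and MySQL > 3" then becomes:
query = (Eq('portal_type', 'Article')
         & Between('Subject', 'java3', 'java5')
         & Between('Subject', 'MySQL4', 'MySQL5'))
brains = catalog.evalAdvancedQuery(query)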
This problem cannot be solved with Plone out-of-the-box. It would be possible to search for the different aspects using separate searches and then do some filtering and aggregation on the application side with custom coding... this might be tricky and inefficient. You may look into the SOLR integration with Plone (collective.solr). SOLR should support most of the functionality out of the box; in particular, faceted search is a built-in feature of SOLR that you get for free. However, SOLR is another brick in your setup and might be oversized for smaller sites. In addition, the SOLR integration and the SOLR buildout recipe have always appeared a bit fragile.
