Moving a Plone site to Radiant CMS

I've got a large Plone site that I'm moving to Radiant (I love Plone, but it got unusably slow as the site grew). I need a little help with some of the equivalences.
How do I do user permissions, signup, login, etc? I'd love to be able to have content that's:
Viewable by anyone (status "Published")
Viewable by authenticated users (status "Semi-published")
Viewable by "staff" (status "Restricted")
The Plone content has a few attributes that I don't find in Radiant, specifically "Creators" and "Contributors". I could conflate them down to a list of authors if I need to, but Radiant seems to want "author" to be the login that instantiated the content. How would I go about extending the page model to hold them?
How do I do site search? I'd love to be able to search by either free text or by assigned keywords (which come from a particular taxonomy, as it turns out).
The biggest issue is transferring content. I can ftp most of the content out of the Plone site. For HTML documents that'll mean that I've got files that look like:
id: a-banking-system-we-can-trust
title: A Banking System We Can Trust
excludeFromNav: False
subject: Alternate economy
description: Turn all financial firms into mutual funds.
contributors: Forbes
creators: Laurence J. Kotlikoff
Edward Leamer
effectiveDate: None
expirationDate: None
language:
rights:
creation_date: 2009/05/05 21:01:58.795 GMT-4
modification_date: 2009/05/05 21:06:39.695 GMT-4
relatedItems:
allowDiscussion: None
Content-Type: text/html
<h1>A Banking System We Can Trust</h1>
How might I take a whole (Linux) directory tree full of files like that (as well as some images and PDFs) and turn them into Radiant content, complete with the correct metadata? (The metadata is the first few lines of each file when you fetch it over FTP.)
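For the parsing half, here is a minimal sketch in PHP (any language would do). It assumes the headers run up to the Content-Type line, that a line without a colon continues the previous header (like the second creator above), and that the export path and the final output step are placeholders to fill in:

<?php
// A rough converter: walk the export tree, split each HTML document into
// its metadata headers and body, and hand them to whatever loads Radiant.

function parse_plone_export(string $path): array
{
    $lines = file($path, FILE_IGNORE_NEW_LINES);
    $meta = [];
    $lastKey = null;
    $bodyStart = count($lines);

    foreach ($lines as $i => $line) {
        if (preg_match('/^([A-Za-z_][\w-]*):\s*(.*)$/', $line, $m)) {
            $lastKey = $m[1];
            $meta[$lastKey] = $m[2];
            if (strcasecmp($lastKey, 'Content-Type') === 0) {
                $bodyStart = $i + 1;   // the body begins after this header
                break;
            }
        } elseif ($lastKey !== null) {
            $meta[$lastKey] .= "\n" . $line;   // continuation line
        }
    }

    return [
        'meta' => $meta,
        'body' => implode("\n", array_slice($lines, $bodyStart)),
    ];
}

$tree = new RecursiveIteratorIterator(
    new RecursiveDirectoryIterator('/path/to/plone-export', FilesystemIterator::SKIP_DOTS)
);
foreach ($tree as $file) {
    // HTML documents start with an "id:" header; images and PDFs don't.
    if (!$file->isFile() || file_get_contents($file->getPathname(), false, null, 0, 3) !== 'id:') {
        continue;
    }
    $doc = parse_plone_export($file->getPathname());
    // Placeholder: emit YAML/CSV here, or insert into Radiant's pages table.
    printf("%s | %s\n", $doc['meta']['id'], $doc['meta']['title'] ?? '');
}

From there it is a short step to emit something an import script on the Radiant side could consume.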

I recently used a product called ilrt.contentmigrator to migrate a Plone 2.x site to Plone 4.x; it was pretty simple. You really should check out Plone 4, the speed difference is significant!

Related

How to Limit Access in an Amazon S3 Bucket to a Specific Folder Containing Course Information Through WooCommerce

Rookie S3 user here, looking to troubleshoot a problem I encountered while helping some friends with their business. Their business revolves around selling courses; they use WooCommerce and attach course files through WordPress. Each course centers on a live video call, so the WooCommerce product initially holds the details for the upcoming call, and afterward additional audio and transcripts are added to the product. The problem is that people who bought the course before the call don't receive these files unless permission is granted manually.
As this is redundant and troublesome, my thought was to change the purchase to instead deliver a link into an Amazon S3 bucket labeled "courses", giving the buyer access to a specific folder within it. Ideally, this link would let them see new files as they go live, and it would also shrink the amount of data on the website, which is hosted on a dedicated server (saving some $$$ on hosting fees: two birds, one stone).
The catch is that I'm a complete novice at this style of coding, so I'm unsure how to do it, though I do think it's possible, given that an answer may already be out there or I can bull and jam my way through a section of code. The reason I want courses as folders inside one bucket rather than individual buckets is that the site already has nearly 200 courses; converting those would blow past the 100-bucket limit, in addition to being an exercise in repetition. Any advice or help would be greatly appreciated, thanks!
If I understand you correctly, you want to host content on S3, but want to achieve some degree of access control on that content.
The most straightforward way to do this, the one that involves minimal S3 integration, is to presign an S3 URL for the user. The presigned URL would be good for a limited time and could be generated by your WordPress site (which holds the AWS access credentials) directly before redirecting the user to it.
https://docs.amazonaws.cn/zh_cn/aws-sdk-php/guide/latest/service/s3-presigned-url.html explains more about this from a PHP perspective, which I'm guessing is the right lens for you.
This allows some modicum of access control (the users can still share the document after they've accessed it, but at least it's not just public).
If you don't need access control, you can make the S3 object public and omit the signing altogether.
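For a concrete shape, here is a minimal sketch using the AWS SDK for PHP v3; the bucket and key names are made up, and in practice you would first check that the logged-in WooCommerce user actually purchased the course:

<?php
// Sketch: presign a GET for one object and redirect the buyer to it.
// Assumes AWS credentials come from the SDK's default chain (environment
// variables, shared config file, or an instance profile).
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'version' => 'latest',
    'region'  => 'us-east-1',
]);

// Hypothetical bucket/key layout: one folder per course.
$cmd = $s3->getCommand('GetObject', [
    'Bucket' => 'courses',
    'Key'    => 'course-42/transcript.pdf',
]);

// The URL stops working after 20 minutes.
$presigned = $s3->createPresignedRequest($cmd, '+20 minutes');

header('Location: ' . (string) $presigned->getUri());
exit;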

Has anyone displayed a Salesforce Dashboard component on a WordPress site? If so, how?

I work for a nonprofit which helps disabled military veterans. We have all our participants register with us, using Salesforce as the repository of their registrations. We have dashboard components in Salesforce Lightning which total up the number of active participants we have. I would like to display that component on our WordPress site, but I have never done anything like that before. I was hoping to find someone who has done something like this and can offer some direction on how to go about it.
I tried looking up WordPress plugins which integrate with Salesforce. Most seem to be geared towards sending registrations back and forth, not displaying information. From a little bit of research, it seems like coding might be involved. Maybe a REST API with a POST option which sends the data through an HTTP URI? But my understanding is that this would require WordPress to act as an API. I am sure there are gaps in my logic.
I don't have an extensive amount of programming experience but am willing to learn. I have taken a few Java and JavaScript classes in school.
I have not attempted this yet. I am just looking for feedback and direction.
A few options here, in no specific order...
Do WordPress users have real Salesforce accounts, or is their data simply stored in SF? Ask your Salesforce admin if there's a "customer community" configured (if your SF org is really old they might refer to it as a customer portal). Communities offer a nice way of exposing SF to people who don't need full SF user licenses. Think collaborating with real SF users on "My Cases", viewing reports & dashboards... But for this you'd really need people logged in to SF, so it won't work if you want something anonymous. Some more info
Another option might be using Sites (Visualforce pages that expose SF data to guest users). Think displaying a product catalog, FAQ, web-to-lead form or some other generic "contact us" page that's anonymous. So if you have an SF developer (or an admin with good copy-paste skills) you could use some Visualforce charts. They can be 100% coded (like this) or fed data from a report (like this), so it's simpler for the admin to change the report filters or something without really writing code. Not sure if the simple route will work on a Site; there are some old answers that say "no", so you might have to try it out. Worst case you'd need Apex code (or JavaScript) to query SF for results and display them. Then display that SF Site page as an <iframe> in WordPress.
A slight twist on the Sites option: do you use Chatter (a bit like Twitter inside SF)? There's a way to take a snapshot of a report when a milestone has been met and post it to Chatter ("congrats on hitting X participants"). And you can embed feeds on Visualforce pages too. Docs
Which SF edition are you on (Group/Professional/Enterprise...)? If you have API access to Salesforce you could query the info yourself from WordPress and display it using whatever charting library is easiest for you (Google Charts, Flot...). There are tons of examples of how to connect to SF from PHP (or maybe you could cannibalize a WP plugin). Technically it's one POST message to log in to SF and one GET to run a query (something as simple as SELECT COUNT() FROM Contact WHERE isActive__c = true? See the sketch after this list.)
That'd be more or less everything in terms of pulling data out of Salesforce. If you have API access enabled you can slice & dice it how you want, extract data with raw PHP code or use some middleware, but the overall idea doesn't change. Write queries yourself or use the "Analytics API" to access report results (so your administrator has the power to change the report without coding)...
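To make the POST-then-GET idea concrete, here is a rough sketch in PHP using the OAuth username-password grant; the connected-app key/secret, login, and the isActive__c field are placeholders for whatever your org actually uses:

<?php
// Sketch: one POST to log in, one GET to run the query, no SDK needed.
$auth = json_decode(file_get_contents(
    'https://login.salesforce.com/services/oauth2/token',
    false,
    stream_context_create(['http' => [
        'method'  => 'POST',
        'header'  => 'Content-Type: application/x-www-form-urlencoded',
        'content' => http_build_query([
            'grant_type'    => 'password',
            'client_id'     => 'YOUR_CONNECTED_APP_KEY',
            'client_secret' => 'YOUR_CONNECTED_APP_SECRET',
            'username'      => 'integration@example.org',
            'password'      => 'PASSWORD_AND_SECURITY_TOKEN',
        ]),
    ]])
), true);

$soql = 'SELECT COUNT() FROM Contact WHERE isActive__c = true';
$result = json_decode(file_get_contents(
    $auth['instance_url'] . '/services/data/v52.0/query?q=' . rawurlencode($soql),
    false,
    stream_context_create(['http' => [
        'header' => 'Authorization: Bearer ' . $auth['access_token'],
    ]])
), true);

// COUNT() queries carry the number in totalSize.
echo 'Active participants: ' . $result['totalSize'];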
So how about pushing? SF could notify you about the current participant count, at scheduled intervals or even in realtime. That'd be "just" raw data though; you'd have to write the visualisation yourself.
Plenty of options here
workflow rules (code-free): sends an XML message to a specified URL, so you'd need a WP page that can "capture" the result. Could be sent on creation of a new record or update of an existing one. Won't give you totals; it'd be data related to that particular record, so you'd have to build a kind of +1 / -1 counter... Or if you use a report + analytic snapshot (a helper object to store report results) and have a workflow on that, that could be really close to what's needed.
scheduled Apex job to run some queries and send the results to you. Again, you'd need a WP URL that can be called from SF.
if there's a CometD plugin for WordPress you should look at the Salesforce Streaming API, Platform Events or (newer and even simpler to configure) Change Data Capture. Basically you "subscribe" to a topic (an SF query) and whenever SF data changes in a way that would change the results of the query, SF pushes the results to you. It's almost realtime. Too much to write about them here; perhaps best if you click through some trailheads, SF's self-paced training courses:
https://trailhead.salesforce.com/en/content/learn/modules/api_basics/api_basics_streaming
https://trailhead.salesforce.com/en/content/learn/modules/change-data-capture
https://trailhead.salesforce.com/en/content/learn/modules/platform_events_basics

Update information on a MediaWiki page from an RSS feed?

I administer a MediaWiki for a school group. We have a website in which students complete projects for virtual rewards. I want to put counters on a page of my wiki with statistical information (cumulative exp/coins, assignments completed, most productive student, etc.) about each of the seven groups that the students are divided into. It would be simple enough if the two sites were hosted on the same server, but they are not. I figure that an RSS feed with the statistical information may be a good way to get info from the website server to the wiki server. How would I reference the information from the RSS feed in the wiki page?
Just to make sure my idea is clear, I would put in the feed something along the lines of:
[ATLAS]
exp=15000
coins=7500
eva=350
ip=150
dmg=500
[CERES]
exp=13000
;and so on
I would like to reference that in the wiki page. Is it doable?
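One possible shape for this, sketched with the field names from the feed above: serve the stats from the website server as flat JSON (an endpoint like the one below), then pull the values into the wiki page with an extension built for exactly this, such as External Data, whose #get_web_data / #external_value parser functions can read a URL returning JSON.

<?php
// stats.php on the website server: returns one flat JSON object per group,
// e.g. stats.php?group=ATLAS -> {"exp":15000,"coins":7500,...}
// The numbers here are hard-coded from the question; in reality you would
// query your own database for them.

$groups = [
    'ATLAS' => ['exp' => 15000, 'coins' => 7500, 'eva' => 350, 'ip' => 150, 'dmg' => 500],
    'CERES' => ['exp' => 13000, 'coins' => 0, 'eva' => 0, 'ip' => 0, 'dmg' => 0],
    // ...and the other five groups
];

$group = $_GET['group'] ?? 'ATLAS';

header('Content-Type: application/json');
echo json_encode($groups[$group] ?? new stdClass());

A plain RSS feed would work too, but flat JSON keeps the wiki-side mapping trivial.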

Simple & Legitimate way to Cloak a URL?

I am not a professional with websites - just an amateur DIY dabbler, so apologies in advance if this is rather simplistic.
I have three Wordpress sites. For simplicity, let's say they are widgets.com, blue-widgets.com and red-widgets.com.
With Google AdWords, this works well as I send all searches for 'red widgets' to red-widgets.com, searches for 'blue widgets' to blue-widgets.com and everything else to the generic widgets.com.
I am now targeting the Chinese market using the AdWords equivalent from the main search engine in China, which is Baidu.com.
Whereas AdWords is pay-as-you-go and it doesn't matter which site you send the traffic to, Baidu is hard work. Companies outside China need around $3,600 pre-payment. For that, you are only able to promote one website. If I wanted to promote all three, I would have to set up three accounts and send them $10,800 (which is more credit than I am likely to spend with them in several years!)
So I have set up an account just for widgets.com. JavaScript redirects are specifically disallowed.
What I would like to do is to set up third level domains for red.widgets.com and blue.widgets.com and have them display the home pages for red-widgets.com and blue-widgets.com respectively.
Is there a simple way that I could achieve this and how?
I wonder if you can use many URLs, such as the following:
widgets.com
widgets.com?red_widget=1
widgets.com?blue_widget=1
If you can use such URLs, you should be able to handle the redirect with a small piece of PHP. Here is an example that redirects the initial URL to a new one:
<?php
// Send visitors who arrive with ?red_widget=1 on to the red site.
if (isset($_GET['red_widget']) && $_GET['red_widget'] === '1') {
    header('Location: http://red-widget.example.com/');
    exit;
}
Hope this helps.
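If you do point red.widgets.com and blue.widgets.com at the widgets.com server, the same trick can key off the Host header instead of a query string; a sketch, using the domains from the question:

<?php
// Sketch: route by subdomain instead of query string. Assumes DNS for
// red.widgets.com and blue.widgets.com points at the widgets.com server.
$host = $_SERVER['HTTP_HOST'] ?? '';

if (strpos($host, 'red.widgets.com') === 0) {
    header('Location: https://red-widgets.com/');
    exit;
}
if (strpos($host, 'blue.widgets.com') === 0) {
    header('Location: https://blue-widgets.com/');
    exit;
}
// Fall through: serve the normal widgets.com home page.

Note this is still an HTTP redirect, so the visitor ends up on red-widgets.com in the address bar; if the promoted domain has to stay visible, the subdomain would need to actually serve the other site's content (a reverse proxy), which is a bigger job.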

Best approach for fetching news from websites?

I have a function which scrapes all the latest news from a website (approximately 10 items; the exact number depends on the website). Note that the news items are in chronological order.
For example, yesterday I got 10 items and stored them in the database. Today I get 10 items, but 3 of them were not there yesterday (7 stayed the same, 3 are new).
My current approach is to extract each item until I find an old one (the first of the 7 unchanged items), then stop extracting, update the "lastUpdateDate" field of the old items, and add the new items to the database. I think this approach is somewhat complicated and it takes time.
I'm actually getting news from 20 websites with the same content structure (Moodle), so each run takes about 2 minutes, which my free host doesn't support.
Is it better to delete all the news and re-extract everything from scratch (though that burns through a huge range of ID numbers in the database)?
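For what it's worth, the stop-at-the-first-known-item approach can stay quite simple if each item has a stable URL or ID; here is a sketch with made-up selectors, table, and URL:

<?php
// Sketch of the stop-at-first-known-item approach. The XPath expression,
// column names, and feed URL are placeholders for whatever the 20 Moodle
// sites actually expose.
$pdo = new PDO('mysql:host=localhost;dbname=news', 'user', 'pass');

$html = file_get_contents('https://example-moodle.org/news');
$doc  = new DOMDocument();
@$doc->loadHTML($html);            // real-world markup is rarely valid; silence warnings
$xp   = new DOMXPath($doc);

$insert = $pdo->prepare('INSERT INTO news (url, title) VALUES (?, ?)');
$exists = $pdo->prepare('SELECT 1 FROM news WHERE url = ?');

// Items come newest first, so we can stop at the first one we already have.
foreach ($xp->query('//div[@class="news-item"]/a') as $a) {
    $url = $a->getAttribute('href');
    $exists->execute([$url]);
    if ($exists->fetchColumn()) {
        break;                      // everything older is already stored
    }
    $insert->execute([$url, trim($a->textContent)]);
}

Keying on the URL this way avoids both the full re-extract and the runaway IDs.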
First, check to see if the website has a published API. If it has one, use it.
Second, check the website's terms of service, which may specifically and explicitly disallow scraping the website.
Third, look at a module in your programming language of choice that handles both the fetching of the pages and the extraction of the content from the pages. In Perl, you would start with WWW::Mechanize or Web::Scraper.
Whatever you do, don't fall into the trap that so many who post to StackOverflow fall into: fetching the web page and then trying to parse the content themselves, most often with regular expressions, which are an inadequate tool for the job. Surf the SO html-parsing tag for tales of sorrow from those who have tried to roll their own HTML parsing systems instead of using existing tools.
It depends on whether you want to show old news to your users or not.
For scraping, you can create a custom local script, run as a cron job, which grabs the data from those news websites and stores it in the database.
You can also check by subject whether an item already exists or not.
Finally, make a custom news block which shows all the database entries.
