Django CMS: How to get placeholder HTML content?

Model:
from django.db import models
from cms.models.fields import PlaceholderField

class MyModel(models.Model):
    title = models.CharField(max_length=255)
    placeholder = PlaceholderField('my_model')
I want to retrieve the placeholder's HTML content in a variable, something like this:
MyModel.objects.get(id=1).placeholder.get_html_content()
How can I do that?

Something like this should work, given that you have access to the request object:
from django.template import RequestContext
from cms.plugin_rendering import render_placeholder
obj = MyModel.objects.get(id=1)
html = render_placeholder(obj.placeholder, RequestContext(request))
If you don't have access to the request object, you can use RequestFactory to mock one:
from django.conf import settings
from django.contrib.auth.models import AnonymousUser
from django.test.client import RequestFactory
def get_request(language=None):
    request_factory = RequestFactory()
    request = request_factory.get('/')
    request.session = {}
    request.LANGUAGE_CODE = language or settings.LANGUAGE_CODE
    # Needed for plugin rendering.
    request.current_page = None
    request.user = AnonymousUser()
    return request
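Putting the two pieces together, here is a minimal sketch. It simply reuses the render_placeholder() call from the first snippet; get_placeholder_html is just an illustrative helper name, not a django-cms API:
from django.template import RequestContext
from cms.plugin_rendering import render_placeholder

def get_placeholder_html(obj, language=None):
    # Hypothetical helper: build a mocked request (get_request() from above)
    # and render the object's placeholder to an HTML string.
    request = get_request(language)
    return render_placeholder(obj.placeholder, RequestContext(request))

# html = get_placeholder_html(MyModel.objects.get(id=1))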

Related

Django CMS plugin nesting --- child doesn't show up in the "structure" interface

I'm trying to create a Django CMS custom plugin that can assemble other plugins.
As far as I can tell, Django CMS can do this using Plugin nesting, and I've followed the examples to create a simple test case.
My expectation is that when you go into the "Structure" tab for a record whose model has a PlaceholderField containing the parent plugin, and you add that parent plugin, the pop-up for it should ALSO have some way to edit/create/add an instance of the child plugin. But it doesn't --- all I see are the fields for the parent plugin and NOTHING about the children.
Or am I missing the point of Plugin nesting entirely?
models.py:
from django.db import models
from cms.models import CMSPlugin
from cms.models.fields import PlaceholderField
from djangocms_text_ckeditor.models import AbstractText
class CustomPlugin(CMSPlugin):
    title = models.CharField('Title', max_length=200, null=False)
    placeholder_items = PlaceholderField('custom-content')
    renderer = models.CharField('Renderer', max_length=50, null=True, blank=True,
                                help_text='This is just to show that a custom renderer CAN be done here!')

class ChildTextPlugin(AbstractText):
    pass
cms_plugins.py:
from cms.plugin_base import CMSPluginBase
from cms.plugin_pool import plugin_pool
from django.utils.translation import ugettext as _
from .models import CustomPlugin, ChildTextPlugin
class CMSCustomPlugin(CMSPluginBase):
    model = CustomPlugin
    name = _('Custom Plugin')
    render_template = 'custom/custom_plugin.html'
    allow_children = True

    def render(self, context, instance, placeholder):
        context = super(CMSCustomPlugin, self).render(context, instance, placeholder)
        return context

class CMSChildTextPlugin(CMSPluginBase):
    model = ChildTextPlugin
    name = _('Child Text Plugin')
    render_template = 'custom/child_text_plugin.html'
    parent_classes = ['CMSCustomPlugin',]

    def render(self, context, instance, placeholder):
        # Note: super() must reference the plugin class itself, not the ChildTextPlugin model.
        context = super(CMSChildTextPlugin, self).render(context, instance, placeholder)
        return context

plugin_pool.register_plugin(CMSCustomPlugin)
plugin_pool.register_plugin(CMSChildTextPlugin)
... and the answer is "it was working all the time" --- the interface comes AFTER the screen described above is submitted --- the Custom Plugin entry will then have a "+" icon, and it's THERE that the children are found.
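A side note beyond the original thread: if you are on django CMS 3.x, the parent plugin can also whitelist which children it accepts via the child_classes attribute, the complement of the parent_classes used above. A minimal sketch:
class CMSCustomPlugin(CMSPluginBase):
    model = CustomPlugin
    name = _('Custom Plugin')
    render_template = 'custom/custom_plugin.html'
    allow_children = True
    # Only CMSChildTextPlugin may be nested inside this plugin (django CMS 3.x).
    child_classes = ['CMSChildTextPlugin']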

reading Word file from POST request in grails

I'm trying to write a Groovy script that will post a Word (docx) file to a REST handler on my grails application.
The request is constructed like so:
import org.apache.http.HttpEntity
import org.apache.http.HttpResponse
import org.apache.http.client.methods.HttpPost
import org.apache.http.entity.mime.MultipartEntity
import org.apache.http.entity.mime.content.FileBody
import org.apache.http.entity.mime.content.StringBody
import org.apache.http.impl.client.DefaultHttpClient
class RestFileUploader {

    def sendFile(file, filename) {
        def url = 'http://url.of.my.app';

        DefaultHttpClient httpclient = new DefaultHttpClient();
        HttpPost httppost = new HttpPost(url);

        MultipartEntity reqEntity = new MultipartEntity();
        FileBody bin = new FileBody(file);
        reqEntity.addPart("file", new FileBody((File)file, "application/msword"));

        def normalizedFilename = filename.replace(" ", "")
        reqEntity.addPart("fileName", new StringBody(normalizedFilename));

        httppost.setEntity(reqEntity);
        httppost.setHeader('X-File-Size', (String)file.size())
        httppost.setHeader('X-File-Name', filename)
        httppost.setHeader('Content-Type', 'application/vnd.openxmlformats-officedocument.wordprocessingml.document; charset=utf-8')

        println "about to post..."
        HttpResponse restResponse = httpclient.execute(httppost);
        HttpEntity resEntity = restResponse.getEntity();
        def responseXml = resEntity.content.text;
        println "posted..."

        println restResponse
        println resEntity
        println responseXml.toString()

        return responseXml.toString()
    }
}
On the receiving controller, I read in the needed headers from the request, and then try to access the file like so:
def inStream = request.getInputStream()
I end up writing out a corrupted Word file, and from examining the file size and the contents, it looks like my controller is writing out the entire request, rather than just the file.
I've also tried this approach:
def filePart = request.getPart('file')
def inStream = filePart.getInputStream()
In this case I end up with an empty input stream and nothing gets written out.
I feel like I'm missing something simple here. What am I doing wrong?
You will need to make two changes:
1. Remove the line httppost.setHeader('Content-Type', ...). File-upload HTTP POST requests must have the content type multipart/form-data, which HttpClient sets automatically when you construct a multipart HttpPost.
2. Change the line reqEntity.addPart("file", new FileBody((File)file, "application/msword")) to reqEntity.addPart("file", new FileBody(file)), or use one of the other non-deprecated FileBody constructors to specify a valid content type and charset (API link). This assumes that your file method parameter is of type java.io.File -- that isn't clear from your snippet.
Then, as dmahapatro suggests, you should be able to read the file with request.getFile('file').

Need a cq5 example

I am new to Adobe CQ5. I have gone through many online blogs and tutorials but could not get far. Can anyone provide an Adobe CQ5 application example, with a detailed explanation, that can store and retrieve data in the JCR?
Thanks in advance.
Here's a snippet for CQ 5.4 to get you started. It inserts a content page and text (as a parsys) at an arbitrary position in the content hierarchy. The position is supplied by a workflow payload, but you could write something that runs from the command line and use any valid CRX path instead. The advantage of making it a process step is that you get a session established for you, and the navigation to the insert point has been taken care of.
import java.text.SimpleDateFormat;
import java.util.Date;
import javax.jcr.Node;
import javax.jcr.RepositoryException;
import org.apache.sling.jcr.resource.JcrResourceConstants;
import org.apache.felix.scr.annotations.Component;
import org.apache.felix.scr.annotations.Properties;
import org.apache.felix.scr.annotations.Property;
import org.apache.felix.scr.annotations.Service;
import org.osgi.framework.Constants;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import com.day.cq.workflow.WorkflowException;
import com.day.cq.workflow.WorkflowSession;
import com.day.cq.workflow.exec.WorkItem;
import com.day.cq.workflow.exec.WorkflowData;
import com.day.cq.workflow.exec.WorkflowProcess;
import com.day.cq.workflow.metadata.MetaDataMap;
import com.day.cq.wcm.api.NameConstants;
@Component
@Service
@Properties({
    @Property(name = Constants.SERVICE_DESCRIPTION,
        value = "Makes a new tree of nodes, subordinate to the payload node, from the content of a file."),
    @Property(name = Constants.SERVICE_VENDOR, value = "Acme Coders, LLC"),
    @Property(name = "process.label", value = "Make new nodes from file")})
public class PageNodesFromFile implements WorkflowProcess {

    private static final Logger log = LoggerFactory.getLogger(PageNodesFromFile.class);
    private static final String TYPE_JCR_PATH = "JCR_PATH";
    public void execute(WorkItem workItem, WorkflowSession workflowSession, MetaDataMap args)
            throws WorkflowException {

        //get the payload
        WorkflowData workflowData = workItem.getWorkflowData();
        if (!workflowData.getPayloadType().equals(TYPE_JCR_PATH)) {
            log.warn("unusable workflow payload type: " + workflowData.getPayloadType());
            workflowSession.terminateWorkflow(workItem.getWorkflow());
            return;
        }
        String payloadString = workflowData.getPayload().toString();

        //the text to be inserted
        String lipsum = "Lorem ipsum...";

        //set up some node info
        SimpleDateFormat simpleDateFormat = new SimpleDateFormat("d-MMM-yyyy-HH-mm-ss");
        String newRootNodeName = "demo-page-" + simpleDateFormat.format(new Date());
        SimpleDateFormat simpleDateFormatSpaces = new SimpleDateFormat("d MMM yyyy HH:mm:ss");
        String newRootNodeTitle = "Demo page: " + simpleDateFormatSpaces.format(new Date());

        //insert the nodes
        try {
            Node parentNode = (Node) workflowSession.getSession().getItem(payloadString);
            Node pageNode = parentNode.addNode(newRootNodeName);
            pageNode.setPrimaryType(NameConstants.NT_PAGE);        //cq:Page
            Node contentNode = pageNode.addNode(Node.JCR_CONTENT); //jcr:content
            contentNode.setPrimaryType("cq:PageContent");          //or use MigrationConstants.TYPE_CQ_PAGE_CONTENT
                                                                    //from com.day.cq.compat.migration
            contentNode.setProperty(javax.jcr.Property.JCR_TITLE, newRootNodeTitle); //jcr:title
            contentNode.setProperty(NameConstants.PN_TEMPLATE,
                    "/apps/geometrixx/templates/contentpage");      //cq:template
            contentNode.setProperty(JcrResourceConstants.SLING_RESOURCE_TYPE_PROPERTY,
                    "geometrixx/components/contentpage");           //sling:resourceType
            Node parsysNode = contentNode.addNode("par");
            parsysNode.setProperty(JcrResourceConstants.SLING_RESOURCE_TYPE_PROPERTY,
                    "foundation/components/parsys");
            Node textNode = parsysNode.addNode("text");
            textNode.setProperty(JcrResourceConstants.SLING_RESOURCE_TYPE_PROPERTY,
                    "foundation/components/text");
            textNode.setProperty("text", lipsum);
            textNode.setProperty("textIsRich", true);
            workflowSession.getSession().save();
        }
        catch (RepositoryException e) {
            log.error(e.toString(), e);
            workflowSession.terminateWorkflow(workItem.getWorkflow());
            return;
        }
    }
}
I have posted further details and discussion.
A few other points:
- I incorporated a timestamp into the name and title of the content page to be inserted. That way, you can run many code and test cycles without cleaning up your repository, and you know which test was the most recently run. Added bonus: no duplicate file names, no ambiguity.
- Adobe and Day have been inconsistent about providing constants for property values, node types, and suchlike. I used the constants that I could find, and used literal strings elsewhere.
- I did not fill in properties like the last-modified date. In code for production I would do so.
- I found myself confused by Node.setPrimaryType() and Node.getPrimaryNodeType(). The two methods are only rough complements; the setter takes a string but the getter returns a NodeType with various info inside it.
- In my original version of this code, I read the text to be inserted from a file, rather than just using the static string "Lorem ipsum..."
Once you've worked through this example, you should be able to use the Adobe docs to write code that reads data back from the CRX.
If you want to learn how to write a CQ application that can store and query data from the CQ JCR, see this article:
http://scottsdigitalcommunity.blogspot.ca/2013/02/querying-adobe-experience-manager-data.html
It provides a step-by-step guide and walks you right through the entire process, including building the OSGi bundle using Maven.
From the comments above, I see a reference to a BND file. You should stay away from CRXDE for creating OSGi bundles and use Maven instead.

Scrapy - parsing all sub-pages of a given domain

I would like to parse kickstarter.com projects using scrapy, but can't figure out how to make the spider search projects that I don't explicitly specify under start_urls. I have the first part of the scrapy code figured out (I can extract the necessary information from one website), I just can't get it to do this for all projects under the domain kickstarter.com/projects.
From what I've read, I believe that parsing is possible (1) using links on the starting page (kickstarter.com/projects), (2) using links from one project page to jump to another project, and (3) using a site map (which I don't think kickstarter.com has) to locate webpages to parse.
I've spent hours trying each of these methods but I am getting nowhere.
I've used the scrapy tutorial code and built on it.
Here is the part that works so far:
from scrapy import log
from scrapy.contrib.spiders import CrawlSpider
from scrapy.selector import HtmlXPathSelector
from tutorial.items import kickstarteritem
class kickstarter(CrawlSpider):
    name = 'kickstarter'
    allowed_domains = ['kickstarter.com']
    start_urls = ["http://www.kickstarter.com/projects/brucegoldwell/dragon-keepers-book-iv-fantasy-mystery-magic"]

    def parse(self, response):
        x = HtmlXPathSelector(response)

        item = kickstarteritem()
        item['url'] = response.url
        item['name'] = x.select("//div[@class='NS-project_-running_board']/h2[@id='title']/a/text()").extract()
        item['launched'] = x.select("//li[@class='posted']/text()").extract()
        item['ended'] = x.select("//li[@class='ends']/text()").extract()
        item['backers'] = x.select("//span[@class='count']/data[@data-format='number']/@data-value").extract()
        item['pledge'] = x.select("//div[@class='num']/@data-pledged").extract()
        item['goal'] = x.select("//div[@class='num']/@data-goal").extract()
        return item
Since you're subclassing CrawlSpider, do not override parse. CrawlSpider's link crawling logic is contained within parse, which you really need.
As for the crawling itself, that's what the rules class attribute is for. I haven't tested it, but it should work:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.loader import XPathItemLoader
from scrapy.selector import HtmlXPathSelector
from tutorial.items import kickstarteritem

class kickstarter(CrawlSpider):
    name = 'kickstarter'
    allowed_domains = ['kickstarter.com']
    start_urls = ['http://www.kickstarter.com/discover/recently-launched']

    rules = (
        Rule(
            SgmlLinkExtractor(allow=r'\?page=\d+'),
            follow=True
        ),
        Rule(
            SgmlLinkExtractor(allow=r'/projects/'),
            callback='parse_item'
        )
    )

    def parse_item(self, response):
        xpath = HtmlXPathSelector(response)
        loader = XPathItemLoader(item=kickstarteritem(), response=response)

        loader.add_value('url', response.url)
        loader.add_xpath('name', '//div[@class="NS-project_-running_board"]/h2[@id="title"]/a/text()')
        loader.add_xpath('launched', '//li[@class="posted"]/text()')
        loader.add_xpath('ended', '//li[@class="ends"]/text()')
        loader.add_xpath('backers', '//span[@class="count"]/data[@data-format="number"]/@data-value')
        loader.add_xpath('pledge', '//div[@class="num"]/@data-pledged')
        loader.add_xpath('goal', '//div[@class="num"]/@data-goal')

        yield loader.load_item()
The spider crawls the pages of the recently launched projects.
Also, use yield instead of return. It's better to keep your spider's output a generator and it lets you yield multiple items/requests without making a list to hold them.
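For reference, tutorial/items.py is not shown in the question; a minimal sketch of what kickstarteritem might look like, with the field names inferred from the loader calls above:
from scrapy.item import Item, Field

class kickstarteritem(Item):
    # Hypothetical item definition; one Field per value populated by the spider.
    url = Field()
    name = Field()
    launched = Field()
    ended = Field()
    backers = Field()
    pledge = Field()
    goal = Field()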

Scala and html: download an image (*.jpg, etc) to Hard drive

I've got a Scala program that downloads and parses HTML. I got the links to the image files from the HTML; now I need to transfer those images to my hard drive. I'm wondering what the best Scala approach is.
my connection code:
import java.net._
import java.io._
import _root_.java.io.Reader
import org.xml.sax.InputSource
import scala.xml._
def parse(sUrl: String) = {
    var url = new URL(sUrl)
    var connect = url.openConnection

    var sorce: InputSource = new InputSource
    var neo = new TagSoupFactoryAdapter //load sUrl
    var input = connect.getInputStream

    sorce.setByteStream(input)
    xml = neo.loadXML(sorce)
    input.close
}
My blog
Then you may want to take a look at java2s. Although the solution is in plain Java, you can still adapt it to Scala syntax to "just use it".
An alternative option is to use sys.process, which is much cleaner:
import sys.process._
import java.net.URL
import java.io.File
object Downloader {
    def start(location: String): Unit = {
        val url = new URL(location)

        var path = url match {
            case UrlyBurd(protocol, host, port, path) => (if (path == "") "/" else path)
        }

        path = path.substring(path.lastIndexOf("/") + 1)

        url #> new File(path) !!
    }
}

object UrlyBurd {
    def unapply(in: java.net.URL) = Some((
        in.getProtocol,
        in.getHost,
        in.getPort,
        in.getPath
    ))
}
One way to achieve that is to collect the URLs of the images and request them from the server (open a new connection for each image URL and store the byte stream on the hard drive).
