I'm going to need to display a list of results as a response from a Telegram bot I'm working on, and I was wondering what the best way of doing that is...
I could "calculate" the number of spaces I need to make it look semi-normal, but I'd rather have a better solution, if there is one.
Thanks!
So, the closest solution I could find is using the <pre> tag inside the bot message (assuming it's sent with parse_mode=HTML), since the characters are then all the same width.
I probably won't end up using it myself, but that's my 2 cents...
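A minimal sketch of the idea, assuming a PHP bot (the token, chat id and column widths below are placeholders):

<?php
// Pad each row into fixed-width columns and wrap the block in <pre> so
// Telegram renders it in a monospaced font (parse_mode=HTML).
// BOT_TOKEN and the chat id are placeholders.
$chatId  = 123456789;
$results = [
    ['Alice', 42],
    ['Bob',   7],
];

$lines = [];
foreach ($results as $row) {
    // str_pad keeps the columns aligned because <pre> text is monospaced
    $lines[] = str_pad($row[0], 12) . str_pad((string) $row[1], 5, ' ', STR_PAD_LEFT);
}
$text = '<pre>' . htmlspecialchars(implode("\n", $lines)) . '</pre>';

file_get_contents('https://api.telegram.org/botBOT_TOKEN/sendMessage?' . http_build_query([
    'chat_id'    => $chatId,
    'text'       => $text,
    'parse_mode' => 'HTML',
]));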
I prefer generating the HTML table as a file, converting it to an image with some tool, and finally sending the image to the user via the sendPhoto method.
With this method I can create a beautiful, colored table with many styling options...
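A rough sketch of that flow in PHP (wkhtmltoimage is just one possible tool for the HTML-to-image step and is assumed to be installed; the token and chat id are placeholders):

<?php
// Render an HTML table to a PNG, then upload it to the user with sendPhoto.
$chatId = 123456789;

file_put_contents('table.html',
    '<table border="1" style="background:#fffbe6">
       <tr><th>Name</th><th>Score</th></tr>
       <tr><td>Alice</td><td>42</td></tr>
     </table>');

// Convert the HTML file to an image (wkhtmltoimage assumed available)
shell_exec('wkhtmltoimage table.html table.png');

// Send the image with sendPhoto as a multipart upload
$ch = curl_init('https://api.telegram.org/botBOT_TOKEN/sendPhoto');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POSTFIELDS     => [
        'chat_id' => $chatId,
        'photo'   => new CURLFile('table.png'),
    ],
]);
curl_exec($ch);
curl_close($ch);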
I have used a PHP library to generate a tabular structure, then used the <pre> tag to keep the spacing as it is, as suggested by trueicecold.
After trying Paw3 for a while, I found it's really amazing, but I have a few small questions about how to operate it:
How can I bulk edit HTTP headers instead of editing in a table one by one?
How can I fold some of the JSON text when the response is too long?
When I search in the response, is there any way to show the number of matches?
Many thanks.
Thanks for the kind words about Paw!
Unfortunately, none of the three things you've asked about is implemented yet.
How can I bulk edit HTTP headers instead of editing in a table one by one?
There's no way to bulk-edit headers yet. Instead, we recommend using environment variables as reusable presets. We'd like to add a batch-edit feature later.
How can I fold some of the JSON text when the response is too long?
There's no way to fold the JSON text yet. You could use the regular JSON tree view if you need to fold items. Same here: it's something we'd like to add to the text view too.
When I search in the response, is there any way to show the number of matches?
It's not displayed. It would be easy to add, though. I'll take note :)
We want to implement the following:
Generate a PDF with a template, which means setting values in AcroFields.
Create a big details table (the structure of the table is also in the template). In this process, the details will occupy more than one page.
If the details table spans multiple pages, the table header should also appear at the top of each new page.
We found some examples on the following websites:
http://kuujinbo.info/cs/itext_template1.aspx
http://kuujinbo.info/cs/itext_template2.aspx
But the details of that function are omitted there.
Add content; the code for _do_form_fields(), _get_transaction_details(), and _transaction_summary() is omitted, since those methods only return strings to add to ColumnText. ColumnText is smart; each call to Go() renders as much text as will fit on the current page and returns a status code that tells you: (1) how much text (to write) is remaining, and/or (2) how much space is still available on the page. On each iteration you add text to the current page, call ColumnText.HasMoreText() to inspect the status, and then call Document.NewPage() if necessary.
Has anyone been in the same situation before? We would appreciate it if you could offer some tips or suggestions.
Thank you.
best regards,
Cheng Gong
You are already making a mistake in the first step of your requirements.
You say "Generate PDF with Template,which means set value in AcroFields."
The first part is OK: you want to generate a PDF with a template. However, this doesn't mean setting values in AcroFields. That's only one option. It's the option you take if you consider PDF to be the digital equivalent of paper. The form is static: every coordinate is fixed. You just fill out data at the appropriate places. If the data doesn't fit the designated areas, you're out of luck. I already referred to chapter 6 of my book in a comment. You can also see how AcroForms work in a longer tutorial: https://www.youtube.com/watch?v=6YwDME0Fl1c (This tutorial is almost completely dedicated to creating a report from a data set.)
Another way to create a PDF from a template is by using the XML Forms Architecture. In this case (if you have a pure XFA form), your PDF is a container for XML. You can then inject XML data into this form and the form will adapt itself depending on the data. A one-page form can easily grow into a 20-page document when filled out. This is explained in this video: https://www.youtube.com/watch?v=h0wzj84tnmw (Note that the video dates from 2012. The product I present has since been finished and the results are much better now.)
An alternative to this approach is to create a template in HTML. I often refer to this solution as a poor man's XFA. This solution requires XML Worker. You can see an example in this video: https://www.youtube.com/watch?v=clWoDrEEl50
This is a general answer. I couldn't be more specific because your question isn't clear. You first need to make up your mind regarding the approach. Right now, you talk about AcroFields and at the same time about ColumnText. In the long tutorial, this is described as the hard way. See also the corresponding online samples. It is confusing that you're asking a very difficult question before asking the simple questions. Unless, of course, you already have the answers to those simple questions. If so, please share those answers.
I need some pointers on how to go about solving this problem:
I have more than 10K simple HTML web pages which all have the same format. When I say "same format", I mean that they all have the same h1 tag at the beginning, but with varying text, followed by a table, then followed by a link, etc. So the basic HTML skeleton of the 10K+ pages is the same; only the text keeps varying.
I have a way to iterate through all those 10K pages. However, I do not know how I can copy specific text from each page into an XLS/CSV file, column-wise. Once I can achieve this, I will import the spreadsheet into MySQL and do further processing.
I know PHP to a certain extent. So, this is what I can think of:
$html = file_get_contents("http://www.SomeWebsite.com/");
I can then use some regex to extract the data I need. However, I do not know how to handle redirects.
This is what I can think of, but is there anything better? Maybe an existing tool or a better scripting language?
You may use HTQL to extract the HTML content. It has Python and COM interfaces; see: http://htql.net/
To extract the <h1> tag, simply use "<h1>" as the query.
You could do this with PHP, though I recommend XPath instead of regular expressions.
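For example, a rough sketch (the URL, the XPath expressions and the CSV layout are assumptions about your pages; cURL's CURLOPT_FOLLOWLOCATION takes care of the redirects you mentioned):

<?php
// Sketch only: fetch a page (following redirects), pull out the <h1>, the first
// table's cells and the first link with XPath, and append them as one CSV row.
function fetchPage(string $url): string {
    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => true,   // handle HTTP redirects
        CURLOPT_MAXREDIRS      => 5,
    ]);
    $html = curl_exec($ch);
    curl_close($ch);
    return $html;
}

$csv  = fopen('pages.csv', 'a');
$html = fetchPage('http://www.SomeWebsite.com/page1.html'); // placeholder URL

$doc = new DOMDocument();
@$doc->loadHTML($html);            // suppress warnings from sloppy HTML
$xpath = new DOMXPath($doc);

$h1    = $xpath->evaluate('string(//h1[1])');
$cells = [];
foreach ($xpath->query('//table[1]//td') as $td) {
    $cells[] = trim($td->textContent);
}
$link = $xpath->evaluate('string(//a[1]/@href)');

fputcsv($csv, array_merge([$h1], $cells, [$link]));
fclose($csv);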
Personally I use Python with lxml and this webscraping library.
It could be a project well beyond my skills right now, but I've got around one full month to spend on it, so I think I can do it. What I want to build is this: gather news about a specific subject from various sources. Easy, right? Just get the RSS feeds and display them on a page. Well, I want something more advanced: duplicates removed and a customized presentation (that is, being able to define/change the format in which the news headlines are displayed).
I've played a bit with Yahoo Pipes and some other tools and I am facing two big problems:
Some sources don't provide RSS feeds. How do I create one?
What's the best method to find and remove duplicates? I thought about comparing the headlines and checking whether there is a match greater than, say, 50% (a rough sketch of that check follows below). Is that a good practice, though?
Please add any other things (problems, suggestions, whatever) I might not have considered.
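Something like this PHP sketch is the kind of 50% check I have in mind (purely illustrative; the threshold and the sample headlines are made up):

<?php
// Treat two headlines as duplicates when their similarity, as reported by
// similar_text(), exceeds a threshold (50% here, per the question above).
function isDuplicateHeadline(string $a, string $b, float $threshold = 50.0): bool {
    similar_text(strtolower($a), strtolower($b), $percent);
    return $percent >= $threshold;
}

var_dump(isDuplicateHeadline(
    'Mars rover finds signs of ancient water',
    'Signs of ancient water found by Mars rover'
));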
Duplication is a nasty issue. What I eventually ended up doing:
1. Strip out all HTML tags except for links (although I started out using regex, I got burned; I eventually moved to custom parsing to remove tags)
2. Strip out all whitespace
3. Case-desensitize
4. Hash all that with MD5.
Here's why you leave the link in:
A comment might be as simple as "Yes, this sucks". "Yes, this sucks" could be a common comment. BUT if the text "this sucks" is linked to different things, then it is not a duplicate comment.
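A rough PHP sketch of those four steps (using strip_tags here for brevity, even though, as noted above, I eventually moved to custom parsing):

<?php
// Fingerprint a comment: keep links, strip everything else, drop whitespace,
// lowercase, then hash with MD5.
function commentFingerprint(string $html): string {
    $keepLinks = strip_tags($html, '<a>');              // 1. strip all tags except <a>
    $noSpace   = preg_replace('/\s+/', '', $keepLinks); // 2. strip all whitespace
    $lower     = strtolower($noSpace);                  // 3. case-desensitize
    return md5($lower);                                 // 4. hash with MD5
}

// Same visible text, different link targets => different fingerprints
var_dump(commentFingerprint('Yes, <a href="http://a.example">this sucks</a>')
      === commentFingerprint('Yes, <a href="http://b.example">this sucks</a>'));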
Additionally, you will find that HTML tag escaping is weird with RSS feeds. You would think that a stray < in the text would be double-encoded as (I think) &amp;lt;.
But it is not. It is encoded as &lt;.
But so too are HTML tags: a <p> tag comes through as &lt;p&gt;.
I eventually copied all the known HTML tags as parsed by Mozilla Firefox and manually recognized those tags.
Creating an RSS feed from HTML is quite nasty, and I can only point you to services such as Spinn3r, which are fantastic at de-duplication and content extraction. These services typically use probability-based algorithms that are beyond me. I know of one provider that got away with regexing pages (they had to know that a certain page was MySpace-based or Blogger-based), but they did not perform admirably.
You might want to try using the YQL module to scrape a webpage that doesn't provide RSS. Here's a sample of a YQL statement to scrape HTML.
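For example (illustrative only: the URL and XPath are placeholders, and the public YQL endpoint is shown as it existed at the time):

<?php
// Ask YQL's "html" table for the <h1> elements of a page with no RSS feed.
$yql = 'select * from html where url="http://example.com/news" and xpath="//h1"';

$endpoint = 'http://query.yahooapis.com/v1/public/yql?'
          . http_build_query(['q' => $yql, 'format' => 'json']);

$response = json_decode(file_get_contents($endpoint), true);
print_r($response['query']['results'] ?? null);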
About duplicates, take a look at this pipe.
Customized presentation: if you want it truly customized, you'll have to manipulate the pipe results yourself, e.g. get them as JSON and manipulate them with JavaScript, or process them server-side.
I have no idea what I am doing, but I keep trying. I have been trying to find a way to add a dictionary search box to my school website for my 3rd grade class (7-8 year olds). Most of the dictionary sites are too complex and riddled with inappropriate advertisements. I found out about Google's dictionary the other day and have been trying to figure out how to create a custom search with it.
I asked for help here before and was able to get a script that passed a word to the dictionary and displayed the results in an iframe. That works OK, but it is a full page and I can't change the size of the page in the iframe.
I came across this
http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100&q=school&sl=en&tl=en&restrict=pr%2Cde&client=te
Where "school" is the word that is looked up.
However, I can't figure out how to style the results. Any ideas?
I suggest you don't use this URL (API), as it is against Google's policy. It violates the contract Google has with its providers; Google asked a developer who made a dictionary extension for Chrome to stop using the API.
The result comes back as JSON. You'll probably want something that can parse JSON, and then you can output the result in whatever form you like, based on the data in the result.
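For example, a rough PHP sketch (purely illustrative; the exact shape of the response isn't guaranteed, and note the other answer's warning about Google's policy):

<?php
// Fetch the JSONP shown in the question, strip the callback wrapper, decode
// what's left, and render it with your own markup/CSS.
$word = 'school';
$url  = 'http://www.google.com/dictionary/json?callback=dict_api.callbacks.id100'
      . '&q=' . urlencode($word) . '&sl=en&tl=en&restrict=pr,de&client=te';

$raw = file_get_contents($url);

// JSONP is just "callbackName( ...payload... )" - keep what's inside the parens
$start   = strpos($raw, '(');
$end     = strrpos($raw, ')');
$payload = substr($raw, $start + 1, $end - $start - 1);

$data = json_decode($payload, true);
if ($data === null) {
    exit('Could not decode the response - it may need extra cleanup first.');
}

// Once decoded, you control the markup entirely, so style it however you like.
echo '<div class="dictionary-result"><pre>'
   . htmlspecialchars(print_r($data, true))
   . '</pre></div>';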
You need to be a little conversant with JavaScript, since the results are sent back as a JavaScript object, i.e. the result is sent back as JSON text which you need to parse to retrieve the contents. To parse the contents you can use the JavaScript eval() function.