Can I copy all tables from a URL?

My task is to copy tables from a public website and format them later in Word. I have built a piece of software where I just enter two values and the table is displayed to me on a web page. Then I have to copy this table into Word.
I was wondering if there is an easier way to achieve this.
I would also like to know whether it is possible to store all the values I type in a TXT file or Excel sheet and programmatically copy the displayed web pages into Word.
Please help me and don't down-vote.
Okay, here are the detailed steps:
Open a webpage
Fill in a form with 4 fields
A new webpage opens based on the input you provide
Copy 2 tables from that webpage
Paste the 2 tables into MS Word 2007
Open the browser again and go back to the previous page
Enter new values into the webpage
Repeat all the steps
P.S. There are more than 700 tables to be copied each week.

I'm not sure this is what you need... anyway:
If you download the page (programmatically, of course) you can parse it as XML (I assume it's a well-formed XHTML file; otherwise you may have to use some dirty tricks to find all the tables). Then you can put all the data into Word by automation; you can even do all of this from a Word macro: download the HTML file, "parse" it to find the tables, and paste that text as HTML.
I would provide an example, but it can't really be language-agnostic.
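For what it's worth, here's a minimal sketch of the idea in Python (so not language-agnostic; the form URL and field names are hypothetical, and it assumes the requests, pandas, lxml and python-docx packages are installed). It reads the input values from a CSV file, submits the form, parses the tables on the result page, and appends the first two to a Word document:

    # Hypothetical sketch: batch-copy web tables into Word.
    # Assumes: pip install requests pandas lxml python-docx
    import csv
    import io
    import requests
    import pandas as pd
    from docx import Document

    FORM_URL = "http://example.com/report"  # hypothetical form endpoint

    doc = Document()
    with open("inputs.csv", newline="") as f:          # one row per form submission
        for row in csv.DictReader(f):                  # columns = the form's field names
            resp = requests.post(FORM_URL, data=row)   # submit the form values
            tables = pd.read_html(io.StringIO(resp.text))  # parse every <table> on the page
            for df in tables[:2]:                      # keep the first two tables
                t = doc.add_table(rows=1, cols=len(df.columns))
                for i, name in enumerate(df.columns):  # header row
                    t.rows[0].cells[i].text = str(name)
                for _, r in df.iterrows():             # data rows
                    cells = t.add_row().cells
                    for i, val in enumerate(r):
                        cells[i].text = str(val)
                doc.add_paragraph()                    # spacing between tables
    doc.save("tables.docx")

At the 700-tables-per-week volume mentioned above, keeping the input values in a CSV (or an Excel export) and looping like this avoids the copy-paste entirely.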

Related

Extracting a table from a webpage in Automation Anywhere

Is there a way to extract a table from a web page in Automation Anywhere after taking certain steps using the Web Recorder? The table does not appear directly; it appears after clicking a few controls after launching the URL.
The table I want to extract appears after logging in to the website and filtering with a search-criteria control.
I used the Web Recorder to log in and put the desired search criteria in a text field, and I want to extract the table now. When I use the Web Recorder, it launches the URL again and takes me back to the login page, which I don't want. I want the bot to stay on the page. Please help.
Also, what is the significance of the session name of an extracted table?
If you click on Advanced View, you will find an option at Step 5 to run the command using an existing IE window. Try entering the URL of the page with the table rather than the URL of the login page.
The extracted table is then accessed through the variable $Table Column(Index)$, with the index being the column number or column name.
You can also export directly using Object Cloning by choosing export-to-CSV in the selection criteria; you need to tick HTML InnerText in the search criteria as well.
An old question, but my experience has been that the Extract Data/Table commands are rather poor. Not only do they work only in IE, you cannot call them as standalone commands; they have to be called via a web recording.
Instead, I've found it much more useful to object-clone the initial element, grab the DOMXPath, and variablize that. Then throw it into a Loop While command and set the condition to finding at least one element (of the elements for the table you are trying to build). You can grab all sorts of useful info in the Object Clone command and then write it to a variable/table.
For example, given these two DOMXPaths:
//div[@id='updatable-standings']/div[1]/div[1]/div[2]/div[1]/table[1]/tbody[1]/tr[3]/td[2]/div[1]/span[2]
//div[@id='updatable-standings']/div[1]/div[1]/div[2]/div[1]/table[1]/tbody[1]/tr[4]/td[2]/div[1]/span[2]
I can create an incremental variable for the tr index (tr[3], tr[4], ...), call it $vTeamLoop$, and change my DOMXPath value in the Object Clone to be
//div[@id='updatable-standings']/div[1]/div[1]/div[2]/div[1]/table[1]/tbody[1]/tr[$vTeamLoop$]/td[2]/div[1]/span[2]
Ultimately, it is more steps than the Data/Table Extract command, but it is far less limited in scope.
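Outside of Automation Anywhere, the same incremental-XPath loop can be sketched in Python with lxml (the URL is hypothetical; the XPath is the one above with the row index substituted in):

    # Hypothetical sketch of the loop-until-no-element-found pattern.
    # Assumes: pip install requests lxml
    import requests
    from lxml import html

    tree = html.fromstring(requests.get("http://example.com/standings").text)
    XPATH = ("//div[@id='updatable-standings']/div[1]/div[1]/div[2]/div[1]"
             "/table[1]/tbody[1]/tr[{row}]/td[2]/div[1]/span[2]")

    row, values = 1, []
    while True:
        found = tree.xpath(XPATH.format(row=row))  # condition: at least one element found
        if not found:
            break                                  # no more rows in the table
        values.append(found[0].text_content().strip())
        row += 1
    print(values)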
Hope that helps.

How to get all rows from a page list and convert them to CSV using pxConvertResultsToCSV

I have a repeat grid layout whose source is a report definition. The grid displays twenty rows per page, so if there are thirty-three rows, there are two pages.
I have been given a task to export all of the grid's data to CSV, and I found the pxConvertResultsToCSV activity. It requires passing a PageList with the properties to convert. I use pgRepPgSubSectionMySectionListB.pxResults for this, but I have realized that the pxResults property contains only the first twenty elements of pgRepPgSubSectionMySectionListB. I must export all the rows to CSV. How can I achieve this? Thank you.
First, run your report by calling the pxRetrieveReportData activity of class Rule-Obj-Report-Definition in your activity.
Syntax: call Rule-Obj-Report-Definition.pxRetrieveReportData
It will ask for these parameters:
pyReportName: your report definition name
pyReportClass: the class of the report definition
pyPageName: any page name, for example ReportListExport; this page must be defined in Pages & Classes with class Code-Pega-List
After successful execution of this step, you will have ReportListExport.pxResults on the clipboard.
Now use this pxResults for the export.
There is one more activity for exporting your report to Excel:
call the pzViewExportToExcel activity after running your report, and keep ReportListExport.pyReportDefinition as the step page of that step.
This is the preferred one.
This question is a bit old now, so I'm sure the OP has solved the problem and moved on at this point, but for future viewers there is an easier way to solve this.
Pega includes a gadget called the "Record Editor" which can be used to display a report definition as an editable data table. It shows the provided report definition in a simple table as usual, but users can also edit, delete and add rows. It also includes import and export actions at the top, so users can drop the entire result set shown in the table to CSV and then re-import their changes after editing. You can find more information on this gadget and how to use it in this community article.
If you simply want an option at the top of a table sourced from a report definition that lets users export the results as CSV without using the Record Editor gadget, there is an API for that as well: the activity "pxDownloadDataRecordsAsCSV" in class "PegaAccel-Task-DataTableEditor". It accepts the class and name of a report definition as parameters, runs that report and serves up the contents as a CSV file.
The second part here isn't too different from AJ's solution; it's just an already existing parameterized activity you can use instead of writing one yourself.

Download files without knowing the extensions

I have an Excel file in which each row has an id column whose value is a link that opens another file; the id doesn't have a .(extension) on it.
The problem is that I imported the Excel file into SQL Server Management Studio, and the links are now plain strings without an extension. I want each one displayed as a link in my grid view on an ASP.NET page so it opens the named file...
For reference, there are at least 700 rows in the database.
Thanks in advance!

How to programmatically link a field in Enterprise Portal AX 2009 to open a specific file

I'm a beginner in Microsoft AX and I have a problem in AX 2009. I created a table, ImportFile, with two fields ("FileName", of type String, and "FileDocuValue", of type Container). When the user imports a CSV file, it is saved in the ImportFile table.
Now, in EP, I show a FileName column in my grid view, and I want FileName to be a link so that when I click one of these names it opens the corresponding CSV file in Excel.
Is it possible to do this?
I can suggest the following approach:
You can use a standard ASP LinkButton control to display a link. When the link is clicked:
Create a temporary CSV file from the data in your ImportFile table.
Create a URL to the generated CSV file (you can use the WebLink.url method).
Open the generated URL.
When I was working on a similar task (generating and downloading a PDF) I also had to modify some standard classes such as WebSession and EPDocuGetWebLet.

Read a CSV file that has an indefinite number of columns each time and create a table based on the column names in the CSV file

I have a requirement to load CSVs into the DB using Oracle APEX or PL/SQL code, but the problem is that the CSV files to load will not always come with the same number of columns or the same column names.
I should create the table and upload the data dynamically, based on the file name and the data I'm uploading.
For every file I need to create a new table dynamically and insert the data present in that CSV file.
For example:
File 1:
col1 col2 col3 col4 (NOTE: if I upload File 1, a table should be created dynamically, named after the file, and it should contain the same column names as the CSV headers along with the data.)
File 2:
col1 col2 col3 col4 col5
File 3:
col4 col2 col1 col3
Depending on the columns and the file name, I need to create a table for every file upload.
Can we load like this or not?
If yes, please help me with this.
Regards,
Sachin.
((Where's the PL/SQL code in this solution? Bear with me... the answer is buried in here somewhere. I first introduce some considerations and assumptions you will need to think about before going into the task. In the end, you'll find that Oracle APEX actually has a built-in solution that satisfies exactly what you've specified... with some caveats.))
If you are working within the Oracle APEX platform, you have some advantages. APEX version 4.2 and higher has a page element called "Data Loading". The disadvantage, however, is that the definition of the upload target is fixed, not dynamic: you need to know how your table is structured prior to loading the data.
One approach to overcome this is to build a generic two-column table as the target for all uploads. Column one holds the file name and column two is a single CLOB holding the entire file's contents, including the header row. The "Data Loading" element gives the user the opportunity to verify and select this mapping convention in a couple of clicks.
At this point, it's mostly PL/SQL back-end work doing the heavy lifting to parse and transform the uploaded data. As far as dynamic table creation goes, the Oracle package DBMS_SQL allows the execution of DDL commands, which could be the route to creating the custom tables.
Alex Poole's comment is important as well: you will need to make some blanket assumption about the data types, or have a provision for giving more clues about what kind of data each column contains. Relying on a sample of existing data values is not safe... what if all the values in your upload are null? I recommend perhaps a second header row in the data input with a clue about the type of data for each column, alongside the intended header names; maybe: AAAAA for a five-character column, # for a numeric, MM/DD/YYYY for a date with a specific mask.
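To make the dynamic-DDL idea concrete, here is a minimal sketch, in Python rather than the DBMS_SQL route mentioned above (using the python-oracledb driver; the file name and connection details are hypothetical). It names the table after the file and, per the blanket-type caveat, defaults every column to VARCHAR2:

    # Hypothetical sketch: create a table from a CSV header, then load the rows.
    # Assumes: pip install oracledb; a reachable database; trusted header names.
    import csv
    import os
    import oracledb

    path = "File1.csv"                                   # hypothetical upload
    table = os.path.splitext(os.path.basename(path))[0]  # table named after the file

    with open(path, newline="") as f:
        reader = csv.reader(f)
        headers = next(reader)                           # first record = column layout
        conn = oracledb.connect(user="demo", password="demo", dsn="localhost/XEPDB1")
        cur = conn.cursor()
        # Blanket assumption: every column is VARCHAR2(4000); refine with type clues.
        cols = ", ".join('"%s" VARCHAR2(4000)' % h.strip().upper() for h in headers)
        cur.execute('CREATE TABLE "%s" (%s)' % (table.upper(), cols))
        binds = ", ".join(":%d" % (i + 1) for i in range(len(headers)))
        cur.executemany('INSERT INTO "%s" VALUES (%s)' % (table.upper(), binds),
                        list(reader))
        conn.commit()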
The easier route:
You will need to give your end user access to a developer-role account on a workspace of your APEX server. It is not as scary as it sounds: with careful instruction and some simple precautions, I have been able to make this work with even the most non-technical of users. The reason for this is that there is a more powerful upload tool found under the following menu item:
SQL Workshop --> Utilities --> Data Workshop
There is a choice under "Data Load" --> "Spreadsheet Data"
The data load tool will automatically do the following:
Accept a CSV formatted file through a browse function on your client machine
Upload the file and parse the first record for the column layout (names)
Allow the user to create a new table from the uploaded file, or to map to an existing one.
For new tables, each column data type can be declared and also a specific numeric/date mask if additional conversion from the uploaded data is necessary.
Delimiter type, optional enclosures (like double quotes), decimal conventions and currency types can also be declared prior to parsing the uploaded file.
Once the user has identified all these mappings and settings, the table is created with the uploaded data. Any errors in record upload are reported immediately afterwards with detailed feedback on the failed records.
A security consideration to note:
You probably do not want to give end users access to your APEX server's back end... but you CAN create a new workspace just for your end users, with a new database schema for receiving their uploads, perhaps with some careful resource controls. Developer is the minimum role needed, but even if the end users see the other stuff, there is no access to anything important from an isolated workspace.
I implemented this isolated-workspace approach on a 4.0/4.1-release APEX platform a few years back, and it worked nicely. Our end user had control over the staging and quality checking of her data inputs (Excel spreadsheet/CSV exports collected from a combination of sources). I suppose it may have been even better to cut her out of the picture entirely and focus on automating the export-review-upload process between our database and her other sources; in this case, though, the volume of data was not great (hundreds to thousands of records) and manual review and editing of the exported data prior to pushing it into the database was very important, so the human element still mattered. It is something you'll want to think about now.
