Ok so basically I have the following code:
name=raw_input("What is your name?")
quest=raw_input("What is your quest?")
print ("As so your name is %s, your quest is %s ") %(name,quest)
This runs perfectly in Python 2.7.9.
I have tried to run this exact same code in Python 3.4.2 and it doesn't work (figured), so I modified it to this, thinking it would work:
name=input("What is your name?")
quest=input("What is your quest?")
print ("As so your name is %s, your quest is %s ") %(name,quest)
And this:
name=input("What is your name?")
quest=input("What is your quest?")
print ("As so your name is {}, your quest is {} ") .format(name,quest)
And of course that didn't work either. I have now searched multiple sites for over an hour; what am I missing here? How do you do this in Python 3.4.2? All I keep finding are sites and answers showing the first way I listed, which only works on the older Python 2.
Thanks
print is a function in Python 3. Thus, doing print(...).format(...) is effectively trying to format the return value of the print() call, which is None.
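You can see the failure in a quick interactive session (the exact traceback text may vary slightly between Python versions):

>>> print("As so your name is {}").format("Arthur")
As so your name is {}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'format'

The string is printed first, and only then does the .format() call blow up on the None that print() returned.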
Call .format() on the string you want formatted instead:
print("As so your name is {}, your quest is {} ".format(name,quest))
Your modified code was nearly right, you just needed to move a bracket to apply the % operator to the string instead of the print function result.
So change this:
print ("As so your name is %s, your quest is %s ") % (name, quest)
to this:
print ("As so your name is %s, your quest is %s " % (name, quest))
and it runs fine in Python 3.
The input looks like:
9999993612,10/Feb/2016:19:04:16
9999993612,10/Feb/2016:19:04:15
9999993612,10/Feb/2016:19:04:09
9999993612,10/Feb/2016:01:31:47
9999993612,10/Feb/2016:01:31:46
9999993612,10/Feb/2016:01:31:43
We need this output:
9999993612,10/Feb/2016:19:04:16
using a Linux command.
It's working fine:
awk 'BEGIN{FS=","; print "NO time"} NR!=1 {a[$1]++; b[$1]=$2} END{for (i in a) printf("%s %s\n", i, b[i])}' FILENAME
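Note that the script above keeps the last time seen for each number. If the file is sorted newest-first, as in the sample, a variant that keeps the first time seen per number would print the latest timestamp instead (a sketch, not the answer above):

awk -F',' '!($1 in latest) { latest[$1] = $2 }   # remember only the first (newest) time per number
           END { for (n in latest) print n, latest[n] }' FILENAME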
I'm trying to write a .p script that will export a table from a database as a csv. The following code creates the csv:
OUTPUT TO VALUE ("C:\Users\Admin\Desktop\test.csv").
FOR EACH table-name NO-LOCK:
EXPORT DELIMITER "," table-name.
END.
OUTPUT CLOSE.
QUIT.
However, I can't figure out how to encapsulate all of the fields with double quotes. Nor can I figure out how to get the first row of the .csv to contain the column names of the table. How would one go about doing this?
I'm very new to Progress / 4GL. Originally I was using R and an ODBC connection to import and format the table before saving it as a csv. But I've learned that the ODBC driver I'm using does not work reliably; sometimes it will not return all the rows in the table.
The ultimate goal is to pass an argument (table-name) to a .p script that will export the table as a csv. Then I can import the csv in R, manipulate / format the data and then export the table again as a csv.
Any advice would be greatly appreciated.
EDIT:
The version of Progress I am using is 9.1D
Using the above code, the output might look like this...
"ACME",01,"Some note that may contain carriage returns.\n More text",yes,"01A"
The reason for trying to encapsulate every field with double quotes is because some fields may contain carriage returns or other special characters. R doesn't always like carriage return in the middle of field. So the desired output would be...
"ACME","01","Some note that may contain carriage returns.\n More text","yes","01A"
The Progress version is important to know. Your ODBC issue is likely caused by the fact that formats in Progress are default display formats and don't actually limit the amount of data that can be stored, which of course drives SQL mad.
You can use this KB article to learn about the DBTool utility for fixing the SQL width: http://knowledgebase.progress.com/articles/Article/P24496
As far as the export is concerned, what you are doing will already take care of the double quotes for character columns. You have a few options to solve your header issue, depending on your version of Progress. This one will work no matter your version, but is not as elegant as the newer options.
Basically, copy this into the procedure editor and it will generate a program with internal procedures for each table in your DB. Run csvdump.p by passing in the table name and the csv file you want, e.g. RUN csvdump.p ("mytable","myfile").
Disclaimer: you may run into some odd datatypes that can't be exported, like RAW, but they aren't very common.
DEF VAR i AS INTEGER NO-UNDO.
/* Generate csvdump.p: it takes a table name and an output file, then runs the matching internal procedure. */
OUTPUT TO csvdump.p.
PUT UNFORMATTED
"define input parameter ipTable as character no-undo." SKIP
"define input parameter ipFile as character no-undo." SKIP(1)
"OUTPUT TO VALUE(ipFile)." SKIP(1)
"RUN VALUE('ip_' + ipTable)." SKIP(1)
"OUTPUT CLOSE." SKIP(1).
/* Emit one internal procedure per user table ("T" in the _file metaschema). */
FOR EACH _file WHERE _file._tbl-type = "T" NO-LOCK:
PUT UNFORMATTED "PROCEDURE ip_" _file._file-name ":" SKIP(1)
"EXPORT DELIMITER "~",~"" SKIP.
FOR EACH _field OF _File NO-LOCK BY _Field._Order:
IF _Field._Extent = 0 THEN
PUT UNFORMATTED "~"" _Field-Name "~"" SKIP.
ELSE DO i = 1 TO _Field._Extent:
PUT UNFORMATTED "~"" _Field-Name STRING(i,"999") "~"" SKIP.
END.
END.
PUT UNFORMATTED "." SKIP(1)
"FOR EACH " _File._File-name " NO-LOCK:" SKIP
" EXPORT DELIMITER "~",~" " _File._File-Name "." SKIP
"END." SKIP(1).
PUT UNFORMATTED "END PROCEDURE." SKIP(1).
END.
OUTPUT CLOSE.
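For example, once csvdump.p has been generated, you would dump a table like this (hypothetical table and path names, and note the trailing period that ABL requires):

RUN csvdump.p ("customer", "c:\temp\customer.csv").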
Big disclaimer: I don't have 9.1D to test with, since it is well past its supported date, but I believe all of this will work.
There are other ways to do this even in 9.1D (dynamic queries), but this will probably be easier for you to modify if needed, since you are new to Progress. Plus it is likely to perform better than purely dynamic exports. You can keep nesting the REPLACE functions to get rid of more and more characters, or just copy the REPLACE line and run it over and over if needed; see the sketch after the code below.
DEF VAR i AS INTEGER NO-UNDO.
/* Writes one field reference into the generated export list, wrapping character fields in fn_Trim(). */
FUNCTION fn_Export RETURNS CHARACTER (INPUT ipExtent AS INTEGER):
IF _Field._Data-Type = "CHARACTER" THEN
PUT UNFORMATTED "fn_Trim(".
PUT UNFORMATTED _File._File-Name "." _Field._Field-Name.
IF ipExtent > 0 THEN
PUT UNFORMATTED "[" STRING(ipExtent) "]" SKIP.
IF _Field._Data-Type = "CHARACTER" THEN
PUT UNFORMATTED ")".
PUT UNFORMATTED SKIP.
END.
OUTPUT TO c:\temp\wks.p.
PUT UNFORMATTED
"define input parameter ipTable as character no-undo." SKIP
"define input parameter ipFile as character no-undo." SKIP(1)
"function fn_Trim returns character (input ipChar as character):" SKIP
" define variable cTemp as character no-undo." SKIP(1)
" if ipChar = '' or ipChar = ? then return ipChar." SKIP(1)
" cTemp = replace(replace(ipChar,CHR(13),''),CHR(11),'')." SKIP(1)
" return cTemp." SKIP(1)
"end." SKIP(1)
"OUTPUT TO VALUE(ipFile)." SKIP(1)
"RUN VALUE('ip_' + ipTable)." SKIP(1)
"OUTPUT CLOSE." SKIP(1).
FOR EACH _file WHERE _file._tbl-type = "T" NO-LOCK:
PUT UNFORMATTED "PROCEDURE ip_" _file._file-name ":" SKIP(1)
"EXPORT DELIMITER "~",~"" SKIP.
FOR EACH _field OF _File NO-LOCK BY _Field._Order:
IF _Field._Extent = 0 THEN
PUT UNFORMATTED "~"" _Field-Name "~"" SKIP.
ELSE DO i = 1 TO _Field._Extent:
PUT UNFORMATTED "~"" _Field-Name STRING(i) "~"" SKIP.
END.
END.
PUT UNFORMATTED "." SKIP(1)
"FOR EACH " _File._File-name " NO-LOCK:" SKIP.
PUT UNFORMATTED "EXPORT DELIMITER ~",~"" SKIP.
FOR EACH _field OF _File NO-LOCK BY _Field._Order:
IF _Field._Extent = 0 OR _Field._Extent = ? THEN
fn_Export(0).
ELSE DO i = 1 TO _Field._Extent:
fn_Export(i).
END.
END.
PUT UNFORMATTED "." SKIP(1)
"END." SKIP(1).
PUT UNFORMATTED "END PROCEDURE." SKIP(1).
END.
OUTPUT CLOSE.
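As mentioned above, the generated fn_Trim can be extended by nesting more REPLACE calls. For example, if you also wanted to strip line feeds (CHR(10)), the relevant line inside the generated wks.p would become (a sketch):

cTemp = REPLACE(REPLACE(REPLACE(ipChar, CHR(13), ''), CHR(11), ''), CHR(10), '').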
I beg to differ on one small point with @TheMadDBA: using EXPORT will not deal with quoting all the fields in your output in CSV style. Logical fields, for example, will not be quoted.
'CSV format' is the vaguest of standards, but the EXPORT statement does not conform to it; it was not designed for that. (I notice that in @TheMadDBA's final example, they do not use EXPORT either.)
If you want all the non-numeric fields quoted, you need to handle this yourself.
def stream s.
output stream s to value(v-filename).
for each tablename no-lock:
put stream s unformatted
'"' tablename.charfield1 '"'
',' string(tablename.numfield)
',"' tablename.charfield2 '"'
skip.
end.
output stream s close.
In this example I'm assuming that you are okay with coding a specific dump for a single table, rather than a generic solution. You can certainly do the latter with meta-programming as in @TheMadDBA's answer, with ABL's dynamic query syntax, or even with -- may the gods forgive us both -- include files. But that's a more advanced topic, and you said you were just starting with ABL.
You will still have to deal with string truncation as per @TheMadDBA's answer.
After some inspiration from @TheMadDBA and additional thought, here is my solution to the problem...
I decided to write a script in R that generates the .p scripts. The R script takes one input, the table name, and dumps out the .p script.
Below is a sample .p script:
DEFINE VAR columnNames AS CHARACTER.
columnNames = """" + "Company" + """" + "|" + """" + "ABCCode" + """" + "|" + """" + "MinDollarVolume" + """" + "|" + """" + "MinUnitCost" + """" + "|" + """" + "CountFreq" + """".
/* Define the temp-table */
DEFINE TEMP-TABLE tempTable
FIELD tCompany AS CHARACTER
FIELD tABCCode AS CHARACTER
FIELD tMinDollarVolume AS CHARACTER
FIELD tMinUnitCost AS CHARACTER
FIELD tCountFreq AS CHARACTER.
FOR EACH ABCCode NO-LOCK:
CREATE tempTable.
tempTable.tCompany = STRING(Company).
tempTable.tABCCode = STRING(ABCCode).
tempTable.tMinDollarVolume = STRING(MinDollarVolume).
tempTable.tMinUnitCost = STRING(MinUnitCost).
tempTable.tCountFreq = STRING(CountFreq).
END.
OUTPUT TO VALUE ("C:\Users\Admin\Desktop\ABCCode.csv").
/* Output the column names */
PUT UNFORMATTED columnNames.
PUT UNFORMATTED "" SKIP.
/* Output the temp-table */
FOR EACH tempTable NO-LOCK:
EXPORT DELIMITER "|" tempTable.
END.
OUTPUT CLOSE.
QUIT.
/* Done */
The R script makes an ODBC call to the DB to get the column names for the table of interest and then populates the template to generate the p script.
I'm not sure creating a temp table and casting everything as a character is the best way of solving the problem, but...
- we have column names
- everything is encapsulated in double quotes
- and we can choose any delimiter (e.g. "|" instead of ",")
I'm confused about the $ symbol in Unix.
According to the definition, it gives the value stored in the variable that follows it. I'm not following that definition; could you please give me an example of how it is used?
Thanks
You define a variable like this:
greeting=hello
export name=luc
and use like this:
echo $greeting $name
If you use export, the variable will also be visible to subshells.
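A quick way to see the difference (a sketch; bash -c runs the command in a child shell):

greeting=hello
export name=luc
bash -c 'echo "greeting=$greeting name=$name"'   # prints "greeting= name=luc": only the exported variable is inherited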
EDIT: If you want to assign a string containing spaces, you have to quote it using either double quotes (") or single quotes ('). Variables inside double quotes are expanded, whereas inside single quotes they are not:
axel@loro:~$ name=luc
axel@loro:~$ echo "hello $name"
hello luc
axel@loro:~$ echo 'hello $name'
hello $name
In shell scripts, you do not need the $ symbol when assigning a value to a variable, only when you want to access the variable's value.
Examples:
VARIABLE=100000
echo "$VARIABLE"
othervariable=$((VARIABLE + 10))   # arithmetic expansion; plain $VARIABLE+10 would produce the text "100000+10"
echo "$othervariable"
One more thing: in an assignment, do not leave spaces before or after the = symbol.
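For example, with spaces the shell tries to run the variable name as a command (the exact message varies by shell):

$ VARIABLE = 100000
bash: VARIABLE: command not found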
Here is a good bash tutorial:
http://linuxconfig.org/Bash_scripting_Tutorial
mynameis.sh:
#!/bin/sh
finger | grep "`whoami` " | tail -n 1 | awk '{print $2, $3}'
finger: prints all logged-in users. Example result:
login Name Tty Idle Login Time Office Office Phone
xuser Forname Nickname tty7 3:18 Mar 9 07:23 (:0)
...
grep: filters lines containing the given string (in this example we keep the xuser lines, since our login name is xuser)
http://www.gnu.org/software/grep/manual/grep.html
whoami: prints my login name
http://linux.about.com/library/cmd/blcmdl1_whoami.htm
tail -n 1: shows only the last line of the results
http://unixhelp.ed.ac.uk/CGI/man-cgi?tail
awk: prints the second and third columns of the result: Forname, Nickname
http://www.staff.science.uu.nl/~oostr102/docs/nawk/nawk_toc.html
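So, given the sample finger output above, running the script prints the second and third columns of the last matching line:

$ sh mynameis.sh
Forname Nickname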
I'm trying to write code that appends the ending _my_ending to a filename without changing the file extension.
Examples of what I need to get:
"test.bmp" -> "test_my_ending.bmp"
"test.foo.bar.bmp" -> "test.foo.bar_my_ending.bmp"
"test" -> "test_my_ending"
I have some experience with PCRE, and this is a trivial task with it. Lacking experience in Qt, I initially wrote the following code:
QString new_string = old_string.replace(
QRegExp("^(.+?)(\\.[^.]+)?$"),
"\\1_my_ending\\2"
);
This code does not work (no match at all), and then I found in the docs that
Non-greedy matching cannot be applied to individual quantifiers, but can be applied to all the quantifiers in the pattern
As you can see, in my regexp I tried to reduce the greediness of the first quantifier + by adding ? after it. This isn't supported by QRegExp.
This is really disappointing, so I had to write the following ugly but working code:
//-- write regexp that matches only filenames with extension
QRegExp r = QRegExp("^(.+)(\\.[^.]+)$");
r.setMinimal(true);
QString new_string;
if (old_string.contains(r)){
//-- filename contains extension, so, insert ending just before it
new_string = old_string.replace(r, "\\1_my_ending\\2");
} else {
//-- filename does not contain extension, so, just append ending
new_string = old_string + "_my_ending";
}
But is there some better solution? I like Qt, but some things that I see in it seem to be discouraging.
How about using QFileInfo? This is shorter than your 'ugly' code:
QFileInfo fi(old_string);
QString new_string = fi.completeBaseName() + "_my_ending"
+ (fi.suffix().isEmpty() ? "" : ".") + fi.suffix();
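A quick check against the examples from the question (a minimal sketch; addEnding is a hypothetical helper wrapping the same expression):

#include <QFileInfo>
#include <QDebug>

// Hypothetical helper wrapping the QFileInfo expression above.
static QString addEnding(const QString &old_string)
{
    QFileInfo fi(old_string);
    return fi.completeBaseName() + "_my_ending"
           + (fi.suffix().isEmpty() ? "" : ".") + fi.suffix();
}

int main()
{
    qDebug() << addEnding("test.bmp");         // "test_my_ending.bmp"
    qDebug() << addEnding("test.foo.bar.bmp"); // "test.foo.bar_my_ending.bmp"
    qDebug() << addEnding("test");             // "test_my_ending"
}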
Hi, I want to delete lines from a file which match a particular pattern.
The code I am using is:
BEGIN {
FS = "!";
stopDate = "date +%Y%m%d%H%M%S";
deletedLineCtr = 0; #diagnostics counter, unused at this time
}
{
if( $7 < stopDate )
{
deletedLineCtr++;
}
else
print $0
}
The file's lines are "!"-separated and the 7th field is a date in yyyymmddhhmmss format. The script should delete any line whose date is less than the system date. But this doesn't work. Can anyone tell me the reason?
Is the awk(1) assignment due Tuesday? Really, awk?? :-)
Ok, I wasn't sure exactly what you were after, so I made some guesses. This awk program gets the current time of day and then removes every line in the file whose timestamp is less than that. I left one debug print in.
BEGIN {
FS = "!"
stopDate = strftime("%Y%m%d%H%M%S")
print "now: ", stopDate
}
{ if ($7 >= stopDate) print $0 }
$ cat t2.data
!!!!!!20080914233848
!!!!!!20090914233848
!!!!!!20100914233848
$ awk -f t2.awk < t2.data
now: 20090914234342
!!!!!!20100914233848
$
Call date first to pass the formatted date in as a parameter:
awk -F'!' -v stopdate=$( date +%Y%m%d%H%M%S ) '
$7 < stopdate { deletedLineCtr++; next }
{print}
END {do something with deletedLineCtr...}
'
You would probably need to run the date command - maybe with backticks - to get the date into stopDate. If you printed stopDate with the code as written, it would contain "date +...", not a string of digits. That is the root cause of your problem.
Unfortunately...
I cannot find any evidence that backticks work in any version of awk (old awk, new awk, GNU awk). So, you either need to migrate the code to Perl (Perl was originally designed as an 'awk-killer' - and still includes a2p to convert awk scripts to Perl), or you need to reconsider how the date is set.
Seeing @DigitalRoss's answer, the strftime() function in gawk provides you with the formatting you want (check 'info gawk' as I did).
With that fixed, you should be getting the right lines deleted.
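For completeness: although backticks don't work inside awk, POSIX awk can read a command's output with getline, so the timestamp can be fetched without gawk's strftime() (a sketch):

BEGIN {
    FS = "!"
    # Run date(1) once and read its single output line into stopDate.
    "date +%Y%m%d%H%M%S" | getline stopDate
    close("date +%Y%m%d%H%M%S")
}
$7 >= stopDate   # default action: print lines at or after the stop date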