I have to deal with translation in po/mo format.
Basically, here's an example of my po file:
msgid "content_book"
msgid_plural "content_books"
msgstr[0] "%s book"
msgstr[1] "%s books"
It seems that %s or %d placeholders are quite common.
But the Symfony component uses the placeholder %count%:
https://github.com/symfony/translation-contracts/blob/main/TranslatorTrait.php#L50
Is there any way to use the Symfony/Translation component with po/mo files without converting them to the ICU format (https://symfony.com/doc/4.4/translation/message_format.html)?
I have to use Symfony 4.4 for now (waiting for the next LTS in 5.x).
Thanks for the help.
Gettext is not meant to be used in the way you are attempting to use it. msgid (and msgid_plural) are supposed to contain the actual human-readable text in English (or whatever your base language is), not some symbol names.
PO files with only symbol names as msgids are also painful to translate, and involve much more manual work than proper PO files do.
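For example, a conventional PO entry keys the translations on the actual source-language text; the French translations here are purely illustrative:
msgid "%s book"
msgid_plural "%s books"
msgstr[0] "%s livre"
msgstr[1] "%s livres"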
I need to generate a file for Excel, some of the values in this file contain multiple lines.
There's also non-English text in there, so the file has to be Unicode.
The file I'm generating now looks like this (in UTF-8, with non-English text mixed in, and with a lot more lines):
Header1,Header2,Header3
Value1,Value2,"Value3 Line1
Value3 Line2"
Note the multi-line value is enclosed in double quotes, with a normal everyday newline in it.
According to what I found on the web this is supposed to work, but it doesn't, at least not with Excel 2007 and UTF-8 files; Excel treats the 3rd line as the second row of data, not as the second line of the first data row.
This has to run on my customer's machines and I have no control over their version of Excel, so I need a solution that will work with Excel 2000 and later.
Thanks
EDIT: I "solved" my problem by having two CSV options, one for Excel (Unicode, tab separated, no newlines in fields) and one for the rest of the world (UTF8, standard CSV).
Not what I was looking for but at least it works (so far)
You should have space characters at the start of fields ONLY where the space characters are part of the data. Excel will not strip off leading spaces. You will get unwanted spaces in your headings and data fields. Worse, the " that should be "protecting" that line-break in the third column will be ignored because it is not at the start of the field.
If you have non-ASCII characters (encoded in UTF-8) in the file, you should have a UTF-8 BOM (3 bytes, hex EF BB BF) at the start of the file. Otherwise Excel will interpret the data according to your locale's default encoding (e.g. cp1252) instead of utf-8, and your non-ASCII characters will be trashed.
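As an illustration (a sketch, not from the original answer), Python's "utf-8-sig" codec writes that BOM for you:
import csv

# "utf-8-sig" prepends the 3-byte BOM (EF BB BF); newline="" lets the
# csv module control line endings, and the writer automatically quotes
# fields that contain embedded newlines.
with open("report.csv", "w", encoding="utf-8-sig", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Header1", "Header2", "Header3"])
    writer.writerow(["Value1", "Value2", "Value3 Line1\nValue3 Line2"])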
The following comments apply to Excel 2003, 2007 and 2013; not tested on Excel 2000:
If you open the file by double-clicking on its name in Windows Explorer, everything works OK.
If you open it from within Excel, the results vary:
You have only ASCII characters in the file (and no BOM): works.
You have non-ASCII characters (encoded in UTF-8) in the file, with a UTF-8 BOM at the start: it recognises that your data is encoded in UTF-8 but it ignores the csv extension and drops you into the Text Import not-a-Wizard, unfortunately with the result that you get the line-break problem.
Options include:
Train the users not to open the files from within Excel :-(
Consider writing an XLS file directly ... there are packages/libraries available for doing that in Python/Perl/PHP/.NET/etc
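For instance, in Python the third-party openpyxl package (named here as an assumption; any XLSX-writing library works along the same lines) sidesteps the CSV quirks entirely:
from openpyxl import Workbook
from openpyxl.styles import Alignment

wb = Workbook()
ws = wb.active
ws.append(["Header1", "Header2", "Header3"])
ws.append(["Value1", "Value2", "Value3 Line1\nValue3 Line2"])
ws["C2"].alignment = Alignment(wrap_text=True)  # render the embedded newline
wb.save("report.xlsx")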
After lots of tweaking, here's a configuration that works for generating files on Linux and reading them on Windows+Excel, though the embedded newline format is not according to the standard:
Newlines within a field need to be \n (and obviously quoted in double quotes)
End of record: \r\n
Make sure that you don't start a field with equals, otherwise it gets treated as a formula and truncated
In Perl, I used Text::CSV to do this as follows:
use Text::CSV;

open my $FO, ">:encoding(utf8)", $filename
    or die "Cannot create $filename: $!";
my $csv = Text::CSV->new({ binary => 1, eol => "\r\n" });

# for each row (@row holds the field values of one record):
$csv->print($FO, \@row);
Recently I had a similar problem. I solved it by importing an HTML file; the baseline example would be like this:
<html xmlns:v="urn:schemas-microsoft-com:vml"
xmlns:o="urn:schemas-microsoft-com:office:office"
xmlns:x="urn:schemas-microsoft-com:office:excel"
xmlns="http://www.w3.org/TR/REC-html40">
<head>
<style>
<!--
br {mso-data-placement:same-cell;}
-->
</style>
</head>
<body>
<table>
<tr>
<td>first line<br/>second line</td>
<td style="white-space:normal">first line<br/>second line</td>
</tr>
</table>
</body>
</html>
I know, it is not a CSV, and might work differently for various versions of Excel, but I think it is worth a try.
I hope this helps ;-)
In Excel 365, while importing the file:
Data -> From Text/CSV -> select the file -> Transform Data.
In the Power Query Editor, on the right-hand side under "Query Settings", in APPLIED STEPS, click the settings icon on the "Source" row.
In the line break dropdown, select "Ignore line breaks inside quotes".
Then press OK -> File -> Close & Load.
It is worth noting that when a .CSV file has fields wrapped in double quotes which contain line breaks, Excel will not import the .CSV file properly if the .CSV file is written in UTF-8 format. Excel treats the line break as if it were CR/LF and begins a new line. The spreadsheet is garbled. That seems to be true even if semi-colons are used as field delimiters (instead of commas).
The problem can be resolved by using Windows Notepad to edit the .CSV file, using File > Save As... to save the file, and before saving the file, changing the file encoding from UTF-8 to ANSI. Once the file is saved in ANSI format, then I find that Microsoft Excel 2013 running on Windows 7 Professional will import the file properly.
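A sketch of that conversion in Python (assuming Windows-1252 is the "ANSI" code page in question; file names are illustrative):
# Re-encode a UTF-8 CSV as Windows-1252; characters that do not exist
# in the target code page are replaced instead of raising an error.
with open("report_utf8.csv", encoding="utf-8") as src:
    text = src.read()
with open("report_ansi.csv", "w", encoding="cp1252", errors="replace", newline="") as dst:
    dst.write(text)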
Newline inside a value seems to work if you use semicolon as separator, instead of comma or tab, and use quotes.
This works for me in both Excel 2010 and Excel 2000. However, surprisingly, it works only when you open the file as a new spreadsheet, not when you import it into an existing spreadsheet using the data import feature.
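For example, mirroring the sample from the question:
Header1;Header2;Header3
Value1;Value2;"Value3 Line1
Value3 Line2"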
On a PC, ASCII character #10 (LF) is what you want to use to place a newline within a value.
Once you get it into Excel, however, you need to make sure word wrap is turned on for the multi-line cells or the newline will appear as a square box.
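In Python terms (illustration only):
field = "Line1" + chr(10) + "Line2"   # chr(10) is LF, the in-cell newline
row = '"' + field + '",next_value'    # quote the field so the LF stays in the cell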
This will not work if you try to import the file into EXCEL.
Associate the file extension csv with EXCEL.EXE so you will be able to invoke EXCEL by double-clicking the csv file.
Here I place some text, followed by the newline char, followed by some more text, and enclose the whole string in double quotes.
Do not use a CR since EXCEL will place part of the string in the next cell.
""text" + NL + "text""
When you invoke EXCEL, you will see this. You may have to auto-size the height to see it all. Where the lines break depends on the width of the cell.
2
DATE
Here's the code in Basic
CHR$(34,"2", 10,"DATE", 34)
I found this and it has worked for me:
$delimiter = ',';
$enc1 = '"';
$enc2 = '""';
Then, where you need to have stuff enclosed:
$myfile = ('/path/to/myfile.csv');
//erase any previous contents
$fp = fopen($myfile, 'w+');
fwrite($fp, $enc1 . 'Column Heading 1' . $enc1 . $delimiter );
//append to new file
$fp2 = fopen($myfile, 'a');
fwrite($fp2, $enc1 . 'Column Heading 2' . $enc1 . $delimiter );
.....
fwrite($fp2, $enc1 . 'Last Column Heading' . $enc1 . $delimiter. PHP_EOL );
Then, when you need to write something out (like HTML that includes the " character) you can do this:
fwrite($fp2, $enc2 . $myhtmlstring . $enc2 . $delimiter);
New lines are ended by appending . PHP_EOL to the string.
The end of the script prints out a link so that the user can download the file.
echo '<a href="/path/to/myfile.csv">Click here to download file</a>';
Test this:
It fully works for me:
Put the following lines in an xxxx.csv file:
hola_x,="este es mi text1"&CHAR(10)&"I sigo escribiendo",hola_a
hola_y,="este es mi text2"&CHAR(10)&"I sigo escribiendo",hola_b
hola_z,="este es mi text3"&CHAR(10)&"I sigo escribiendo",hola_c
Open it with Excel.
In some cases it will open directly; otherwise you will need to use the text-to-columns conversion.
Expand the column width and hit the wrap text button, or format the cells and activate wrap text.
And thanks for the other suggestions, but they did not work for me. I am in a pure Windows environment and did not want to play with Unicode or other funny things.
This way you are putting a formula from the CSV into Excel. There may be many uses for this method of work.
(note the = before the quotes)
P.S.: In your suggestions please put some samples of the data, not only the code.
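A short Python sketch that emits rows in this shape (sample data only):
# The "=" must come immediately after the comma for Excel to treat the
# field as a formula; CHAR(10) is Excel's in-cell newline.
with open("formulas.csv", "w", encoding="ascii") as f:
    for n in (1, 2, 3):
        f.write('hola,="este es mi text%d"&CHAR(10)&"y sigo escribiendo",hola\n' % n)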
UTF files that contain a BOM will cause Excel to treat new lines literally even if the field is surrounded by quotes. (Tested with Excel 2008 for Mac.)
The solution is to make any new lines a carriage return (CHR 13) rather than a line feed.
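In Python, that is a one-line substitution (a sketch):
value = "Line1\nLine2"
value_for_excel = value.replace("\n", "\r")   # LF -> CR (CHR 13)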
putting "\r" at the end of each row actually had the effect of line breaks in excel, but in the .csv it vanished and left an ugly mess where each row was squashed against the next with no space and no line-breaks
For File Open only, the syntax is
,"one\n
two",...
The critical thing is that there is no space after the first ",". Normally spaces are fine, and trimmed if the string is not quoted. But otherwise nasty. Took me a while to figure that out.
It does not seem to matter if the line is ended with \n or \r\n.
Make sure you expand the formula bar so you can actually see the text in the cell (got me after a long day...)
Now of course, File Open will not support UTF-8 properly (unless one uses tricks).
Excel > Data > Get External Data > From Text
Can be set into UTF-8 mode (it is way down the encoding list). However, in that case the new lines do not seem to work, and I know no way to fix that.
(One might think that after 30 years MS would get this stuff right.)
The way we do it (we use VB.Net) is to enclose the text containing new lines in Chr(34), the char representing double quotes, and to replace all CR+LF sequences with LF.
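The same idea expressed as a Python sketch (not the original VB.Net code):
def excel_field(text):
    # Normalise CR+LF to bare LF, then wrap the field in Chr(34) quotes.
    return chr(34) + text.replace("\r\n", "\n") + chr(34)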
Normally a new line is "\r\n". In my CSV, I replaced "\r" with an empty string.
Here is the code in JavaScript:
cellValue = cellValue.replace(/\r/g, "");   // strip CRs, keep LFs
When I opened the CSV in MS Excel, it worked well. If a value has multiple lines, it will stay within one single cell in the Excel sheet.
You can do the following: "\"Value3 Line1\nValue3 Line2\"". It works for me when generating a CSV file in Java.
Here is an interesting approach using JavaScript ...
// Make .csv() split on commas followed by optional whitespace
// (the original snippet relied on a non-standard .partial() helper).
String.prototype.csv = function () {
    return this.split(/,\s*/);
};

var results = "Mugan, Jin, Fuu".csv();
console.log(results[0] == "Mugan" &&
            results[1] == "Jin" &&
            results[2] == "Fuu",
            "The text values were split properly");
Printing an HTML newline <br/> into the content and opening it in Excel will work fine in any Excel version.
You could use the keyboard shortcut Alt+Enter.
1. Select the cell you wish to edit.
2. Enter edit mode either by double-clicking it or by pressing F2.
3. Press Alt+Enter. This will create a new line in the cell.
Currently I am working on a comparison between SICStus 3 and SICStus 4, but I hit one issue: SICStus 4 will not consult any cases where the comment string has carriage controls or tab characters etc., as given below.
An example case is given below. It has 3 arguments with a comma delimiter.
case('pr_ua_sfochi',"
Response:
answer(amount(2370.09,usd),[[01AUG06SFO UA CHI Q9.30 1085.58FUA2SFS UA SFO Q9.30 1085.58FUA2SFS NUC2189.76END ROE1.0 XT USD 180.33 ZPSFOCHI 164.23US6.60ZP5.00AY XF4.50SFO4.5]],amount(2189.76,usd),amount(2189.76,usd),amount(180.33,usd),[[fua2sfs,fua2sfs]],amount(6.6,usd),amount(4.5,usd),amount(0.0,usd),amount(18.6,usd),lasttktdate([20061002]),lastdateafterres(200712282]),[[fic_ticketinfo(fare(fua2sfs),fic([]),nvb([]),nva([]),tktiss([]),penalty([]),tktendorsement([]),tourinfo([]),infomsgs([])),fic_ticketinfo(fare(fua2sfs),fic([]),nvb([]),nva([]),tktiss([]),penalty([]),tktendorsement([]),tourinfo([]),infomsgs([]))]],<>,<>,cat35(cat35info([])))
.
02/20/2006 17:05:10 Transaction 35 served by static.static.server1 (usclsefat002:7551) running E*Fare version $Name: build-2006-02-19-1900 $
",price(pnr(
user('atl','1y',<>,<>,dept(<>,'0005300'),<>,<>,<>),
[
passenger(adt,1,[ptconly(n)])
],
[
segment(1,sfo,chi,'ua','<>','100',20140901,0800,f,20140901,2100,'737',res(20140628,1316),hk,pf2(n,[],[],n),<>,flags(no,no,no,no,no,no,no,no,no)),
segment(2,chi,sfo,'ua','<>','101',20140906,1000,f,20140906,1400,'737',res(20140628,1316),hk,pf2(n,[],[],n),<>,flags(no,no,no,no,no,no,no,no,no))
]),[
rebook(n),
ticket(20140301,131659),
dbaccess(20140301,131659),
platingcarrier('ua'),
tax_exempt([]),
trapparm("trap:ffil"),
city(y)
])).
The predicate below removes the comment section from a case like the one above.
flatten-cases :-
getmessage(M1),
write_flattened_case(M1),
flatten-cases.
flatten-cases.
write_flattened_case(M1):-
M1 = case(Case,_Comment,Entry),!,
M2 = case(Case,Entry),
writeq(M2),write('.'),nl.
getmessage(M) :-
read(M),
!,
M \== end_of_file.
:- flatten-cases.
Now my requirement is to convert the comment string to an ASCII character list.
Layout characters other than a regular space cannot occur literally in a quoted atom or a double-quoted list. This is a requirement of the ISO standard, and it has been fully implemented in SICStus since 3.9.0 when invoking SICStus 3 with the option --iso. As of SICStus 4, only ISO syntax is supported.
You need to insert \n and \t accordingly. So instead of
log('Response:
yes'). % BAD!
Now write
log('Response:\n\tyes').
Or, to make it better readable use a continuation escape sequence:
log('Response:\n\
\tyes').
Note that using literal tabs and literal newlines is highly problematic. On a printout you do not see them! Think of 'A \nB', where the printout would show neither the trailing space before the newline nor any trailing tabs.
But there are also many other situations like: Making a screenshot of program text, making a photo of program text, using a 3270 terminal emulator and copying the output. In the past, punched cards. The text-mode when reading files (which was originally motivated by punched cards). Similar arguments hold for the tabulator which comes from typewriters with their manually settable tab stops.
And then on SO it is quite difficult to type in a TAB. The browser refuses to type it (very wisely), and if you copy it in, you get it rendered as spaces.
While I am at it, there is also another problem: the name flatten-cases should rather be written flatten_cases.
When I try to paste the character » (right double angle quotes) into Unix from my Notepad, it gets converted to \273. The corresponding hex value is BB and the decimal value is 187.
My actual requirement is to have this character as the field delimiter when I export a .dat file from a database table. So, this character was put in as the delimiter after each column name. But, while copy-pasting, it gets converted to \273.
Any idea about how to fix this? I am on Solaris (SunOS 5.10).
Thanks,
Visakh
ASCII only defines the character codes up to 127 (0x7F) - everything after that is another encoding, such as ISO-8859-1 or UTF-8. Make sure your locale is set to the encoding you are trying to use - the locale command will report your current locale settings, the locale(5) and environ(5) man pages cover how to set them. A much more in-depth introduction to the whole character encoding concept can be found in Joel Spolsky's The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
The character code 0xBB is shown as » in the ISO-8859-1 character chart, so that's probably the character set you want; the locale would then be something like en_US.ISO8859-1 for that character set with US/English messages/date formats/currency settings/etc.
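The numbers line up, as a quick Python check illustrates (not part of the original answer):
print(oct(0xBB))                        # 0o273 -> the \273 the terminal shows
print(bytes([0xBB]).decode("latin-1"))  # » in ISO-8859-1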
Sometimes when copying stuff into PostgreSQL I get errors saying there are invalid byte sequences.
Is there an easy way, using either vim or other utilities, to detect byte sequences that cause errors such as invalid byte sequence for encoding "UTF8": 0xde70 and whatnot, and possibly an easy way to do a conversion?
Edit:
What my workflow is:
Dumped sqlite3 database (from trac)
Trying to replay it in postgresql
Perhaps there's an easier way?
More Edit:
Also tried these:
Running enca to detect encoding of the file
Told me it was ASCII
Tried iconv to convert from ASCII to UTF8. Got an error
What did work was deleting the couple of erroneous lines that it complained about. But that didn't really solve the real problem.
Based on one short sentence, it sounds like you have text in one encoding (e.g. ANSI/ASCII) and you are telling PostgreSQL that it's actually in another encoding (Unicode UTF8). All the different tools you would be using: PostgreSQL, Bash, some programming language, another programming language, other data from somewhere else, the text editor, the IDE, etc., all have default encodings which may be different, and some step of the way, the proper conversions are not being done. I would check the flow of data where it crosses these kinds of boundaries, to ensure that either the encodings line up, or the encodings are properly detected and the text is properly converted.
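As a quick way to find the offending bytes, here is a Python sketch (the file name is hypothetical; it assumes the dump should be UTF-8):
# Report the first byte offset at which the file stops being valid UTF-8.
data = open("dump.sql", "rb").read()
try:
    data.decode("utf-8")
    print("file is valid UTF-8")
except UnicodeDecodeError as e:
    print("invalid byte %#x at offset %d" % (data[e.start], e.start))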
If you know the encoding of the dump file, you can convert it to utf-8 by using recode. For example, if it is encoded in latin-1:
recode latin-1..utf-8 < dump_file > new_dump_file
If you are not sure about the encoding, you should see how sqlite was configured, or maybe try some trial-and-error.
I figured it out. It wasn't really an encoding issue.
SQLite escaped strings in its output differently than Postgres expects. There were some cases where 'asdf\xd\foo' was output. I believe the '\x' was causing Postgres to expect the following characters to be a Unicode escape.
The solution is to dump each table individually in CSV mode in sqlite3.
First
sqlite3 db/trac.db .schema | psql
Now, this does the trick, for the most part, to copy the data back in:
for table in `sqlite3 db/trac.db .schema | grep TABLE | sed 's/.*TABLE \(.*\) (/\1/'`
do
    # -e so that \n becomes a real newline between the two sqlite3 commands
    echo -e ".mode csv\nselect * from $table;" | sqlite3 db/trac.db | psql -c "copy $table from stdin with csv"
done
Yeah, kind of a hack, but it works.