CSS file appears as rubbish

I've been using Cloud9 (the Ace editor) for a while, and recently we built a local server-cache solution that allows us to store local cache files of preview files from Cloud9.
When I include a file directly from preview.c9, it works! But when I run the local cache version, I end up with a file that looks like this:
‹������ÅXmoÛ6þî_Á¹Ü‘å·$ŽŠaÅú2Y»�Éö¥ÛJ¢mV/Ô(*MZô¿ïŽ/2¥ÈA‹í‡$6yw<Þ=÷Ü1Ï’•5Sdüçõ«`=~:z¶¥
[… many more lines of similar binary rubbish truncated …]
Any idea what's causing this?
PS: Sorry for my poor English.

I found an error in the encoding import.
I will post the error and the fix as soon as I have tested it thoroughly.
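A hint worth noting: the leading ‹ character is byte 0x8B when read as Windows-1252, and together with the unprintable byte before it this matches the gzip magic number (0x1F 0x8B). So the cache is quite possibly storing the compressed response body and serving it without decompressing it, or without a matching Content-Encoding: gzip header. A minimal sketch of how one might check for and undo this, assuming the cache entries are plain files on disk (the file name is hypothetical):

import gzip

def read_cache_file(path):
    # Read a cache entry and decompress it if it starts with the gzip
    # magic number (0x1F 0x8B); assumes the payload is UTF-8 text.
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] == b"\x1f\x8b":
        data = gzip.decompress(data)
    return data.decode("utf-8")

print(read_cache_file("cached_styles.css")[:200])  # hypothetical cache file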


May need either decryption or Lua conversion of some type?

I don't have VB or anything installed, and I have absolutely no clue how to properly code (I can read and understand super basic code), but I know nothing about functions, methods, etc.
I've got a Lua file that I want to decode so I can actually read it. From what I've read here on Stack Overflow, I've gathered that maybe it's not encrypted, but rather in some Lua crypt format or something? Any input would be much appreciated! Thanks!
Here's the code:
FXAP JÈŸ‘Ä#I«ILmOÿ‰ý­Ún_ô­0J8ôã*2-¿ã´*¶p‹ý
[… many more lines of similar binary rubbish truncated …]
I've tried basic research and online encryption-detection websites (none worked).
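One quick check that doesn't require identifying the whole format is to look at the file's first few bytes: compiled Lua bytecode always begins with the byte 0x1B followed by the text "Lua", so a file that begins with "FXAP" is neither plain-text Lua nor standard compiled Lua, and is presumably some tool-specific packed or encrypted container. A minimal Python sketch of that check (the file name is hypothetical):

def identify_header(path):
    # Print the first four bytes and compare them against known magic numbers.
    with open(path, "rb") as f:
        header = f.read(4)
    print("first bytes:", header)
    if header == b"\x1bLua":
        print("compiled Lua bytecode")
    elif header[:2] == b"\x1f\x8b":
        print("gzip-compressed data")
    else:
        print("not a standard Lua or compressed format")

identify_header("script.lua")  # hypothetical file name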

Colab not recognising an existing directory

I have been trying to run an OpenPose model on Colab but haven't been able to do so, because Colab doesn't recognise the directory.
I have provided a screenshot of the code in this message; any help or direction will be highly appreciated!
Edit 1: a modified command based on the first answer
code:
!cd openpose && ./build/examples/openpose/openpose.bin -image_dir /drive/My\ Drive/research_project/Fall\ Detection/$category/testdata/video$video --render_pose 0 --disable_blending -keypoint_scale 3 --display 0 -write_json /drive/My\ Drive/research_project/Fall\ Detection/$category/jsondata/video$video
output:
Error:
Folder /drive/My Drive/research_project/Fall Detection/Coffee_room/testdata/video0/ does not exist.
I believe you need to remove the '..', as you are already in the '/content' folder after the os.chdir('/content') command.
If that's not it, you are also missing '/research project' after '/My Drive' in the line before the last.
With the %cd operation you have already moved yourself to [...]/Coffee_room/testdata, so when you then try the os.chdir command, it throws an error. At least I think so; the screenshot doesn't let me copy the code to recreate the same situation, so it's a bit hard to tell.
Try to put your code in the right format inside the question, like this:
print('Hello, this is my code')
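As a general note for this kind of error: on Colab, Google Drive is mounted under /content/drive, so paths normally need that prefix rather than starting at /drive. A minimal sketch to verify the directory before calling OpenPose (the path is copied from the question and may need adjusting):

import os
from google.colab import drive  # available only inside Colab

drive.mount('/content/drive')   # Drive contents appear under /content/drive/My Drive

path = '/content/drive/My Drive/research_project/Fall Detection/Coffee_room/testdata/video0'
print(os.path.exists(path))     # confirm the folder exists before passing it to openpose.bin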

How to alias an output for automatic connection?

I currently have this code:
self.add_subsystem('IntegrateForTheta2Ue6', utilities.CumulativeIntegrateTrapeziums(n=n),
                   promotes_inputs=[('x', 'panel_lengths'),
                                    ('x0', 'stagnation_point_position'),
                                    ('y', 'ue5'),
                                    ('y0', 'panel_start_external_tangential_velocity')],
                   promotes_outputs=[('cumulative_integral', 'intue5')])
self.add_subsystem('ThwaitesCalculateMomentumThickness', ThwaitesCalculateMomentumThickness(n=n),
                   promotes_inputs=['external_tangential_velocities',
                                    'intue5',
                                    'kinematic_viscosity'],
                   promotes_outputs=['momentum_thickness'])
It does not throw any errors when run, but when debugging it is clear that the output for intue5 (aka cumulative_integral) is not being passed into ThwaitesCalculateMomentumThickness; it appears as all ones. When I instead try self.connect('IntegrateForTheta2Ue6.intue5', 'ThwaitesCalculateMomentumThickness.intue5'), I get: Attempted to connect from 'IntegrateForTheta2Ue6.intue5' to 'ThwaitesCalculateMomentumThickness.intue5', but 'IntegrateForTheta2Ue6.intue5' doesn't exist.
Am I making a mistake in my output aliasing, or is this a bug?
Updating to the latest version of OpenMDAO worked. I believe I was already calling run_model(), so I'm not sure why it wasn't working.
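For what it's worth, the connect error itself looks like expected behaviour, assuming OpenMDAO's usual naming rules: promoting with an alias exposes the output as intue5 at the group level, while the variable's full path keeps its original name, cumulative_integral. So an explicit connection, if promotion were dropped, would look roughly like this sketch reusing the question's subsystems:

# Inside the same Group's setup() as in the question, with no promotion:
self.add_subsystem('IntegrateForTheta2Ue6',
                   utilities.CumulativeIntegrateTrapeziums(n=n))
self.add_subsystem('ThwaitesCalculateMomentumThickness',
                   ThwaitesCalculateMomentumThickness(n=n))
# The source must use the output's original name, not the promoted alias:
self.connect('IntegrateForTheta2Ue6.cumulative_integral',
             'ThwaitesCalculateMomentumThickness.intue5')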

How to save a file into a path containing special characters such as '&'? (an '&' that is different from the '&' typed on an English keyboard)

I need to write out a file in R to a certain path that contains a special character. The path is something like this: C:/Users/Technology & Innovation/Webscraping files/US_data/data
It works totally fine when I access this path through Python, but I cannot access the same path in R. And I cannot change the path name or remove the '&', as this path is used by a lot of people. Does anyone have a good idea of how to solve it?
I found out it is an '&' that is subtly different from the '&' we usually type on an English keyboard. Maybe that is what is causing the problem?
Here is what I have tried:
write.csv(df, 'C:/Users/Technology & Innovation/Webscraping files/US_data/data/file.csv')
write.csv(df, 'C:\\Users\\Technology & Innovation\\Webscraping files\\US_data/data/file.csv')
No matter whether I try to read or write a file, it does not work in my case.
I also tried resetting the working directory path and got this error message:
Error in setwd("C:/Users/Technology & Innovation/Webscraping files/US_data/data") : cannot change working directory
Write it like this:
C:\\Users\\Technology & Innovation\\Webscraping files\\US_data\\data
Also, you can change your current directory.
Changing your current directory will help, because then you can write read.csv("filename.csv") or write.csv(name_of_file, "filename.csv") as is, without mentioning the path.
If you write to a full path, you have to get the syntax right, with the data first and the quoted path second:
write.csv(df, "C:\\Users\\Technology & Innovation\\Webscraping files\\US_data\\data\\filename.csv")
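Since the path opens fine from Python, one way to pin down which "ampersand" character the folder name really contains is to print its code points from Python, then paste the exact character into the R string. A minimal sketch (the parent directory is an assumption):

import os

# Show the exact code point of every character in the folder name,
# to reveal which "ampersand" look-alike it actually uses.
for name in os.listdir('C:/Users'):
    if 'Technology' in name:
        print(name, [hex(ord(c)) for c in name])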

How to import Geonames into SQLite?

I need to import the GeoNames database (http://download.geonames.org/export/dump/) into SQLite (the file is about a gigabyte in size, ±8,000,000 records, tab-delimited).
I'm using the built-in SQLite capabilities of Mac OS X, accessed through Terminal. All goes well until record 381174 (tested with an older file; the exact number varies slightly depending on the version of the GeoNames database, as it is updated every few days), where the error "expected 19 columns of data but found 18" is displayed.
The exact line causing the problem is:
126704 Gora Kyumyurkey Gora Kyumyurkey Gora Kemyurkey,Gora Kyamyar-Kup,Gora Kyumyurkey,Gora Këmyurkëy,Komur Qu",Komur Qu',Komurkoy Dagi,Komūr Qū’,Komūr Qū”,Kummer Kid,Kömürköy Dağı,kumwr qwʾ,كُمور قوء 38.73335 48.24133 T MT AZ AZ 00 0 2471 Asia/Baku 2014-03-05
I've tested various countries separately, and the western countries all imported completely without a problem, leading me to believe the problem is somewhere in the exotic characters used in some entries. (I've put this line into a separate file and tested it with several other database programs; some gave an error, some imported without a problem.)
How do I solve this error, or are there other ways to import the file?
Thanks for your help and let me know if you need more information.
Regarding the question title, a preliminary search resulted in:
the GeoNames format description ("tab-delimited text in utf8 encoding")
https://download.geonames.org/export/dump/readme.txt
some libraries (untested):
Perl: https://github.com/mjradwin/geonames-sqlite (+ autocomplete demo JavaScript/PHP)
PHP: https://github.com/robotamer/geonames-to-sqlite
Python: https://github.com/commodo/geonames-dump-to-sqlite
GUI (mentioned by @charlest):
https://github.com/sqlitebrowser/sqlitebrowser/
The SQLite tools have import capability as well:
https://sqlite.org/cli.html#csv_import
It looks like a bidirectional-text issue. "كُمور قوء" is expected to be at the end of the comma-separated alternate-name list; however, because it is right-to-left (RTL) text, it displays on the wrong side of the latitude and longitude values.
I don't have visibility of your import method, but it seems likely to me that this is why it thinks a column is missing.
I found the same problem using the script from the GeoNames forum here: http://forum.geonames.org/gforum/posts/list/32139.page
Despite adjusting the script to run on Mac OS X (Sierra 10.12.6), I was getting the same errors. But thanks to the script author, since it helped me get the SQLite database file created.
After a little while I decided to use DB Browser for SQLite (version 3.11.2) rather than continue with the script.
I had errors with this method as well, and found that I had to set the "Quote character" setting in the import dialog to the blank state. Once that was done, the import of the full allCountries.txt file ran to completion, taking just under an hour on my MacBook Pro (an old one, but with an SSD).
Although I have not dug in deeper, I am assuming that the GeoNames text files must not be quote-parsed in any way. Each line simply needs to be handled as tab-delimited UTF-8 strings.
At the time of writing, allCountries.txt is 1.5GB with 11,930,517 records. The SQLite database file is just short of 3GB.
Hope that helps.
UPDATE 1:
Further investigation has revealed that it is indeed due to the embedded quotes in the GeoNames files; looking here, https://sqlite.org/quirks.html#dblquote shows that SQLite has problems with quotes. Hence you need to be able to switch off quote parsing in SQLite.
Despite the 3.11.2 version of DB Browser being based on SQLite 3.27.2, which does not have the required modifications to ignore the quotes, I can only assume it must be escaping the quotes when you set the "Quote character" to blank.
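To illustrate the "no quote parsing" point, here is a minimal Python sketch that loads a GeoNames dump into SQLite while treating every line as plain tab-separated UTF-8, with quote handling disabled entirely. The column layout follows the 19 fields documented in the GeoNames readme linked above; the database, table, and column names are assumptions for the example:

import csv
import sqlite3

conn = sqlite3.connect("geonames.db")
conn.execute("""CREATE TABLE IF NOT EXISTS geoname (
    geonameid INTEGER PRIMARY KEY, name TEXT, asciiname TEXT,
    alternatenames TEXT, latitude REAL, longitude REAL,
    feature_class TEXT, feature_code TEXT, country_code TEXT, cc2 TEXT,
    admin1 TEXT, admin2 TEXT, admin3 TEXT, admin4 TEXT,
    population INTEGER, elevation TEXT, dem TEXT, timezone TEXT,
    moddate TEXT)""")

with open("allCountries.txt", encoding="utf-8", newline="") as f:
    # QUOTE_NONE makes embedded " characters (e.g. Komur Qu") plain data,
    # so every row keeps its 19 tab-separated fields.
    reader = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE)
    conn.executemany(
        "INSERT INTO geoname VALUES (" + ",".join("?" * 19) + ")", reader)

conn.commit()
conn.close()

The same idea applies to any import path: as long as the reader never interprets quote characters, an embedded " in a name cannot swallow a field separator and shift the column count.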
