How to word a question to indicate the default answer in a console application?
Coca-Cola size [] Medium [S] Small [L] Large
Coca-Cola size [Enter] Medium [S] Small [L] Large
Coca-Cola size [Medium] [S] Small [L] Large
Other suggestions?
I like the way certain Linux stuff does it:
Allow non-root access (Y/N) [default=N]: _
For your specific use case, you could do:
Coca-Cola size (M)edium, (S)mall or (L)arge [default=M]: _
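If it helps, here is a minimal shell-script sketch of how such a prompt can behave (the variable name and the echo are illustrative, not part of the question): an empty input, i.e. just pressing Enter, falls back to the default.
read -p "Coca-Cola size (M)edium, (S)mall or (L)arge [default=M]: " size
size=${size:-M}   # empty input (just Enter) keeps the default M
echo "Selected size: $size"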
I have only one requirement. I need to read the PDF page size and determine whether the page is bigger than 17x17 inches, so that I don't send such PDFs to an external service that rejects them.
Is there any free library that works on .NET Core? I wasn't able to find one. Or has anyone implemented this by reading the binary file?
A PDF does not HAVE TO declare a single page size, since every page can be a different size; thus 100 pages may have 100 different page sizes.
However, many PDFs contain a text entry for one or more pages, so you can (depending on how the file was constructed) parse it as text for /MediaBox and/or /CropBox dimensions.
So the first example PDF I pick on and open to search for /MediaBox in WordPad tells me it's 210 mm x 297 mm (i.e. my local A4): /MediaBox [0 0 594.95996 841.91998], and for a 3-page file all 3 entries are the same.
You can try that from the command line with
type "filename.pdf" | find /i "/media"
but this may not work in all cases, so for a better chance of a result (with more chaff) use
type "filename.pdf" | findstr /i "^/media ^/crop"
The MediaBox values are expressed in the default unit of points, at 72 per inch (so they can be divided by 72 as a rough guide); however, that's not quite your aim, since you know you don't want anything over 17 x 72 = 1224.
So, in simple terms, if either value were over 1224, I could reject the file as "TOO BIG".
HOWEVER, I also need to consider those two leading 0 values (the box origin): if one were +100, the limit on the corresponding upper value becomes 100 more; and, more importantly, if one were -100, your desired 17" restriction would already be exceeded at 1124.
So you could write a simple test in any method or language (even CMD); however, it would need too much expansion to cover all cases, so:
Seriously, I would use / shell out to a one-line command tool such as xpdf/poppler pdfinfo to parse all the different types of PDF, and then grep its output.
The output of the two is similar, with many lines, but for your need
xpdf\pdfinfo -box filename
gives Page size: 594.96 x 841.92 pts (A4) (rotated 0 degrees)
and
poppler\pdfinfo -box filename
gives Page size: 594.96 x 841.92 pts (A4)
Thus, to check that the file does not exceed 17" in either direction, it should be easy to set up a comparison testing that both values are under 1224.01.
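As a rough sketch (assuming poppler's pdfinfo is on the PATH and that the field positions match the "Page size: W x H pts" line shown above; note that pdfinfo already reports the effective page size, so the MediaBox origin no longer matters), the comparison could be scripted as:
pdfinfo -box "filename.pdf" | awk '
  /^Page size:/ && ($3 > 1224.01 || $5 > 1224.01) { big = 1 }
  END { if (big) { print "TOO BIG"; exit 1 } else print "OK" }'
The non-zero exit status makes it easy to branch on the result from a calling script or from code that shells out to the tool.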
I have a WordPress site and I have installed mod_pagespeed on my AWS server.
I have used the combine_css and combine_javascript filters, but they are not combining the JS and CSS into one file each; they are combining them into several different groups.
I want to combine my JS and CSS into just two combined files: one for JS and one for CSS.
I have tried the settings below in /pagespeed.conf, but it's not working.
My mod_pagespeed settings:
ModPagespeedRewriteLevel PassThrough
ModPagespeedEnableFilters add_head,combine_css,combine_javascript,convert_meta_tags,extend_cache,fallback_rewrite_css_urls,flatten_css_imports,inline_css,inline_import_to_link,inline_javascript,rewrite_css,rewrite_images,rewrite_javascript,rewrite_style_attributes_with_url
ModPagespeedFileCacheSizeKb 102400
ModPagespeedFileCacheCleanIntervalMs 3600000
ModPagespeedLRUCacheKbPerProcess 1024
ModPagespeedLRUCacheByteLimit 16384
ModPagespeedCssFlattenMaxBytes 2048
ModPagespeedCssInlineMaxBytes 2048
ModPagespeedCssImageInlineMaxBytes 0
ModPagespeedImageInlineMaxBytes 3072
ModPagespeedJsInlineMaxBytes 2048
ModPagespeedCssOutlineMinBytes 3000
ModPagespeedJsOutlineMinBytes 3000
ModPagespeedMaxCombinedCssBytes -1
ModPagespeedMaxCombinedJsBytes 92160
ModPagespeedCombineAcrossPaths off
If you have a recent version of mod_pagespeed installed, viewing the HTML source of a page that shows the problem, with ?PageSpeedFilters=+debug appended to the query string, should help. The debug filter writes a comment explaining why combining failed.
I wrote a short piece about that:
http://blog.iispeed.com/2015/04/debugging-page-speed-debug-filter.html
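For example, a quick way to see those debug comments from the command line (the URL is just a placeholder for a page on your site that shows the problem):
curl -s 'http://www.example.com/some-page/?PageSpeedFilters=+debug' | grep -i -B1 -A2 'combin'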
I have the following munin-generated graph of nginx requests:
What does the 'm' on the y-axis mean?
The nginx munin plugin at /usr/share/munin/plugins/nginx_request is extracting:
if ($response->content =~ /^\s+(\d+)\s+(\d+)\s+(\d+)/m) {
    print "request.value $3\n";
}
This means it is taking the third number from nginx_status, which appears to be the total accumulated request count. Here is an example execution from this same server:
$ curl http://127.0.0.1/nginx_status
Active connections: 1
server accepts handled requests
2936 2936 4205
Reading: 0 Writing: 1 Waiting: 0
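As an aside (this is not part of the plugin, just an illustration), sampling that third counter twice shows how a per-second rate below 1 arises, which is exactly what the 'm' prefix on the graph denotes:
c1=$(curl -s http://127.0.0.1/nginx_status | awk 'NR==3 {print $3}')
sleep 60
c2=$(curl -s http://127.0.0.1/nginx_status | awk 'NR==3 {print $3}')
echo "scale=3; ($c2 - $c1) / 60" | bc   # e.g. prints .400, plotted as 400m req/sec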
The munin nginx plugin is passing the following to rrdtool:
print "graph_title Nginx requests\n";
print "graph_args --base 1000\n";
print "graph_category nginx\n";
print "graph_vlabel Request per second\n";
print "request.label req/sec\n";
print "request.type DERIVE\n";
print "request.min 0\n";
print "request.label requests port $port\n";
print "request.draw LINE2\n";
The 'm' is the 'milli' SI prefix, so 400 m means 0.400.
By default, RRDtool uses SI prefixes: 2000 is shown as 2k, 0.01 is shown as 10m, and so on. This isn't normally an issue, except when there are no units or the thing being measured doesn't make sense in fractional parts.
The way to stop this behaviour is to not use %s in the GPRINT format (this fixes the legend) and to use the --units-exponent=0 option (this fixes the Y-axis). I don't know whether it is possible to make munin do this directly, but you might be able to modify the plugin to add '--units-exponent 0' to the graph_args.
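For illustration only, here is what such an rrdtool invocation could look like (the .rrd file name, data-source name and colour are made up, not munin's actual naming): --units-exponent 0 keeps the Y-axis in plain numbers, and a GPRINT format without %s prints the legend value without an SI prefix.
rrdtool graph requests.png \
  --base 1000 --units-exponent 0 --vertical-label 'req/sec' \
  DEF:req=nginx_request.rrd:request:AVERAGE \
  LINE2:req#22AA22:'req/sec' \
  GPRINT:req:AVERAGE:'avg %6.2lf'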
I noticed this question has been asked many times in the past, and surfing the web I found many pages about it. However, the proposed solutions rarely seem to work, and in my case the problem does not concern a program that I wrote. So I'll give it another try here.
I recently installed Linux Mint 14 on my laptop. Once the OS was spick and span, I started to install the software I need, among it netgen (a mesh generator). I tried both ways: download + unpack + compile + install, and Synaptic. Either way, this is the output I get when I try to run the program:
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 490
Current serial number in output stream: 491
As I said, I surfed the web, and apparently this is thought to be linked to some problem in the X server configuration. And here the mess starts. Someone says I should modify /etc/X11/xorg.conf, adding the lines
Option "Videoram" "65536"
Option "Cachelines" "1980"
under the "Device" section. Unfortunately, I have no such file, as apparently in recent distros the X configuration has been moved to /usr/share/X11/xorg.conf.d/* and is now split into several files. The one about the monitor and graphics should be called 10-monitor.conf... which I don't have. I tried to create one, following the instructions at this link, and then add those lines, but nothing happened. To be fair, I'm not 100% sure I generated the file correctly, since I'm not sure how to detect the driver for my graphics card.
I don't know how much and what kind of information people would need to get an idea of how to fix this problem. Here's what I think might be useful.
The output of 'lspci | grep VGA' is
01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee
ATI RS880M [Mobility Radeon HD 4200 Series]
My current /usr/share/X11/xorg.conf.d/10-monitor.conf is the following
Section "Monitor"
Identifier "Monitor0"
Modeline "1920x1080_60.00" 172.80 1920 2040 2248 2576 1080 1081 1084 1118 -HSync +Vsync
EndSection
Section "Device"
Identifier "LVSD"
Driver "fglrx" #Choose the driver used for this monitor
EndSection
Section "Screen"
Identifier "Screen0"
Device "LVDS"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
Modes "1920x1080_60.00" "1366x768"
EndSubSection
EndSection
Under the section "Device." Unfortunately, I have no such file
Try creating your own xorg.conf file; placing it in this location will override your X settings after restarting X, or simply after restarting the computer.
mkdir -p /etc/X11/xorg.conf.d/
cp /etc/X11/xorg.conf.d/xorg.conf /etc/X11/xorg.conf.d/xorg.conf.bk # in case it exists
cp /usr/share/X11/xorg.conf.d/10-monitor.conf /etc/X11/xorg.conf.d/xorg.conf
The content of /etc/X11/xorg.conf.d/xorg.conf would then look like this (with your options added):
Section "Monitor"
Identifier "Monitor0"
Modeline "1920x1080_60.00" 172.80 1920 2040 2248 2576 1080 1081 1084 1118 -HSync +Vsync
EndSection
Section "Device"
Identifier "LVSD"
Driver "fglrx" #Choose the driver used for this monitor
Option "Videoram" "65536"
Option "Cachelines" "1980"
EndSection
Section "Screen"
Identifier "Screen0"
Device "LVDS"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
Modes "1920x1080_60.00" "1366x768"
EndSubSection
EndSection
This could also be related to the driver you're using; there are other common drivers available, such as:
Driver "fbdev"
Driver "vesa"
Driver "fglrx"
The fbdev driver supports all hardware where a framebuffer driver is available.
The vesa driver supports most VESA-compatible video cards. There are some known exceptions, and those should be listed here.
fglrx is an X.Org (7.x) driver for ATI (Mobility(TM)) RADEON® and (Mobility(TM)) FireGL(TM) based video cards. The driver provides hardware acceleration for 3D graphics and video playback. It includes support for dual displays, TV output and, as of version 8.21.7, also OpenGL 2.0 (GLSL).
Depending on which driver you choose, certain options, functionality and compatibility will or will not be available; you could change the driver and test with the options you said would work.
Finally, you have hundreds of options here to play with X11.
When I had written a lot of text in my program, I got a similar error. When I reduced the amount of text, the error disappeared. I think you should also reduce how much you display, or reorganise how things are laid out on screen.
I hope I could help you despite the broken English ;)
I have a very large file (~10 GB) that can be compressed to < 1 GB using gzip. I'm interested in using sort FILE | uniq -c | sort to see how often a single line is repeated; however, the 10 GB file is too large to sort and my computer runs out of memory.
Is there a way to compress the file while preserving newlines (or an entirely different method altogether) that would reduce the file to a size small enough to sort, yet still leave it in a sortable condition?
Or is there any other method of finding out / counting how many times each line is repeated inside a large file (a ~10 GB CSV-like file)?
Thanks for any help!
Are you sure you're running out of memory (RAM) with your sort?
My experience debugging sort problems leads me to believe that you have probably run out of disk space for sort to create its temporary files. Also recall that the disk space used for sorting is usually in /tmp or /var/tmp.
So check your available disk space with:
df -g
(some systems don't support -g; try -m for megabytes or -k for kilobytes)
If you have an undersized /tmp partition, do you have another partition with 10-20 GB free? If so, tell your sort to use that directory with
sort -T /alt/dir
Note that for sort version
sort (GNU coreutils) 5.97
The help says
-T, --temporary-directory=DIR use DIR for temporaries, not $TMPDIR or /tmp;
multiple options specify multiple directories
I'm not sure whether this means you can combine a bunch of -T /dir1 -T /dir2 ... options to reach your 10 GB x sort-factor of space or not. My experience was that it only used the last directory in the list, so try to use a single directory that is big enough.
Also, note that you can go to whatever directory you are using for sorting and watch the activity of the temporary files used for sorting.
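Putting the pieces together, a possible pipeline (the file and directory names are placeholders) that streams the compressed file and keeps sort's temporary files on the bigger partition:
zcat bigfile.gz \
  | sort -T /alt/dir \
  | uniq -c \
  | sort -rn -T /alt/dir \
  | head -20      # the 20 most frequently repeated lines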
I hope this helps.
As you appear to be a new user here on S.O., allow me to welcome you and remind you of four things we do:
1) Read the FAQs.
2) Please accept the answer that best solves your problem, if any, by pressing the checkmark sign. This gives the respondent with the best answer 15 reputation points. It is not subtracted (as some people seem to think) from your reputation points ;-)
3) When you see good Q&A, vote them up using the gray triangles, as the credibility of the system is based on the reputation that users gain by sharing their knowledge.
4) As you receive help, try to give it too, answering questions in your area of expertise.
There are some possible solutions:
1 - Use any text-processing language (Perl, awk) to extract each line, save the line number and a hash for that line, and then compare the hashes.
2 - Can / want to remove the duplicate lines, leaving just one occurrence per file? You could use a script (command) like:
awk '!x[$0]++' oldfile > newfile
3 - Why not split the file according to some criterion? Supposing all your lines begin with letters (see the sketch after this list):
- break your original_file into smaller files, one per initial letter: grep "^a" original_file > a_file
- sort each small file: a_file, b_file, and so on
- check for duplicates, count them, and do whatever you want.
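A rough sketch of option 3 (file names are illustrative), assuming the lines start with lowercase letters:
for c in {a..z}; do
  grep "^$c" original_file > "part_$c"
  [ -s "part_$c" ] && sort "part_$c" | uniq -c | sort -rn > "counts_$c"
done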