How do you use csslint from the command line?

I'm new to Ubuntu, I'll be the first to admit that, but I need to integrate csslint into a CI build script, and I found the about page on the csslint site unhelpful. It gives two options for installing csslint, but they require either Node or Rhino:
Node.js:
sudo npm install -g csslint
Rhino.js:
java -jar rhino.jar csslint-rhino.js --rules= ~ /* suppressed for simplicity */
After googling for either of these in a format I know how to work with, I found someone on Google Groups asking a similar question. The answer was that Rhino is not a ready-made product you install and run, but a library you build against.
All I need to do is:
pass CSS files to csslint through bash with csslint args
get the response back
evaluate whether or not the build should fail due to violations
Is there anywhere I can find step-by-step instructions that include dependencies such as node or rhino?
Thanks everyone.

Found the answer myself and decided to put it here to help others.
sudo apt-get install npm
sudo npm install -g csslint
csslint pathtofile.css
Have your feelings hurt.
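For the CI step itself, a minimal sketch of the fail-the-build logic (an assumption-laden sketch: it relies on csslint exiting with a non-zero status when it reports errors, which is worth verifying against your installed version, and css/ is just a placeholder for your stylesheet directory):

#!/bin/bash
# Lint all stylesheets; abort the build if csslint reports violations.
if ! csslint css/*.css; then
    echo "csslint reported violations, failing the build" >&2
    exit 1
fi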

Related

Error: EACCES: permission denied, unlink '/usr/local/bin/npm'

First and foremost, I have looked into these previous posts for answers:
EACCES: permission denied, unlink
Error: EACCES: permission denied, unlink '/usr/local/bin/npx'
Error while building or running ngx-bootstrap tests
I do not see the answer I seek in any of these posts, or maybe I am not knowledgeable enough to work out how to use those answers to fix my issue. I am still learning, so if you answer my post, please explain what is being done and why, so that I can fully grasp it. I would appreciate it very much.
In VS Code I was trying to work on a project but needed to update the npm version.
After researching the f*** (pardon my language) out of this, I did the following:
After reading this article: https://flaviocopes.com/npm-fix-missing-write-access-error/
I did what he suggested and then tried to do the npm update.
I then listed the directory to find out who owns it.
After this I am pretty much at a loss as to what to do next. Why do I have three 'drwxr-xr-x' entries? What does that mean exactly? That I have three npm packages? Can I combine them all into one? Or would it be better to delete everything and start from scratch, and would I just run into the same issue again?
I also read this:
To minimize the chance of permissions errors, you can configure npm to use a different directory. In this example, you will create and use a hidden directory in your home directory.
Back up your computer. On the command line, in your home directory, create a directory for global installations:
mkdir ~/.npm-global
Configure npm to use the new directory path:
npm config set prefix '~/.npm-global'
In your preferred text editor, open or create a ~/.profile file and add this line:
export PATH=~/.npm-global/bin:$PATH
On the command line, update your system variables:
source ~/.profile
To test your new configuration, install a package globally without using sudo.
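Put together, those steps look something like this (a sketch assuming a bash login shell that reads ~/.profile; the test package at the end is just an example):

mkdir ~/.npm-global
npm config set prefix '~/.npm-global'
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.profile
source ~/.profile
npm install -g csslint    # any global package works as the test; no sudo needed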
But will that work if I have to unlink '/usr/local/bin/npm', as the terminal says? In all honesty, I would prefer not to reconfigure, as I would need to back up everything. Does anyone have a solution or a suggestion as to what to do?
Thank you all in advance. And again I would like to reiterate that I am still learning, so please be kind and elaborate on your answer.
See if you have another path set up, such as /usr/local/share/npm/bin, and then just run the install like this:
sudo npm install -g npm@latest
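If you want to see which npm and which prefix your shell is actually picking up before reinstalling, something like this should show it (a sketch using standard npm and shell commands):

which npm                 # path of the npm binary your shell resolves
npm config get prefix     # directory global packages are installed under
echo $PATH                # confirm which bin directories are searched, and in what order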

How to install JSHint

I get this popup in VSCode: Failed to load jshint library. Please install jshint in your workspace folder using 'npm install jshint' or globally using 'npm install -g jshint' and then press Retry.
Tried using Command Prompt in Win10
Can I get a literal step-by-step process, please?
(Please note I am absolutely new to all of this; I am currently learning HTML/CSS and the above popped up.)
Thanks in Advance
QB
Just follow the prompt given: open Command Prompt in Win10 and run
npm install -g jshint
The -g flag installs it globally, so it is available for all future work.
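To confirm the install worked, you can check the version and lint a file (a sketch; app.js is just a placeholder filename):

jshint --version
jshint app.js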
For future reference: always try to copy and paste the error/prompt into Google first. You will often be able to find the answer yourself, and you will learn more about the issue by seeing how many people hit the same thing when starting out and how they fixed it.
Also for future reference, bookmark this: https://www.w3schools.com/whatis/whatis_npm.asp. There is a lot there for a beginner, explained in simple words. Hope this helped you.

How do you create a fake install of a debian package for use in testing?

I have a package that previously only targeted RPM based distros for which I am now building .deb packages for Debian based distros.
The aim is to simulate a test installation from user space that is isolated from the system you are building on. The build machine may be multi-user, and you do not want to require root access just to build the software. Many of our tests already simulate the installation directory structure; this is the next step up, simulating an actual installation using the packages that were built.
For the RPM packages I was able to create test installations using:
WSDIR=/where/I/want/my/tests/to/run
rpmdb --initdb --dbpath "$WSDIR"/rpmdb
rpm --relocate /opt="$WSDIR"/opt --dbpath $WSDIR/rpmdb -i <package>.rpm
The equivalent in the Debian world is something like:
dpkg --force-not-root --admindir=$WSDIR/dpkg --root=$WSDIR/install --install "$DEB"
However, I am stuck over the equivalent to the rpmdb --initdb step.
Note that I can just unpack the archive using:
dpkg-deb -x "$DEB" $WSDIR/install
But I would prefer to be closer to how a real package is installed.
Also, I don't think this will run the preinstall and postinstall scripts.
Similar questions have suggested using debootstrap to create a chroot environment, but that creates a completely new installation. As well as being overkill, it is too slow for an automated test. I intend to use this for quick tests of the installation package prior to further testing in actual test environments.
My experiments so far:
(cd $WSDIR/dpkg && mkdir alternatives info parts triggers updates)
cp /var/lib/dpkg/status $WSDIR/dpkg/status
have at best resulted in:
dpkg: error: unable to access dpkg status area: No such file or directory
which does not clearly indicate what is wrong.
So how do you create a dpkg admin directory?
Cross posted as https://superuser.com/questions/1271145/how-do-you-create-a-dpkg-admin-directory
Update 24/11/2017
I've tried copying the dpkg dir from an environment created by cowdancer (which uses debootstrap under the hood), and copying the real one from /var/lib/dpkg, but I still get the same error message, so perhaps the error (and/or the --admindir option) doesn't mean quite what I think it means.
Note that:
sudo dpkg --force-not-root --root=$WSDIR/install --admindir=/var/lib/dpkg --install "$DEB"
does work. So it is something to do with the admin dir.
I've also retitled the question as "How do you create a dpkg admin directory" is interesting question but the answer is not necessarily the solution to my problem.
The minimal way to create a dpkg database is something like this:
$ mkdir -p db/{updates,info}
$ touch db/{status,diversions,statoverride}
If you want to use that as non-root, currently the best way is to use fakeroot.
$ mkdir -p fsys
$ PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --log=/dev/null --admindir=db --instdir=fsys -i pkg.deb
But take into account that passing --root after --admindir or --instdir will reset those paths, which I think is the problem you have been having here.
Also, using sudo together with --force-not-root does not make much sense :) and is definitely less confined than using just fakeroot. In the near future it will be possible to run dpkg fully unprivileged in some local tree.
I eventually found an answer for this. Thanks to Guillem Jover for some of this.
Pasting a copy of it here:
mkdir fake
mkdir fake/install
mkdir -p fake/dpkg/info
mkdir -p fake/dpkg/updates
touch fake/dpkg/status
PATH=/sbin:/usr/sbin:$PATH fakeroot dpkg --force-script-chrootless \
  --log=`pwd`/fake/dpkg.log --root=`pwd`/fake --instdir=`pwd`/fake \
  --admindir=`pwd`/fake/dpkg --install *.deb
Some points to note:
--force-not-root is not enough. fakeroot is required.
ldconfig and start-stop-daemon must be on the path.
(hence PATH=/sbin:/usr/sbin:$PATH)
The log file needs to be relocated from the default /var/log/dpkg.log
The order of arguments is significant. If used, --root must come before --instdir and --admindir.
The admindir is supposed to have the installation dir as a prefix.
If the package contains any pre- or post-installation scripts (preinst, postinst), then --force-script-chrootless is required, as these scripts are normally run via chroot(), which gives 'operation not permitted' when attempted under fakeroot.
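To check that the package really registered in the private database rather than the system one, you can point dpkg's query tool at the same admindir (a sketch; the package name is a placeholder for whatever you installed):

dpkg-query --admindir=`pwd`/fake/dpkg -l                # list packages known to the fake database
dpkg-query --admindir=`pwd`/fake/dpkg -L mypackage      # list the files the fake install recorded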
For a quick test of trivial dependencies, you can install directly on the system using 'dpkg -i', then use 'dpkg -P' and 'apt-get autoremove' to purge the package and clean up the dependencies.
Another more secure but slower solution could be to use the autopkgtest package:
https://people.debian.org/~mpitt/autopkgtest/README.package-tests.html

Trouble building sqlite 3.7.4 on CentOS 5.5 to include readline support

The readline library allows the sqlite CLI to accept arrow keys to recall previously typed commands. I can build without it and sqlite works; it's just a hassle not having this nice capability. I've installed readline-devel from yum, and /usr/lib64/libreadline.so.5 is present, as well as its header files. When I run ./configure to build sqlite, I see these lines:
checking for library containing readline... no
checking for readline... no
The library path is set to the correct path:
LD_LIBRARY_PATH="/usr/lib64:/usr/local/lib:/lib:/usr/lib"
By default, ./configure does try to include readline support, so no special "--with-XXXX" options are needed.
Has anyone ever seen this problem? I need to use this newer version to get the latest foreign key support. It's a hassle running on CentOS as it bundles pretty old versions of apps, but we don't have a choice right now, and I cannot find an updated RPM with a newer version of sqlite.
=== UPDATE ===
Ok, I found a solution but I don't completely like it...
First, I tried with this option:
./configure CPPFLAGS="-I/usr/include/ -DHAVE_READLINE"
That causes the readline functionality to get compiled into shell.c which is what is needed for starters. But, the linking fails because it cannot find libreadline. The only 'kludgy' way I could figure out to get it to link was to manually edit the Makefile after running the above ./configure command. I changed this line:
LIBS = -ldl -lpthread
to this:
LIBS = -ldl -lpthread -lreadline -ltermcap
Then I ran "make clean all" and "make install", and the readline functionality works in the CLI.
I tried every way I could think of to pass in the extra libraries, including exporting LIBS, exporting READLINE_LIBS, and exporting LDFLAGS; nothing worked. If you set LIBS to anything, like export LIBS="-lreadline", configure fails. The --help text on configure about using LIBS seems to contradict what actually happens when you set it to any value.
Anyway, this works so I can live with it, but I don't particularly like it! :(
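As an aside, the same effect as the manual Makefile edit can usually be had by overriding the variable on the make command line, since command-line assignments take precedence over assignments inside the Makefile (a sketch; it assumes the generated Makefile consumes a LIBS variable, as shown above):

./configure CPPFLAGS="-I/usr/include/ -DHAVE_READLINE"
make "LIBS=-ldl -lpthread -lreadline -ltermcap"
make "LIBS=-ldl -lpthread -lreadline -ltermcap" install   # repeat the override so install does not relink without it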
==== THE RIGHT SOLUTION ====
Well, wouldn't you know it. I spent hours trying to figure it out, then stumbled onto the right tip on Google: I just needed to install ncurses-devel first. So, to summarize everything that is needed to build it without the kludge:
yum install ncurses ncurses-devel
yum install readline readline-devel
yum install libtermcap libtermcap-devel
./configure
make
make install
No special command line options, exports, or Makefile edits needed! Readline support is now built in automatically by default.
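A quick way to double-check that the built shell was actually linked against readline (a sketch; run it from the build directory after make):

ldd ./sqlite3 | grep -i readline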

How to save time when recompiling the whole project?

I am implementing some ideas on sqlite3. Every time I want to test my code, I have to compile the whole project. This is exactly what I do:
sudo make uninstall
sudo make clean
./configure
sudo make
sudo make install
Some of the above commands take a long time. What should I do to save time?
Skip the other steps and run only
sudo make
sudo make install
after you change some source code.
Also, don't use sudo at all. You should be able to run an instance without actually "installing" it anywhere. This is what developers normally do, rather than repeatedly installing the code they're working on into the very system they're using.
If you have a dual-core machine, use make -j2 to compile 2 files at a time in parallel. Quad core: make -j4, etc. This helps a lot if you make header file changes.
And listen to S.Mark: only do the steps you need to do each time. You probably won't need to run the slow ./configure again. If you run/link your tests against the sqlite in your build directory, you don't need make install either, leaving you with just make.
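Putting that advice together, a typical edit-compile-test loop might look like this (a sketch; the sqlite3 binary is the shell that the build produces in the build directory):

./configure          # one-time setup; rerun only when the build configuration changes
make -j4             # after each source edit; adjust -j to your core count
./sqlite3            # run the freshly built shell straight from the build tree, no install needed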
ccache might be your friend.
On Ubuntu (or similar systems), start with apt-get install ccache, then, before you compile, set PATH=/usr/lib/ccache:$PATH. It will cache compilation results in ~/.ccache and will likely speed up subsequent compiles.
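In practice that looks something like this on Ubuntu (a sketch; ccache -s just prints cache statistics so you can confirm it is being used):

sudo apt-get install ccache
export PATH=/usr/lib/ccache:$PATH
make clean && make -j4      # first build populates the cache
make clean && make -j4      # a repeat build should now be much faster
ccache -s                   # show cache hit statistics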
