Autonomía digital y tecnológica

Code and ideas for a distributed internet

Linkoteca. sysadmin


Regenerate initramfs for the New Kernel

To fix the issue, you need to regenerate the initramfs for the new kernel version. Run the following command in the terminal:

   sudo update-initramfs -u -k <version>

Replace <version> with the actual kernel version string for the kernel that you were unable to boot into. For example, it might look something like 4.15.0-36-generic.

You can find the currently running kernel version by running uname -r if needed.
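If you are not sure which kernels are installed, you can list them first and regenerate the initramfs for one or for all of them; a quick sketch (the version string shown is only an example):

   # list the installed kernels to find the exact version string
   ls /boot/vmlinuz-*

   # regenerate for one specific version, or for every installed kernel
   sudo update-initramfs -u -k 4.15.0-36-generic
   sudo update-initramfs -u -k all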

Update GRUB

Once the initramfs has been successfully generated, update the GRUB bootloader by running:

   sudo update-grub

This command ensures that GRUB recognizes the updated kernel and its corresponding initramfs.

I don’t use chroot, but the default setup for modern versions of FPM already compartmentalizes everything adequately; the private /tmp directory, for example. I agree with others that chroot is an outdated way of doing things.

Also, I use SELinux… yet another way of achieving many of the same goals as chrooting. I’d highly recommend setting up SELinux if you are not already using it. If you’re concerned enough about security that you’d even think of chrooting php-fpm, you probably want to set up SELinux and have it on «Enforcing» (it’s useless in «Permissive» mode, which is really only suitable for the configuration phase of test servers). Not only will it provide security with PHP, but you get a whole bunch of other security benefits from it.

I have done some pretty sophisticated things with a web server under SELinux, requiring me to manually change a number of policies, and while I have had a few prolonged sessions of frustration, maybe 3-4 hours at a time of banging my head against the wall trying to get the permissions set up properly, it is totally worth it. It’s all up-front work, and once you learn how to do it it’s very easy.
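If you do go down that road, checking and enforcing the SELinux mode is quick; a sketch, assuming the usual RHEL/CentOS config path:

getenforce                     # prints Enforcing, Permissive or Disabled
sudo setenforce 1              # switch to Enforcing until the next reboot
sudo vi /etc/selinux/config    # set SELINUX=enforcing to make it permanent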

root@www:~# apt -y install fcgiwrap

root@www:~# vi /etc/nginx/fcgiwrap.conf
# create new
# for example, enable CGI under [/cgi-bin]

location /cgi-bin/ {
    gzip off;
    root /var/www;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}

root@www:~# mkdir /var/www/cgi-bin

root@www:~# chmod 755 /var/www/cgi-bin
# add settings into [server] section of a site definition

root@www:~# vi /etc/nginx/sites-available/default

server {
    .....
    .....
    include fcgiwrap.conf;
}

root@www:~# systemctl enable fcgiwrap

root@www:~# systemctl reload nginx
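To verify the setup, a minimal test script works well; the script name index.cgi is just an example:

root@www:~# vi /var/www/cgi-bin/index.cgi
#!/bin/bash
# print a valid CGI response: header, blank line, body
echo "Content-Type: text/plain"
echo
echo "CGI is working"

root@www:~# chmod 755 /var/www/cgi-bin/index.cgi
root@www:~# curl http://localhost/cgi-bin/index.cgi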

nmap is a network mapping tool. It works by sending various network messages to the IP addresses in the range we provide it with. It can deduce a lot about the device it is probing by judging and interpreting the type of responses it gets.

Let’s kick off a simple scan with nmap. We’re going to use the -sn (no port scan) option. This tells nmap to not probe the ports on the devices for now. It will do a lightweight, quick scan.

Even so, it can take a little time for nmap to run. Of course, the more devices you have on the network, the longer it will take. It does all of its probing and reconnaissance work first and then presents its findings once the first phase is complete. Don’t be surprised when nothing visible happens for a minute or so.

The IP address we’re going to use is the one we obtained using the ip command earlier, but with the final number set to zero. That is the first possible IP address on this network. The «/24» tells nmap to scan the entire range of this network. The parameter «192.168.4.0/24» translates as «start at IP address 192.168.4.0 and work right through all IP addresses up to and including 192.168.4.255».

Note we are using sudo.

sudo nmap -sn 192.168.4.0/24
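Once you know which hosts are up, you can come back and probe the ports of a single device; the address below is just a placeholder on the same example network:

sudo nmap 192.168.4.10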

Enable HTTP/2 module
Apache’s HTTP/2 support comes from the mod_http2 module. Enable it with:

a2enmod http2
apachectl restart

If the above commands do not work on your system (which is likely the case on CentOS/RHEL), use the LoadModule directive in the httpd configuration directory to enable the http2 module.
Add HTTP/2 Support
We highly recommend you enable HTTPS support for your web site first. Most web browsers simply do not support HTTP/2 over plain text. Besides, there is no excuse not to use HTTPS anymore. HTTP/2 can be enabled on a site-by-site basis. Locate your web site’s Apache virtual host configuration file, and add the following right after the opening <VirtualHost> tag:

Protocols h2 http/1.1
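On CentOS/RHEL, where a2enmod is not available, the equivalent is a LoadModule line plus the same Protocols directive inside the virtual host. A sketch, assuming the usual httpd layout (the file name 10-h2.conf is arbitrary):

# /etc/httpd/conf.modules.d/10-h2.conf
LoadModule http2_module modules/mod_http2.so

# inside the site's <VirtualHost *:443> block
Protocols h2 http/1.1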

There are two main paths to look in for autostart entries:

/etc/xdg/autostart – the system-wide location; most applications place their files here when they are installed.
[user’s home]/.config/autostart – the user’s own applications to start when that user logs in.

There is a security problem here: sometimes installing a package will place an autostart file there because the maintainer decided it is important, but the package might be just a dependency, and the next time the user logs in an unwanted program might execute and open ports!
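To see what is scheduled to autostart, and to disable a system-wide entry for your own user without uninstalling anything, something like this works (the entry name some-app.desktop is hypothetical):

ls /etc/xdg/autostart ~/.config/autostart

# a user copy with Hidden=true overrides the system-wide entry
cp /etc/xdg/autostart/some-app.desktop ~/.config/autostart/
echo "Hidden=true" >> ~/.config/autostart/some-app.desktop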

System-wide configuration of the Debian X session consists mainly of options inside the /etc/X11/Xsession.options file, and scripts inside the /etc/X11/Xsession.d directory. These scripts are all dotted in by a single /bin/sh shell, in the order determined by sorting their names. Administrators may edit the scripts, though caution is advised if you are not comfortable with shell programming.
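To see which options are set and in what order the scripts will be sourced:

cat /etc/X11/Xsession.options
ls /etc/X11/Xsession.d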

If we’re not running a full-blown desktop environment like GNOME, KDE or XFCE, the chances of getting good font rendering after installing Xorg on top of a default Linux base install (think Arch or Void Linux) are zero. This guide serves as a list of todo items to get decent font rendering with these sorts of installs.
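A common first item on that todo list is a per-user fontconfig file turning on antialiasing and hinting; a minimal sketch (the exact values are a matter of taste):

mkdir -p ~/.config/fontconfig
cat > ~/.config/fontconfig/fonts.conf <<'EOF'
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
    <edit name="hinting" mode="assign"><bool>true</bool></edit>
    <edit name="hintstyle" mode="assign"><const>hintslight</const></edit>
  </match>
</fontconfig>
EOF
fc-cache -f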

I used wget, which is available on any linux-ish system (I ran it on the same Ubuntu server that hosts the sites).

wget --mirror -p --html-extension --convert-links -e robots=off -P . http://url-to-site

That command doesn’t throttle the requests, so it could cause problems if the server has high load. Here’s what that line does:

--mirror: turns on recursion etc… rather than just downloading the single file at the root of the URL, it’ll now suck down the entire site.
-p: download all prerequisites (supporting media etc…) rather than just the html
--html-extension: this adds .html after the downloaded filename, to make sure it plays nicely on whatever system you’re going to view the archive on
--convert-links: rewrite the URLs in the downloaded html files, to point to the downloaded files rather than to the live site. This makes it nice and portable, with everything living in a self-contained directory.
-e robots=off: executes the «robots off» command, telling wget to ignore any directive to ignore the site in question. This is strictly Not a Good Thing To Do, but if you own the site, this is OK. If you don’t own the site being archived, you should obey all robots.txt files or you’ll be a Very Bad Person.
-P .: set the download directory to something. I left it at the default «.» (which means «here») but this is where you could pass in a directory path to tell wget to save the archived site. Handy, if you’re doing this on a regular basis (say, as a cron job or something…)
http://url-to-site: this is the full URL of the site to download. You’ll likely want to change this.
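If the server you are mirroring is under load, a gentler variant adds a pause between requests and a bandwidth cap; the values here are arbitrary:

wget --mirror -p --html-extension --convert-links -e robots=off --wait=1 --limit-rate=200k -P . http://url-to-site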

The Web ARChive (WARC) archive format specifies a method for combining multiple digital resources into an aggregate archive file together with related information. The WARC format is a revision of the Internet Archive’s ARC File Format[4] that has traditionally been used to store «web crawls» as sequences of content blocks harvested from the World Wide Web. The WARC format generalizes the older format to better support the harvesting, access, and exchange needs of archiving organizations. Besides the primary content currently recorded, the revision accommodates related secondary content, such as assigned metadata, abbreviated duplicate detection events, and later-date transformations.

From the discussion about Working with ARCHIVE.ORG, we learn that it is important to save not just files but also HTTP headers.

To download a file and save the request and response data to a WARC file, run this:

wget "http://www.archiveteam.org/" --warc-file="at"

This will download the file to index.html, but it will also create a file at-00000.warc.gz. This is a gzipped WARC file that contains the request and response headers (of the initial redirect and of the Wiki homepage) and the html data.

If you want to have an uncompressed WARC file, use the --no-warc-compression option:

wget "http://www.archiveteam.org/" --warc-file="at" --no-warc-compression
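To take a quick look at what ended up inside the compressed WARC produced above, you can count the record types; each WARC-Type line marks one record (warcinfo, request, response, and so on):

zcat at-00000.warc.gz | grep -a '^WARC-Type:' | sort | uniq -c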

When IA first started doing their thing, they came across a problem: how do you actually save all of the information related to a website as it existed at a point in time? IA wanted to capture it all, including headers, images, stylesheets, etc.

After a lot of revision the smart folks there built a specification for a file format named WARC, for Web ARChive. The details aren’t super important, but the gist is that it will preserve everything, including headers, in a verifiable, indexed, checksummed format.

I surf the web an awful lot, probably slightly more than your average 13-year-old geek. I notice that a lot of sites load rather slowly, mostly because you’re waiting on content from outside the specific domain. For example, if you go to a website like thechive.com (one of my favorites) you will notice it takes quite a long time loading the ads. It would be nice if you could block advertisements… oh, you can?

Although I mentioned thechive.com, I spend most of my time on the net looking for information, not entertainment. These ads really hinder my search speed!

So here is a quick way you can block all the ads. Not only will your surfing be faster but you will also save some bandwidth.

First off I would like to thank the fine folks at http://winhelp2002.mvps.org/ for doing all the leg work and collecting all the data necessary for this to work.
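The short version of that leg work: download their hosts list and append it to your own /etc/hosts, after keeping a backup. The exact file name hosts.txt is an assumption; check their site for the current download link:

sudo cp /etc/hosts /etc/hosts.bak
wget -qO- http://winhelp2002.mvps.org/hosts.txt | sudo tee -a /etc/hosts > /dev/null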

PyPI packages not in the standard library:

  • virtualenv is a very popular tool that creates isolated Python environments for Python libraries. If you’re not familiar with this tool, I highly recommend learning it, as it is a very useful tool, and I’ll be making comparisons to it for the rest of this answer.

    It works by installing a bunch of files in a directory (eg: env/), and then modifying the PATH environment variable to prefix it with a custom bin directory (eg: env/bin/). An exact copy of the python or python3 binary is placed in this directory, but Python is programmed to look for libraries relative to its path first, in the environment directory. It’s not part of Python’s standard library, but is officially blessed by the PyPA (Python Packaging Authority). Once activated, you can install packages in the virtual environment using pip.

  • pyenv is used to isolate Python versions. For example, you may want to test your code against Python 2.7, 3.6, 3.7 and 3.8, so you’ll need a way to switch between them. Once activated, it prefixes the PATH environment variable with ~/.pyenv/shims, where there are special files matching the Python commands (python, pip). These are not copies of the Python-shipped commands; they are special scripts that decide on the fly which version of Python to run based on the PYENV_VERSION environment variable, or the .python-version file, or the ~/.pyenv/version file. pyenv also makes the process of downloading and installing multiple Python versions easier, using the command pyenv install (a short sketch follows this list).
  • pyenv-virtualenv is a plugin for pyenv by the same author as pyenv, to allow you to use pyenv and virtualenv at the same time conveniently. However, if you’re using Python 3.3 or later, pyenv-virtualenv will try to run python -m venv if it is available, instead of virtualenv. You can use virtualenv and pyenv together without pyenv-virtualenv, if you don’t want the convenience features.
  • virtualenvwrapper is a set of extensions to virtualenv (see docs). It gives you commands like mkvirtualenv, lssitepackages, and especially workon for switching between different virtualenv directories. This tool is especially useful if you want multiple virtualenv directories.
  • pyenv-virtualenvwrapper is a plugin for pyenv by the same author as pyenv, to conveniently integrate virtualenvwrapper into pyenv.
  • pipenv aims to combine Pipfile, pip and virtualenv into one command on the command-line. The virtualenv directory typically gets placed in ~/.local/share/virtualenvs/XXX, with XXX being a hash of the path of the project directory. This is different from virtualenv, where the directory is typically in the current working directory. pipenv is meant to be used when developing Python applications (as opposed to libraries). There are alternatives to pipenv, such as poetry, which I won’t list here since this question is only about the packages that are similarly named.
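As a rough sketch of the pyenv workflow described above (the version number is just an example):

pyenv install 3.8.10    # download and build that Python version
pyenv local 3.8.10      # write .python-version for this project
python --version        # the shim now resolves to 3.8.10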

Standard library:

  • pyvenv is a script shipped with Python 3 but deprecated in Python 3.6 as it had problems (not to mention the confusing name). In Python 3.6+, the exact equivalent is python3 -m venv.
  • venv is a package shipped with Python 3, which you can run using python3 -m venv (although for some reason some distros separate it out into a separate distro package, such as python3-venv on Ubuntu/Debian). It serves the same purpose as virtualenv, but only has a subset of its features (see a comparison here). virtualenv continues to be more popular than venv, especially since the former supports both Python 2 and 3.

Recommendation for beginners:

This is my personal recommendation for beginners: start by learning virtualenv and pip, tools which work with both Python 2 and 3 and in a variety of situations, and pick up other tools once you start needing them.
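For completeness, here is what that beginner workflow looks like in practice; a sketch, with requests standing in for whatever package you actually need:

pip install --user virtualenv    # install the tool itself
virtualenv env                   # create an isolated environment in ./env
source env/bin/activate          # put env/bin first in PATH
pip install requests             # installs into env, not system-wide
deactivate                       # leave the environment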

The script would:
– extract all file versions to /tmp/all_versions_exported
– take 1 argument – relative path to the file inside git repo
– give result filenames numeric prefix (sortable)
– mention inspected filename in result files (to tell apples apart from oranges:)
– mention commit date in the result filename (see output example below)
– not create empty result files
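A minimal sketch of such a script, assuming it is run from the top of the repository and given the file’s path relative to the repo root as its only argument:

#!/bin/bash
# export every committed version of one file to /tmp/all_versions_exported
set -e
FILE="$1"
OUT=/tmp/all_versions_exported
mkdir -p "$OUT"
N=0
git log --format='%h %ad' --date=short -- "$FILE" | while read -r HASH DATE; do
    N=$((N+1))
    CONTENT=$(git show "$HASH:$FILE" 2>/dev/null) || continue
    [ -n "$CONTENT" ] || continue    # do not create empty result files
    printf '%s\n' "$CONTENT" > "$OUT/$(printf '%03d' "$N").$DATE.$(basename "$FILE")"
done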

1. Update the repository index by executing the below command in Terminal:
$ sudo apt-get update

2. Next, execute the below command to clean out the local repository:
$ sudo apt-get clean

3. Execute the below command to remove all the unnecessary packages that are no longer needed:
$ sudo apt-get autoremove

The above command will display the names of any unmet dependencies or broken packages.
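If broken packages do turn up, these are the usual follow-up commands: the first attempts to fix unmet dependencies, the second finishes configuring any half-installed packages.

$ sudo apt-get install -f
$ sudo dpkg --configure -a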

ncftpput -R -v -u "username" ftp.nixcraft.biz /nixcraft/forum /tmp/phpbb

Where,

-u "username" : FTP server username.
-v : Verbose, i.e. show upload progress.
-R : Recursive mode; copy whole directory trees.
ftp.nixcraft.biz : Remote FTP server (use FQDN or IP).
/nixcraft/forum : Remote FTP server directory where all files and subdirectories will be uploaded.
/tmp/phpbb : Local directory (or list of files) to upload to the remote FTP server directory /nixcraft/forum.