What is the difference between docker and docker-compose?
docker manages single containers
docker-compose manages multiple container applications
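For example, what you would otherwise run as a series of docker commands can be captured declaratively in one compose file (a minimal sketch; the image and port choices are just illustrative):

# a single container, managed by hand:
docker run -d --name web -p 8080:80 nginx
# the same thing expressed as a docker-compose.yml:
services:
  web:
    image: nginx
    ports:
      - "8080:80"
# bring the whole application up with:
docker-compose up -d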
changeme picks up where commercial scanners leave off. It focuses on detecting default and backdoor credentials, not necessarily common credentials. Its default mode is to scan for HTTP default credentials, but it has support for other credential types.
changeme is designed to make it simple to add new credentials without having to write any code or modules; credential data is kept separate from code. All credentials are stored in YAML files so they can be both easily read by humans and processed by changeme. Credential files can be created by running the ./changeme.py --mkcred tool and answering a few questions.
The main repositories now contain PHP 5.6, PHP 7.0-7.4, and PHP 8.0-8.3, all co-installable side by side.
sudo certbot --nginx -d example.com -d www.example.com
Regenerate initramfs for the New Kernel
To fix the issue, you need to regenerate the initramfs for the new kernel version. Run the following command in the terminal:
sudo update-initramfs -u -k <version>
Replace <version> with the actual kernel version string for the kernel that you were unable to boot into. For example, it might look something like 4.15.0-36-generic.
You can find the kernel version by running uname -r if needed.
Update GRUB
Once the initramfs has been successfully generated, update the GRUB bootloader by running:
sudo update-grub
This command ensures that GRUB recognizes the updated kernel and its corresponding initramfs.
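Putting the whole procedure together, assuming the problem kernel is the 4.15.0-36-generic from the example above:

uname -r                                        # confirm the currently running kernel version
sudo update-initramfs -u -k 4.15.0-36-generic   # regenerate initramfs for the broken kernel
sudo update-grub                                # refresh the GRUB menu entries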
I don’t use chroot; the default setup for modern versions of FPM already compartmentalizes everything adequately (for example, the private /tmp directory). I agree with others that chroot is an outdated way of doing things.
Also, I use SELinux, which is yet another way of achieving many of the same goals as chrooting. I’d highly recommend setting up SELinux if you are not already using it. If you’re concerned enough about security that you’d even think of chrooting php-fpm, you probably want to set up SELinux and have it on "Enforcing" (it’s useless in "Permissive" mode, which is really only suitable for the configuration phase of test servers). Not only will it provide security with PHP, but you get a whole bunch of other security benefits from it.
I have done some pretty sophisticated things with a web server under SELinux, requiring me to manually change a number of policies, and while I have had a few prolonged sessions of frustration, maybe 3-4 hours at a time of banging my head against the wall trying to get the permissions set up properly, it is totally worth it. It’s all up-front work, and once you learn how to do it, it’s very easy.
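If you want to check where you stand before diving in, the current mode can be queried and switched on the fly (the permanent setting lives in /etc/selinux/config):

getenforce          # prints Enforcing, Permissive, or Disabled
sudo setenforce 1   # switch to Enforcing until the next reboot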
Wordfence CLI is an open source, high performance, multi-process security scanner, written in Python, that quickly scans network filesystems to detect PHP/other malware and WordPress vulnerabilities. CLI is parallelizable, can be scheduled, can accept input via pipe, and can pipe output to other commands.
A TPM, or Trusted Platform Module, is a security chip that can be embedded in a laptop or plugged into most desktop PCs. It’s basically a lockbox for keys, as well as an encryption device a PC can use to boost its security.
This is a small nginx configuration that should help you get your own Matomo instance running and start collecting your own analytics.
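Something along these lines is the usual shape of it (a minimal sketch, assuming Matomo is unpacked under /var/www/matomo and PHP-FPM listens on the socket shown; the hostname is a placeholder, so adjust all three to your setup):

server {
    listen 80;
    server_name matomo.example.com;    # hypothetical hostname
    root /var/www/matomo;              # assumed install path
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/run/php/php-fpm.sock;   # adjust to your PHP-FPM socket
    }
}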
root@www:~# apt -y install fcgiwrap
root@www:~# vi /etc/nginx/fcgiwrap.conf
# create new
# for example, enable CGI under [/cgi-bin]
location /cgi-bin/ {
    gzip off;
    root /var/www;
    fastcgi_pass unix:/var/run/fcgiwrap.socket;
    include /etc/nginx/fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
root@www:~# mkdir /var/www/cgi-bin
root@www:~# chmod 755 /var/www/cgi-bin
# add settings into [server] section of a site definition
root@www:~# vi /etc/nginx/sites-available/default
server {
    .....
    .....
    include fcgiwrap.conf;
}
root@www:~# systemctl enable fcgiwrap
root@www:~# systemctl reload nginx
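To verify the setup, drop a trivial script into the new directory and request it (the filename here is arbitrary):

root@www:~# vi /var/www/cgi-bin/test.cgi
#!/usr/bin/env bash
# minimal CGI response: a header, a blank line, then the body
echo "Content-Type: text/plain"
echo
echo "fcgiwrap is working"
root@www:~# chmod 755 /var/www/cgi-bin/test.cgi
root@www:~# curl http://localhost/cgi-bin/test.cgi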
The command to use depends on which distribution of Linux you’re using. For Debian-based Linux distributions, the command is deluser, and for the rest of the Linux world, it is userdel.
sudo deluser --remove-home USERNAME
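On distributions that use userdel instead, the equivalent (with -r to remove the home directory as well) is:

sudo userdel -r USERNAME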
Node Version Manager – POSIX-compliant bash script to manage multiple active node.js versions
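Typical usage once nvm is installed (version 20 here is just an example):

nvm install 20   # download and install Node.js 20.x
nvm use 20       # switch the current shell to it
nvm ls           # list installed versions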
nmap is a network mapping tool. It works by sending various network messages to the IP addresses in the range we’re going to provide it with. It can deduce a lot about the device it is probing by judging and interpreting the type of responses it gets.
Let’s kick off a simple scan with nmap. We’re going to use the -sn (no port scan) option. This tells nmap not to probe the ports on the devices for now. It will do a lightweight, quick scan.
Even so, it can take a little time for nmap to run. Of course, the more devices you have on the network, the longer it will take. It does all of its probing and reconnaissance work first and then presents its findings once the first phase is complete. Don’t be surprised when nothing visible happens for a minute or so.
The IP address we’re going to use is the one we obtained using the ip command earlier, but with the final number set to zero. That is the first possible IP address on this network. The "/24" tells nmap to scan the entire range of this network. The parameter "192.168.4.0/24" translates as "start at IP address 192.168.4.0 and work right through all IP addresses up to and including 192.168.4.255".
Note we are using sudo.
sudo nmap -sn 192.168.4.0/24
To list all the service unit files which are currently in the enabled state, use --state=enabled:
# systemctl list-unit-files --type=service --state=enabled
Change the directory’s ACL to give the group write permissions and to make these permissions inherited by newly created files. Under Linux:
setfacl -d -m group:GROUPNAME:rwx /path/to/directory
setfacl -m group:GROUPNAME:rwx /path/to/directory
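You can verify the result with getfacl; the "default" entries it prints are the ones newly created files will inherit:

getfacl /path/to/directory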
Enable HTTP/2 module
Apache’s HTTP/2 support comes from the mod_http2 module. Enable it with:
a2enmod http2
apachectl restart
If the above commands do not work on your system (which is likely the case on CentOS/RHEL), use a LoadModule directive in the httpd configuration directory to enable the http2 module.
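On such systems the directive looks like this (the module path may vary by distribution; this is the usual RHEL-style layout):

LoadModule http2_module modules/mod_http2.so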
Add HTTP/2 Support
We highly recommend you enable HTTPS support for your web site first. Most web browsers simply do not support HTTP/2 over plain text. Besides, there are no excuses not to use HTTPS anymore. HTTP/2 can be enabled on a site-by-site basis. Locate your web site’s Apache virtual host configuration file, and add the following right after the opening <VirtualHost> tag:
Protocols h2 http/1.1
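In context, the directive sits at the top of the virtual host (a sketch; the hostname is a placeholder):

<VirtualHost *:443>
    Protocols h2 http/1.1
    ServerName example.com
    .....
</VirtualHost>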
…start off by writing "@reboot". This tells cron to run the command every single time the system boots. Directly after @reboot, add the full file path to the bash script.
@reboot /home/derrik/startupscript.sh
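Make sure the script is executable, or cron will have nothing it can run:

chmod +x /home/derrik/startupscript.sh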
There are two main paths to look in for autostart entries:
/etc/xdg/autostart – called system-wide; most applications will place files here when they are installed.
[user’s home]/.config/autostart – the user’s applications to start when the user logs in.
There is a security concern here: sometimes installing a package will place an autostart file there because the maintainer decided it is important, but the package might be just a dependency, and the next time the user logs in an unwanted program might execute and open ports!
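A quick way to audit what is set to autostart is simply to list both directories:

ls /etc/xdg/autostart ~/.config/autostart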
Use the --user=
System-wide configuration of the Debian X session consists mainly of options inside the /etc/X11/Xsession.options file, and scripts inside the /etc/X11/Xsession.d directory. These scripts are all dotted in by a single /bin/sh shell, in the order determined by sorting their names. Administrators may edit the scripts, though caution is advised if you are not comfortable with shell programming.
If we’re not running a full-blown desktop environment like GNOME, KDE or Xfce, the chances of getting good font rendering after installing Xorg on top of a default Linux base install (think Arch or Void Linux) are zero. This guide serves as a list of to-do items to get decent font rendering with these sorts of installs.
Open the terminal and copy-paste this command:
echo $XDG_CURRENT_DESKTOP
…simply type screenfetch in the terminal and it should show the desktop environment version along with other system information.

netstat -lt
I used wget, which is available on any linux-ish system (I ran it on the same Ubuntu server that hosts the sites).
wget --mirror -p --html-extension --convert-links -e robots=off -P . http://url-to-site
That command doesn’t throttle the requests, so it could cause problems if the server has high load (a throttled variant is sketched after the list). Here’s what that line does:
--mirror: turns on recursion etc… rather than just downloading the single file at the root of the URL, it’ll now suck down the entire site.
-p: download all prerequisites (supporting media etc…) rather than just the html
--html-extension: this adds .html after the downloaded filename, to make sure it plays nicely on whatever system you’re going to view the archive on
--convert-links: rewrite the URLs in the downloaded html files, to point to the downloaded files rather than to the live site. This makes it nice and portable, with everything living in a self-contained directory.
-e robots=off: executes the "robots off" command, telling wget to ignore any directive to ignore the site in question. This is strictly Not a Good Thing To Do, but if you own the site, this is OK. If you don’t own the site being archived, you should obey all robots.txt files or you’ll be a Very Bad Person.
-P .: set the download directory to something. I left it at the default "." (which means "here") but this is where you could pass in a directory path to tell wget to save the archived site. Handy, if you’re doing this on a regular basis (say, as a cron job or something…)
http://url-to-site: this is the full URL of the site to download. You’ll likely want to change this.
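If the server is busy, the polite variant adds a pause between requests and a bandwidth cap (the exact values here are just examples):

wget --mirror -p --html-extension --convert-links -e robots=off --wait=1 --limit-rate=200k -P . http://url-to-site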
The Web ARChive (WARC) archive format specifies a method for combining multiple digital resources into an aggregate archive file together with related information. The WARC format is a revision of the Internet Archive’s ARC_IA File Format that has traditionally been used to store "web crawls" as sequences of content blocks harvested from the World Wide Web. The WARC format generalizes the older format to better support the harvesting, access, and exchange needs of archiving organizations. Besides the primary content currently recorded, the revision accommodates related secondary content, such as assigned metadata, abbreviated duplicate detection events, and later-date transformations.
From the discussion about Working with ARCHIVE.ORG, we learn that it is important to save not just files but also HTTP headers.
To download a file and save the request and response data to a WARC file, run this:
wget "http://www.archiveteam.org/" --warc-file="at"
This will download the file to index.html, but it will also create a file at-00000.warc.gz. This is a gzipped WARC file that contains the request and response headers (of the initial redirect and of the Wiki homepage) and the html data.
If you want to have an uncompressed WARC file, use the --no-warc-compression option:
wget "http://www.archiveteam.org/" --warc-file="at" --no-warc-compression
When IA first started doing their thing, they came across a problem: how do you actually save all of the information related to a website as it existed at a point in time? IA wanted to capture it all, including headers, images, stylesheets, etc.
After a lot of revision, the smart folks there built a specification for a file format named WARC, for Web ARChive. The details aren’t super important, but the gist is that it will preserve everything, including headers, in a verifiable, indexed, checksummed format.
wget --recursive --convert-links -mpck --html-extension --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.146 Safari/537.36" -e robots=off site.com
We are operating the following DNS resolvers. All our resolvers can be used free of charge.
The resolvers have been running since 2014 and the project remains maintained.
84.200.69.80 (resolver1.dns.watch) – No logging, DNSSEC enabled
84.200.70.40 (resolver2.dns.watch) – No logging, DNSSEC enabled
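You can confirm a resolver answers, and inspect its DNSSEC behaviour, with dig (shipped in the dnsutils/bind-utils package; example.com is just a sample query):

dig @84.200.69.80 example.com +dnssec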
I run a shell script on my laptop to block ads, trackers, and malicious websites at the DNS host level. I also use 1.1.1.1 as the DNS resolver on my laptop and phone. This article describes why, alternatives, and trade-offs.
$ cat /etc/resolv.conf
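On a machine pointed at Cloudflare's resolver, the relevant line would simply read as below (illustrative; on many systems resolv.conf is generated by a resolver daemon, so edit with care):

nameserver 1.1.1.1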
I surf the web an awful lot, probably slightly more than your average 13-year-old geek. I notice that a lot of sites load rather slowly, mostly because you’re waiting on content from outside the specific domain. For example, if you go to a website like thechive.com (one of my favorites) you will notice it takes quite a long time loading the ads. It would be nice if you could block advertisements… oh, you can?
Although I mentioned thechive.com I spend most of my time on the net looking for information, not entertainment. These ads really hinder my search speed!
So here is a quick way you can block all the ads. Not only will your surfing be faster but you will also save some bandwidth.
First off I would like to thank the fine folks at http://winhelp2002.mvps.org/ for doing all the leg work and collecting all the data necessary for this to work.
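The core of the trick is appending their curated hosts list to your own, so ad domains resolve to nowhere (a sketch; check the site for the current download path, and back up /etc/hosts first):

wget http://winhelp2002.mvps.org/hosts.txt     # assumed download path; verify on the site
sudo cp /etc/hosts /etc/hosts.bak              # keep a backup of the original
cat hosts.txt | sudo tee -a /etc/hosts > /dev/null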
A list of Free Software network services and web applications which can be hosted on your own servers
virtualenv is a very popular tool that creates isolated Python environments for Python libraries. If you’re not familiar with this tool, I highly recommend learning it, as it is a very useful tool, and I’ll be making comparisons to it for the rest of this answer.
It works by installing a bunch of files in a directory (eg: env/), and then modifying the PATH environment variable to prefix it with a custom bin directory (eg: env/bin/). An exact copy of the python or python3 binary is placed in this directory, but Python is programmed to look for libraries relative to its path first, in the environment directory. It’s not part of Python’s standard library, but is officially blessed by the PyPA (Python Packaging Authority). Once activated, you can install packages in the virtual environment using pip.
pyenv is used to isolate Python versions. For example, you may want to test your code against Python 2.7, 3.6, 3.7 and 3.8, so you’ll need a way to switch between them. Once activated, it prefixes the PATH environment variable with ~/.pyenv/shims, where there are special files matching the Python commands (python, pip). These are not copies of the Python-shipped commands; they are special scripts that decide on the fly which version of Python to run based on the PYENV_VERSION environment variable, or the .python-version file, or the ~/.pyenv/version file. pyenv also makes the process of downloading and installing multiple Python versions easier, using the command pyenv install.

pyenv-virtualenv is a plugin for pyenv by the same author as pyenv, to allow you to use pyenv and virtualenv at the same time conveniently. However, if you’re using Python 3.3 or later, pyenv-virtualenv will try to run python -m venv if it is available, instead of virtualenv. You can use virtualenv and pyenv together without pyenv-virtualenv, if you don’t want the convenience features.

virtualenvwrapper is a set of extensions to virtualenv (see docs). It gives you commands like mkvirtualenv, lssitepackages, and especially workon for switching between different virtualenv directories. This tool is especially useful if you want multiple virtualenv directories.

pyenv-virtualenvwrapper is a plugin for pyenv by the same author as pyenv, to conveniently integrate virtualenvwrapper into pyenv.

pipenv aims to combine Pipfile, pip and virtualenv into one command on the command-line. The virtualenv directory typically gets placed in ~/.local/share/virtualenvs/XXX, with XXX being a hash of the path of the project directory. This is different from virtualenv, where the directory is typically in the current working directory. pipenv is meant to be used when developing Python applications (as opposed to libraries). There are alternatives to pipenv, such as poetry, which I won’t list here since this question is only about the packages that are similarly named.

pyvenv is a script shipped with Python 3 but deprecated in Python 3.6 as it had problems (not to mention the confusing name). In Python 3.6+, the exact equivalent is python3 -m venv.

venv is a package shipped with Python 3, which you can run using python3 -m venv (although for some reason some distros separate it out into a separate distro package, such as python3-venv on Ubuntu/Debian). It serves the same purpose as virtualenv, but only has a subset of its features (see a comparison here). virtualenv continues to be more popular than venv, especially since the former supports both Python 2 and 3.

This is my personal recommendation for beginners: start by learning virtualenv and pip, tools which work with both Python 2 and 3 and in a variety of situations, and pick up other tools once you start needing them.
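If you just want the built-in route on Python 3, the whole workflow is three commands (requests here is just an example package):

python3 -m venv env        # create the environment in ./env
source env/bin/activate    # switch this shell to it
pip install requests       # packages now install into ./env, not system-wide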
The script would:
– extract all file versions to /tmp/all_versions_exported
– take 1 argument – relative path to the file inside git repo
– give result filenames numeric prefix (sortable)
– mention inspected filename in result files (to tell apples apart from oranges:)
– mention commit date in the result filename (see output example below)
– not create empty result files
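A sketch of how such a script might look (a hypothetical implementation, assuming it is run from the repository root with the file’s relative path as its single argument):

#!/usr/bin/env bash
# Export every committed version of one file to /tmp/all_versions_exported.
# Usage: ./export_versions.sh path/to/file   (run from the repo root)
set -euo pipefail
file="$1"
outdir=/tmp/all_versions_exported
mkdir -p "$outdir"
n=0
# Walk the commits that touched the file, oldest first, capturing the commit date.
git log --reverse --format='%h %ad' --date=short -- "$file" | while read -r hash date; do
    n=$((n + 1))
    # numeric prefix (sortable) + commit date + original filename
    out="$outdir/$(printf '%04d' "$n").$date.$(basename "$file")"
    git show "$hash:$file" > "$out" 2>/dev/null || true
    [ -s "$out" ] || rm -f "$out"    # do not keep empty result files
done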
Automatic Syncing With SSH Keys
#!/usr/bin/env bash
rsync -az --delete /home/kevin/source/ server.example.com:/home/kevin/destination
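For this to run unattended, the SSH login must not prompt for a password; a one-time key setup takes care of that:

ssh-keygen -t ed25519            # accept the defaults; an empty passphrase for cron use
ssh-copy-id server.example.com   # install the public key on the destination host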
Copy/Sync a File on a Local Computer
[root@tecmint]# rsync -zvh backup.tar /tmp/backups/
Copy/Sync a Directory on Local Computer
[root@tecmint]# rsync -avzh /root/rpmpkgs /tmp/backups/
Clear the APT cache to reclaim disk space used by the downloaded packages.
1. Update the repository index by executing the below command in Terminal:
$ sudo apt-get update
2. Next, execute the below command to clean out the local repository:
$ sudo apt-get clean
3. Execute the below command to remove all the unnecessary packages that are no longer needed:
$ sudo apt-get autoremove
If any packages have unmet dependencies or are broken, the command output will display their names.
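To see how much space the cleanup is actually buying you, check the cache size before and after:

du -sh /var/cache/apt/archives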
DevOps is the combination of application development and operations, which minimizes or eliminates the disconnect between software developers who build applications and systems administrators who keep infrastructure running.
To recursively give directories read and execute privileges:
find /path/to/base/dir -type d -exec chmod 755 {} +
To recursively give files read privileges:
find /path/to/base/dir -type f -exec chmod 644 {} +
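The same effect in a single pass, using chmod’s capital X, which grants execute only on directories (and on files that already had an execute bit, so those keep it):

chmod -R u=rwX,go=rX /path/to/base/dir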