Thursday, May 28, 2009

TCP Wrapper

TCP Wrapper is a host-based networking ACL system used to filter network access to Internet Protocol servers on Unix-like operating systems such as Linux or BSD. It allows host or subnetwork IP addresses, hostnames, and/or ident query replies to be used as tokens for filtering access.

TCP Wrappers Configuration Files


To determine if a client machine is allowed to connect to a service, TCP wrappers reference the following two files, which are commonly referred to as hosts access files:

  • /etc/hosts.allow

  • /etc/hosts.deny

When a client request is received by a TCP wrapped service, the service takes the following basic steps:

  1. The service references /etc/hosts.allow. — The TCP wrapped service sequentially parses the /etc/hosts.allow file and applies the first rule specified for that service. If it finds a matching rule, it allows the connection. If not, it moves on to step 2.

  2. The service references /etc/hosts.deny. — The TCP wrapped service sequentially parses the /etc/hosts.deny file. If it finds a matching rule, it denies the connection. If not, access to the service is granted.
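The two-step lookup above can be sketched as a toy shell function. This is a deliberately simplified model — real TCP wrappers parses wildcards, patterns, and option fields — and the file names are local stand-ins for /etc/hosts.allow and /etc/hosts.deny:

```shell
# Toy model of the hosts_access lookup. Real TCP wrappers supports
# wildcards and patterns; this does literal matches only (the dots
# in the addresses are treated as regex "any character", which is
# close enough for a sketch).
check_access() {
    daemon=$1; client=$2
    # Step 1: a match in hosts.allow grants access immediately.
    if grep -q "^$daemon : $client" hosts.allow 2>/dev/null; then
        echo allow; return
    fi
    # Step 2: otherwise a match in hosts.deny refuses access.
    if grep -q "^$daemon : $client" hosts.deny 2>/dev/null; then
        echo deny; return
    fi
    # No match in either file: access is granted.
    echo allow
}

printf 'sshd : 192.168.0.5\n' > hosts.allow
printf 'sshd : 192.168.0.9\n' > hosts.deny

check_access sshd 192.168.0.5   # allow (matched in hosts.allow)
check_access sshd 192.168.0.9   # deny  (matched only in hosts.deny)
check_access sshd 10.0.0.1      # allow (no match in either file)
```

Note how the third call is granted access: this mirrors the real behavior that a connection with no matching rule in either file is allowed.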

The following are important points to consider when using TCP wrappers to protect network services:

  • Because access rules in hosts.allow are applied first, they take precedence over rules specified in hosts.deny. Therefore, if access to a service is allowed in hosts.allow, a rule denying access to that same service in hosts.deny is ignored.

  • Since the rules in each file are read from the top down and the first matching rule for a given service is the only one applied, the order of the rules is extremely important.

  • If no rules for the service are found in either file, or if neither file exists, access to the service is granted.

  • TCP wrapped services do not cache the rules from the hosts access files, so any changes to hosts.allow or hosts.deny take effect immediately without restarting network services.

15.2.1. Formatting Access Rules

The format for both /etc/hosts.allow and /etc/hosts.deny is identical. Any blank lines or lines that start with a hash mark (#) are ignored, and each rule must be on its own line.
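Because blank lines and # comments are ignored, the effective rule set of a hosts access file can be previewed by filtering them out — a quick sanity check when debugging (the file here is a local sample, not the real /etc/hosts.allow):

```shell
# Build a small sample file, then print only the lines TCP wrappers
# actually evaluates: non-blank and not starting with '#'.
printf '# allow FTP from one host\n\nvsftpd : 192.168.0.5\n' > hosts.sample
grep -v -e '^#' -e '^[[:space:]]*$' hosts.sample
```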

Each rule uses the following basic format to control access to network services:

<daemon list> : <client list> [: <option> : <option> : ...]

The following is a basic sample hosts access rule:

vsftpd : .example.com

This rule instructs TCP wrappers to watch for connections to the FTP daemon (vsftpd) from any host in the example.com domain. If this rule appears in hosts.allow, the connection is accepted. If this rule appears in hosts.deny, the connection is rejected.

The next sample hosts access rule is more complex and uses two option fields:

sshd : .example.com \
: spawn /bin/echo `/bin/date` access denied>>/var/log/sshd.log \
: deny

Note that in this example each option field is preceded by the backslash (\). Use of the backslash prevents failure of the rule due to its length.


If the last line of a hosts access file is not a newline character (created by pressing the [Enter] key), the last rule in the file will fail and an error will be logged to either /var/log/messages or /var/log/secure. This is also the case for rules that span multiple lines without using the backslash. The following example illustrates the relevant portion of a log message for a rule failure due to either of these circumstances:

warning: /etc/hosts.allow, line 20: missing newline or line too long
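A quick way to test for the missing-final-newline problem is to inspect the last byte of the file; command substitution strips a trailing newline, so a non-empty result means the last line is unterminated (shown here on a throwaway file):

```shell
# Write a rule without a trailing newline -- TCP wrappers would
# reject this last rule and log the warning shown above.
printf 'sshd : ALL : deny' > hosts.test
# $(tail -c 1 ...) is empty when the last byte is a newline, and
# non-empty when the final line is unterminated.
if [ -n "$(tail -c 1 hosts.test)" ]; then
    echo "hosts.test: missing final newline"
fi
```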

This sample rule states that if a connection to the SSH daemon (sshd) is attempted from a host in the example.com domain, execute the echo command (which logs the attempt to a special file), and deny the connection. Because the optional deny directive is used, this line denies access even if it appears in the hosts.allow file. For a more detailed look at available options, see Section 15.2.3 Option Fields.

Wildcards

Wildcards allow TCP wrappers to more easily match groups of daemons or hosts. They are used most frequently in the client list field of access rules.

The following wildcards may be used:

  • ALL — Matches everything. It can be used for both the daemon list and the client list.

  • LOCAL — Matches any host that does not contain a period (.), such as localhost.

  • KNOWN — Matches any host where the hostname and host address are known or where the user is known.

  • UNKNOWN — Matches any host where the hostname or host address are unknown or where the user is unknown.

  • PARANOID — Matches any host where the hostname does not match the host address.


The KNOWN, UNKNOWN, and PARANOID wildcards should be used with care, as a disruption in name resolution may prevent legitimate users from gaining access to a service.

Patterns

Patterns can be used in the client list field of access rules to more precisely specify groups of client hosts.

The following is a list of the most common accepted patterns for a client list entry:

  • Hostname beginning with a period (.) — Placing a period at the beginning of a hostname matches all hosts sharing the listed components of the name. The following example would apply to any host within the example.com domain:

    ALL : .example.com
  • IP address ending with a period (.) — Placing a period at the end of an IP address matches all hosts sharing the initial numeric groups of an IP address. The following example would apply to any host within the 192.168.x.x network:

    ALL : 192.168.
  • IP address/netmask pair — Netmask expressions can also be used as a pattern to control access to a particular group of IP addresses. The following example would apply to any host with an address of 192.168.0.0 through 192.168.1.255:

    ALL : 192.168.0.0/255.255.254.0
  • The asterisk (*) — Asterisks can be used to match entire groups of hostnames or IP addresses, as long as they are not mixed in a client list containing other types of patterns. The following example would apply to any host within the example.com domain:

    ALL : *.example.com
  • The slash (/) — If a client list begins with a slash, it is treated as a file name. This is useful if rules specifying large numbers of hosts are necessary. The following example refers TCP wrappers to the /etc/telnet.hosts file for all Telnet connections:

    in.telnetd : /etc/telnet.hosts

Other, lesser-used patterns are also accepted by TCP wrappers. See the hosts_access man 5 page for more information.


Be very careful when creating rules requiring name resolution, such as hostnames and domain names. Attackers can use a variety of tricks to circumvent accurate name resolution. In addition, any disruption in DNS service would prevent even authorized users from using network services.

It is best to use IP addresses whenever possible.

Operators

At present, access control rules accept one operator, EXCEPT. It can be used in both the daemon list and the client list of a rule.

The EXCEPT operator allows specific exceptions to broader matches within the same rule.

In the following example from a hosts.allow file, all hosts are allowed to connect to all services except cracker.example.com:

ALL : ALL EXCEPT cracker.example.com

In another example from a hosts.allow file, clients from the 192.168.0.x network can use all services except for FTP:

ALL EXCEPT vsftpd: 192.168.0.

Organizationally, it is often easier to use EXCEPT operators sparingly, placing the exceptions to a rule in the other access control file. This allows other administrators to quickly scan the appropriate files to see which hosts are allowed or denied access to services, without having to sort through the various EXCEPT operators.

15.2.2. Portmap and TCP Wrappers

When creating access control rules for portmap, do not use hostnames, as its implementation of TCP wrappers does not support host lookups. For this reason, only use IP addresses or the keyword ALL when specifying hosts in hosts.allow or hosts.deny.

In addition, changes to portmap access control rules may not take effect immediately.

Widely used services, such as NIS and NFS, depend on portmap to operate, so be aware of these limitations.

15.2.3. Option Fields

In addition to basic rules allowing and denying access, the Red Hat Linux implementation of TCP wrappers supports extensions to the access control language through option fields. By using option fields within hosts access rules, administrators can accomplish a variety of tasks such as altering log behavior, consolidating access control, and launching shell commands.

Logging

Option fields let administrators easily change the log facility and priority level for a rule by using the severity directive.

In the following example, connections to the SSH daemon from any host in the example.com domain are logged to the default authpriv facility (because no facility value is specified) with a priority of emerg:

sshd : .example.com : severity emerg

It is also possible to specify a facility using the severity option. The following example logs any SSH connection attempts by hosts from the example.com domain to the local0 facility with a priority of alert:

sshd : .example.com : severity local0.alert

In practice, this example will not work until the syslog daemon (syslogd) is configured to log to the local0 facility. See the syslog.conf man page for information about configuring custom log facilities.

Access Control

Option fields also allow administrators to explicitly allow or deny hosts in a single rule by adding the allow or deny directive as the final option.

For instance, the following two rules allow SSH connections from client-1.example.com, but deny connections from client-2.example.com:

sshd : client-1.example.com : allow
sshd : client-2.example.com : deny

By allowing access control on a per-rule basis, the option field allows administrators to consolidate all access rules into a single file: either hosts.allow or hosts.deny. Some consider this an easier way of organizing access rules.

Shell Commands

Option fields allow access rules to launch shell commands through the following two directives:

  • spawn — Launches a shell command as a child process. This option directive can perform tasks like using /usr/sbin/safe_finger to get more information about the requesting client or create special log files using the echo command.

    In the following example, clients attempting to access Telnet services from the example.com domain are quietly logged to a special file:

    in.telnetd : .example.com \
    : spawn /bin/echo `/bin/date` from %h>>/var/log/telnet.log \
    : allow
  • twist — Replaces the requested service with the specified command. This directive is often used to set up traps for intruders (also called "honey pots"). It can also be used to send messages to connecting clients. The twist command must occur at the end of the rule line.

    In the following example, clients attempting to access FTP services from the example.com domain are sent a message via the echo command:

    vsftpd : .example.com \
    : twist /bin/echo "421 Bad hacker, go away!"

For more information about shell command options, see the hosts_options man page.

Expansions

Expansions, when used in conjunction with the spawn and twist directives, provide information about the client, server, and processes involved.

Below is a list of supported expansions:

  • %a — The client's IP address.

  • %A — The server's IP address.

  • %c — Supplies a variety of client information, such as the username and hostname, or the username and IP address.

  • %d — The daemon process name.

  • %h — The client's hostname (or IP address, if the hostname is unavailable).

  • %H — The server's hostname (or IP address, if the hostname is unavailable).

  • %n — The client's hostname. If unavailable, unknown is printed. If the client's hostname and host address do not match, paranoid is printed.

  • %N — The server's hostname. If unavailable, unknown is printed. If the server's hostname and host address do not match, paranoid is printed.

  • %p — The daemon process ID.

  • %s — Various types of server information, such as the daemon process and the host or IP address of the server.

  • %u — The client's username. If unavailable, unknown is printed.

The following sample rule uses an expansion in conjunction with the spawn command to identify the client host in a customized log file.

It instructs TCP wrappers that if a connection to the SSH daemon (sshd) is attempted from a host in the example.com domain, it should execute the echo command to log the attempt, including the client hostname (using the %h expansion), to a special file:

sshd : .example.com \
: spawn /bin/echo `/bin/date` access denied to %h>>/var/log/sshd.log \
: deny

Similarly, expansions can be used to personalize messages back to the client. In the following example, clients attempting to access FTP services from the example.com domain are informed that they have been banned from the server:

vsftpd : .example.com \
: twist /bin/echo "421 %h has been banned from this server!"
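Since spawn and twist ultimately run ordinary shell commands after expansion, the effect of an expansion can be previewed by substituting a value by hand. A rough sketch — the client name is made up, and the log goes to a local file instead of /var/log:

```shell
# Stand-in for what the %h expansion would produce (hypothetical host):
h="client.example.com"
# Roughly the command the earlier sshd spawn rule would run:
/bin/echo "$(/bin/date) access denied to $h" >> sshd.log
tail -n 1 sshd.log
```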

For a full explanation of available expansions, as well as additional access control options, see section 5 of the man page for hosts_access (man 5 hosts_access) and the man page for hosts_options.

PAM (Pluggable authentication module)

Note: this document is written in reference to Red Hat Linux 6.2+

PAM (Pluggable authentication module) is very diverse in the types of modules it provides, and one can accomplish many authentication tasks using it. However, PAM extends beyond typical authentication programs, allowing an admin to employ other system-critical features such as resource limiting, su protection, and TTY restrictions. Many of PAM's features are beyond the scope of this document; for further reading, refer to the links at the bottom of this document.

First, we must enable the pam_limits module inside /etc/pam.d/login. Add the following to the end of the file:

session required /lib/security/pam_limits.so

After adding the line above, the /etc/pam.d/login file should look something like this:

auth required /lib/security/pam_securetty.so
auth required /lib/security/pam_stack.so service=system-auth
auth required /lib/security/pam_nologin.so
account required /lib/security/pam_stack.so service=system-auth
password required /lib/security/pam_stack.so service=system-auth
session required /lib/security/pam_stack.so service=system-auth
session optional /lib/security/pam_console.so
session required /lib/security/pam_limits.so

The limits.conf file, located under the /etc/security directory, can be used to control and set resource policies. limits.conf is well commented and easy to use, so do take the time to skim over its contents. It is important to set resource limits on all your users so they can't perform denial of service attacks with such things as fork bombs; amongst other things, it can also stop 'stray' server processes from taking the system down.

It is also a good idea to separate rules for users, admins, and other (other being everything else). This is important because, for instance, a user fork bombing the system could disable an administrator's ability to log in and take proper action, or worse, crash the server.

Below is the default policy used on a server I've configured:

# For everyone (users and other)
* hard core 0
* - maxlogins 12
* hard nproc 50
* hard rss 20000

# For group wheel (admins)
@wheel - maxlogins 5
@wheel hard nproc 80
@wheel hard rss 75000

#End of file

The first set of rules says to prohibit the creation of core files (core 0), restrict the number of processes to 50 (nproc 50), restrict logins to 12 (maxlogins 12), and restrict memory usage to 20MB (rss 20000) for everyone except the superuser. The later rules, for admins, restrict logins to 5 (maxlogins 5), restrict the number of processes to 80 (nproc 80), and restrict memory usage to 75MB (rss 75000).

All the above only concerns users who have entered via the login prompt on your system. The asterisk (*) defines all users, and @wheel defines only users in the wheel group. Make sure to add your administrative users to the wheel group (this can be done in /etc/group).
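Group membership lives in the fourth colon-separated field of an /etc/group entry, so a wheel line can be inspected as below (the entry is a made-up sample, not read from a live system):

```shell
# Sample /etc/group entry for wheel; the members are the fourth
# colon-separated field.
line='wheel:x:10:root,alice'
echo "$line" | cut -d: -f4
```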

Finally edit the /etc/profile file and change the following line:

ulimit -c 1000000

to read:

ulimit -S -c 1000000 > /dev/null 2>&1

This modification is used to avoid error messages like 'Unable to reach limit' during login. On newer editions of Red Hat Linux, the latter ulimit setting is the default.
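The intent of the redirection is to silence both output streams: ulimit complains on stderr, and `2>&1` (note the direction of the arrow) points stderr at wherever stdout is already going. A minimal demonstration of the mechanism, using a captured variable in place of /dev/null:

```shell
# A message sent to stderr escapes a plain "> /dev/null"; adding
# 2>&1 sends it wherever stdout goes. Here stdout is captured,
# so the stderr message ends up in the variable:
out=$( { echo "stderr message" 1>&2; } 2>&1 )
echo "$out"   # prints: stderr message
```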

Further reading is available in The Linux-PAM System Administrators' Guide located at:

Tripwire: a very effective host intrusion detection system.

A crude yet effective intrusion detection system such as Tripwire can alert systems administrators to possible intrusion attempts by periodically verifying the integrity of a server's file systems. Systems intruders will often use trojan binaries for login, su, ps, and ls, etc. to cover their tracks and keep a low profile on the system. Under normal circumstances even astute systems administrators may not observe the intrusion because the trojan binaries mimic the system binaries so well.

One tried and true method to alert systems administrators of unexpected file system alterations is to use a software package such as Tripwire to keep a database of checksums and sizes of critical system files. Depending on the configuration, Tripwire can notify appropriate personnel if a critical file or directory is modified or deleted.

By using a strong checksum method such as MD5, Tripwire can identify with a high degree of certainty whether or not a file has been modified, unlike similar programs that use weaker algorithms such as CRC to calculate checksums.

Also, for maximum effectiveness Tripwire should be installed at the time the operating system is installed to ensure that the system does not already have any trojan binaries. Tripwire is only as reliable as the initial file system its database is based upon. If the file system has already been attacked, then Tripwire can only identify further damage to the filesystem, if that.

The Linux Open Source Edition

Recently, Tripwire, Inc. decided to open the source for a more recent version of the Tripwire package specifically for the Linux OS. Previously, a binary-only version of the software had been made available to the Linux community, and another version with an older, less-featured academic source license had been available to the public. The Linux open source edition includes most of the newer features of the software, such as the ability to alert specific administrators for different areas of alterations, while remaining compatible with the commercial version of the software.

Getting the Software

Binary packages are available for use with the Red Hat 7.0 distribution of Linux, though the binaries work fine on similar RPM-based distributions such as Mandrake. For other types of Linux distributions, Tripwire will need to be compiled from the source tarballs located on the same page. For Red Hat 7.0, the RPM binaries are also available on the second binary CD of the distribution.

Installing Tripwire

Although it isn't a difficult procedure to compile Tripwire from source, this article will be limited to describing the installation process from the binary RPM.

If Tripwire is downloaded from the website listed above, please be aware that the RPM is also tar/gzipped. Thus, to install the Tripwire RPM, issue the following commands as root:

  tar xvzf tripwire-2.3-47.i386.tar.gz

  rpm -ivh tripwire-2.3-47.i386.rpm

Once the software is installed with rpm, the installation shell script will need to be executed to finish the Tripwire installation. This is done by issuing the command:


as root. Note that all Tripwire associated files are kept in the /etc/tripwire directory.

Initial Tripwire Configuration

Because very few Linux installations are identical, Tripwire will need a fair amount of configuration to adequately protect the system. Configuration begins during the installation script launched above with the selection of site and local passphrases. These passphrases are the key to preventing intruders from modifying your Tripwire installation and circumventing its protection so strong passphrases are essential!

The site key is used to sign Tripwire's policy and configuration files, while the local key is used for signing the database files. For enterprise-wide installations, the use of multiple levels of passphrases makes Tripwire more manageable by allowing a site to split administration functions across a number of system administrators.

The installation script creates default policy and configuration files stored in /etc/tripwire as twpol.txt and twcfg.txt. These files are in cleartext and need to be removed from the system as soon as the encrypted versions are in place for obvious security reasons.

The default policy probably includes monitoring for a number of files not present on the local system, so it's important to trim these files out of the policy. The following procedures illustrate exactly how this is done.

The default policy should be installed by issuing the following command as root:

  /usr/sbin/twadmin -m P /etc/tripwire/twpol.txt

Next, generate the initial database using the following command as root:

  /usr/sbin/tripwire -m i

Note that the -m switch identifies the mode in which Tripwire is being executed, which is "i" for "initialization" in this case. Later, the "c" mode for "check" will be used. Expect the initialization to take quite a long time, even on a fast machine.

Customizing Tripwire's Configuration

Once an initial database is created, some customization is necessary to prevent the issuance of a large number of false alarms. These false alarms occur any time there is a discrepancy between the default policy and the local system's current configuration. To generate a listing of the discrepancies between the local system and the default policy, issue the following command as root:

  /usr/sbin/tripwire -m c | grep Filename >> twtest.txt

Note that this command will also take several minutes to complete. Once this listing has been generated, edit the policy file, /etc/tripwire/twpol.txt, and comment out or delete each of the filenames listed in twtest.txt.

Additionally, there are other files in the default policy that may not make sense to monitor on the local system. These include lock files (which identify that some process is in use) and pid files (which identify the process ID of some daemons). Since the files are likely to change often, if not at every system boot, they can cause Tripwire to generate false positives. To avoid such problems, comment out all of the /var/lock/subsys entries as well as the entry for /var/run.
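Commenting out the /var/lock/subsys entries can be scripted rather than done by hand; a sketch using sed on a sample policy fragment (run such a command on a copy before touching the real twpol.txt):

```shell
# A sample policy fragment containing one lock-file entry.
printf '/var/lock/subsys/httpd -> $(SEC_CONFIG) ;\n' > twpol.sample
# Prefix matching lines with '#'; in sed, '&' stands for the matched text.
sed 's|^/var/lock/subsys|#&|' twpol.sample
```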

Finalizing the Tripwire Configuration

Any time the tripwire policy file is edited, the policy needs to be reinstalled and the database will need to be recreated. As before, these tasks are accomplished by issuing the following commands as root:

  /usr/sbin/twadmin -m P /etc/tripwire/twpol.txt

/usr/sbin/tripwire -m i

At this point it wouldn't be a bad idea to repeat the customization procedures just to ensure that none of the unnecessary files listed in twtest.txt were omitted.

It's now safe to delete the clear text versions of the Tripwire policy and configuration files, which can be performed by issuing the following command as root:

  rm /etc/tripwire/twcfg.txt /etc/tripwire/twpol.txt

If they need to be restored, cleartext versions of these files can be created from the encrypted versions by issuing the following command (and providing the appropriate passphrases):

  /usr/sbin/twadmin -m p > /etc/tripwire/twpol.txt

Note that unlike before, the "p" in this command is lowercase.

Finally, it is desirable to save a copy of the database, at least initially and periodically if possible, to read-only media such as CD-R. Having read-only copies of the database file is the most reliable way to ensure that Tripwire's database is authentic.
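Verifying the live database against the read-only copy is then a straightforward checksum comparison. The idea is sketched below with two stand-in files; in practice the paths would be the CD-R copy and the database under /etc/tripwire or /var/lib/tripwire:

```shell
# Two stand-ins for the read-only copy and the live database;
# matching checksums mean the live copy is unmodified.
printf 'tripwire database' > db.readonly
printf 'tripwire database' > db.live
a=$(md5sum db.readonly | cut -d' ' -f1)
b=$(md5sum db.live | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "database matches read-only copy"
```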

Scheduling a Nightly Tripwire Analysis

Without regular checks of the filesystem, Tripwire is effectively useless, so this section will describe how to schedule nightly Tripwire analyses that are e-mailed to the system administrator.

First, one needs to create a shell script for generating the Tripwire reports. Creating the shell script can be more useful than just placing the command in the crontab because it allows the administrator to perform a filesystem check without needing to remember the exact syntax necessary for doing so.

Create the file "" in the directory /usr/local/bin that has the following contents:


#!/bin/sh
/usr/sbin/tripwire -m c | mail -s "Tripwire Report from HOST" root@localhost

Of course, HOST should be changed to the hostname of the system. Don't forget to make the shell script executable by root.

Then, schedule the script to execute nightly at 1:01am by adding the line:

  1 1 * * *     /usr/local/bin/

to root's crontab using the command:

  crontab -e

Tripwire will now submit nightly reports to the system administrator on the status of the file system's integrity.

Mailmon Installation

cd /usr/src/
tar -xvzf mailmon_1-3.tar.gz
cd /usr/src/MailMon
cp -f /usr/sbin/sendmail /usr/sbin/mon.bkp
sed -e s/$hostname/g > mailmon.temp;
cp -f mailmon.temp /usr/sbin/sendmail
cd /usr/sbin
chown root.mailtrap sendmail
chmod 755 sendmail
chattr +i sendmail
cd /var/log
touch mailmon.log
chmod 622 mailmon.log
touch mailmon.junk
chmod 622 mailmon.junk

mysql>create database mailmon2005;
mysql>grant all privileges on mailmon2005.* to mailmon2005@localhost identified by '123dsa';
mysql>use mailmon2005;

CREATE TABLE `limits` (
  `id` int(11) NOT NULL auto_increment,
  `user` varchar(20) NOT NULL default '',
  `speedlimit` int(11) NOT NULL default '0',
  `seconds` int(11) NOT NULL default '0',
  PRIMARY KEY (`id`)  -- auto_increment columns require a key
);

INSERT INTO `limits` VALUES (6, 'cpanel', 200, 3600);

CREATE TABLE `mailmon` (
  `user` varchar(20) NOT NULL default '',
  `timestamp` int(10) unsigned NOT NULL default '0',
  `script_name` varchar(255) NOT NULL default '',
  KEY `user` (`user`,`timestamp`)
);
mysql> quit;

Courtesy: Sanju Abraham

Vulnerability Scanner: Nessus

If you're looking for a vulnerability scanner, chances are you've come across a number of expensive commercial products and tools with long lists of features and benefits. Unfortunately, if you're in the same situation as most of us, you simply don't have the budget to implement fancy high-priced systems. You might have considered compromising by turning to free tools like nmap. However, you probably saw these tools as a compromise, as their feature sets didn't quite match the commercial offerings.

It's time that you learn how to use Nessus! This free tool offers a surprisingly robust feature-set and is widely supported by the information security community. It doesn't take long between the discovery of a new vulnerability and the posting of an updated script for Nessus to detect it. In fact, Nessus takes advantage of the Common Vulnerabilities and Exposures (CVE) architecture that facilitates easy cross-linking between compliant security tools.

The Nessus tool works a little differently than other scanners. Rather than purporting to offer a single, all-encompassing vulnerability database that gets updated regularly, Nessus supports the Nessus Attack Scripting Language (NASL), which allows security professionals to use a simple language to describe individual attacks. Nessus administrators then simply include the NASL descriptions of all desired vulnerabilities to develop their own customized scans.

With the release of Nessus 3 in December 2005, Tenable Network Security Inc., the company behind Nessus, introduced a complete overhaul of the product. The most current version at the time of this writing, Nessus 3.2, was released in March 2008. Nessus is now available for a wide variety of platforms, including Windows, various flavors of Linux, FreeBSD, Solaris and Mac OS X. Here's an overview of the significant changes in Nessus 3:

  • Nessus is now closed-source. The base product is still available for free. With the introduction of Nessus 3, however, Tenable moved Nessus from an open source to a commercial licensing model. In other words, while the software itself remains free, updated vulnerability information will come with a fee, at least for enterprises (home users may download updates for free). Tenable cites the need to invest in the future of Nessus as the motivation for moving to a proprietary license scheme.
  • Significant speed enhancements. In benchmarking tests performed by Tenable, Nessus 3 scans systems at about twice the speed of Nessus 2. This is due to optimizations in the scan engine and a complete overhaul of NASL.
  • Dramatic reduction in resource requirements. Nessus 3 uses significantly less memory and CPU cycles than Nessus 2, allowing simultaneous scanning of a larger number of hosts.

Nessus uses a modular architecture consisting of centralized servers that conduct scanning and remote clients that allow for administrator interaction. You may deploy Nessus scanning servers at various points within your enterprise and control them from a single client. This allows you to effectively scan segmented networks from multiple vantage points and conduct scans of large networks that require multiple servers running simultaneously.

If you're looking for a robust, inexpensive vulnerability scanning product, definitely take Nessus out for a test drive! The tips in this tutorial will guide you along the way.

Nessus Installation on Red Hat Linux

I understand that there are many ways to install and configure Nessus. This tutorial covers only one of them. This tutorial makes several assumptions:
1. You are competent with Windows, Linux and basic networking. If you don’t know how to use command line FTP for example, then this tutorial will be of no use to you.
2. You have 2 computers, one with Windows and the other with Red Hat, both in good working order. It also assumes that you have at least one supported compiler, such as GCC, installed on your Red Hat box.
3. This tutorial is written by me with no references or “borrowed” material. If something doesn’t work or something isn’t clear, yell at me because I am 100% responsible.


On your Red Hat box, from the directory of your choice, ftp to and log in anonymously. Once there, change to /pub/nessus/nessus-2.0.7/nessus-installer/ and download

Now that you have all of the software, it’s time to install. Let’s begin with the Nessus engine because it requires most of the work.

1. From the directory where you downloaded, simply type: sh The Nessus installation script will tell you that you need root privileges to complete the install; press ENTER to continue if you are logged in as root already.
2. Nessus will ask where you want it installed. /usr/local is the default so just hit ENTER when you see the prompt. At this point, Nessus will tell you that it is ready to compile. Hit ENTER and sit back while it compiles. It will take a little while. When it is finished, you’ll see a screen detailing the next steps. Hit ENTER.
3. Now, at this point you have to decide if you want Nessus to start up each time you boot your box or if you just want to start it when you feel like it. To start it when you feel like it, use /usr/local/sbin/nessusd -D. If you want to start it automatically when your box boots up, add /usr/local/sbin/nessusd -D & to /etc/rc.local.
4. Now, decide how you want to handle updating the plugins. You can do it each time the box boots by adding /usr/local/sbin/nessus-update-plugins & to /etc/rc.local. You can also copy the nessus-update-plugins script to /etc/cron.daily and it will go out each day and grab the updates.
5. OK, we now have to generate a certificate so go to /usr/local/sbin/ and type nessus-mkcert. This will prompt you for a bunch of information that you would see when generating any SSL certificate. Answer all the questions.
6. Now you have to add a user by running nessus-adduser from /usr/local/sbin. When run, provide a login ID of your choice. When it asks for pass or cert, hit ENTER to accept pass as the auth method. When asked for a password, provide one. Next you will see a blurb about user rules. Simply hit Ctrl-D and Nessus will verify your input. Type in "y" and Nessus will inform you that the user has been added.

Well, now all you have to do is reboot the box to launch Nessus, or start the daemon manually as shown in step 3.
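Either way, it is worth confirming that the daemon actually came up before moving on to the client. A quick check (assumes pgrep is available on the box):

```shell
# Report whether the Nessus daemon is currently running.
if pgrep -x nessusd > /dev/null; then
    echo "nessusd running"
else
    echo "nessusd not running"
fi
```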

OK, now all you have to do is run the installer. On the first screen, click next to continue. Next click the checkbox if you agree to the license, then hit next to continue. The next screen shows the install path, click next to continue. Select Binaries Only, then click next. The next screen names the program group, hit next to continue. It now has all the info to begin installation. Hit next and it will begin. Once this is done, look for the eyeball icon on your desktop. Launch it. It will ask about a nessusdb and all you need to do is say yes to create it.

OK, now you need to configure a session:
1) From the menu pulldowns, select COMMUNICATIONS, then CONNECT. Enter the IP address of your Nessus server, then enter the username you created on the Nessus server. You need to use password authentication, and it is your choice whether to save the password or not. Once you do that, hit CONNECT. Accept the certificate however you like (I always accept it permanently because I trust the source).
2) From the menu pulldowns, select SESSION then NEW.
3) This will open a window to enter your list of target hosts. Add your hosts in here.
4) Now, each tab has tons of options so I will hit on the key ones for now. Hit the portscan tab and enter the range 1-65535.
5) Hit the plug-ins tab and check “use session specific plugin set”, then hit the select plugins button, then select either all plug-ins (bad idea for a production box that you want to scan) or Non-DOS. Click OK.
6) Now, right click on your session (green book icon) and select EXECUTE.
7) On the next pop-up hit the EXECUTE button and you should see your scan underway.

At this point, you are golden. When the scan is done you can preview it or you can generate a report. I usually select HTML output.

In conclusion, I left out *tons* of options and configs but this tutorial is only intended to get you scanning. You’ll need to look into the docs to explore all this tool has to offer.

Happy scanning!

About Mod_Security and Mod_Dosevasive

What Are These Two Apache Modules and How Can They Help You?
Apache comes by default as a secure web server. However, that by no means implies that there are no methods of improving its security. On the contrary, there are two primary modules available for Apache that will increase its security strength tenfold: mod_security and mod_dosevasive.

It goes without explanation that the internet is a scary, dangerous place. Particularly for web servers, the internet has tons of potential attackers just waiting to attack and cause damage. For this reason, programmers have worked hard to create defense programs and modules, two of the most useful being the Mod_security and Mod_dosevasive modules available for Apache web servers. In the unsafe world of the internet, these modules were created in order to combat hackers and other perpetrators and prevent such attacks as nuke attacks, DoS attacks, and DDoS attacks, amongst others.

Starting with Mod_dosevasive, which can be downloaded from the Nuclear Elephant site, this module allows for evasive maneuvers in the case of a DoS, DDoS, or similar attack against an Apache web server. It is most effective when used in conjunction with a firewall or router. It can detect unusually high numbers of requests to the server on a per-second basis and deny them, thus evading a potential DoS or DDoS attack by preventing the attack from consuming bandwidth or disk space as it was intended to do. Mod_dosevasive is updated fairly often with improvements to prevent new forms of attacks.

Mod_security, which can also be downloaded from the ModSecurity site, is a constantly updated open source protection utility for servers. It acts in a similar fashion to a firewall, although it is most effective when used in conjunction with a firewall for additional protection, by recognizing and disrupting potential known or unknown server attacks. It is open source, meaning it can be easily edited and customized; in particular, the module can be customized with specific filtering rules for maximum efficiency.

Apache 1.3 and 2.0 Flood/DoS/DDoS Protection with mod_dosevasive (Avoiding Denial of Service Attacks)

With the widespread infection of many computers with viruses, and the ever increasing number of botnets, DoS and DDoS attacks can be quite frequent and can very easily bring a website to a halt for days. This article provides a module solution for Apache to help mitigate small HTTP DoS and DDoS attacks.

Download the latest version of mod_dosevasive from:

The latest version is 1.10.

Untar it:

tar zxvf mod_dosevasive_1.10.tar.gz

Change into the directory:

cd mod_dosevasive

Compile mod_dosevasive apache module (Apache 2):

/usr/local/apache/bin/apxs -i -a -c mod_dosevasive20.c

or the following for apache 1.3:

/usr/local/apache/bin/apxs -i -a -c mod_dosevasive.c

Replace /usr/local/apache with your path to apache.

Edit your httpd.conf (usually located in /usr/local/apache/conf/httpd.conf):

DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 50
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 10
DOSSystemCommand "su - someuser -c '/sbin/... %s ...'"

- DOSHashTableSize: the size of the hash table of URL and IP combinations
- DOSPageCount: the number of requests for the same page from the same IP during an interval that will cause that IP to be added to the block list.
- DOSSiteCount: the number of pages requested of a site by the same IP during an interval that will cause the IP to be added to the block list.
- DOSPageInterval: the interval at which the hash table of IPs and URLs is erased (in seconds)
- DOSSiteInterval: the interval at which the hash table of IPs is erased (in seconds)
- DOSBlockingPeriod: the time the IP is blocked (in seconds)
- DOSEmailNotify: can be used to send a notification email every time an IP is blocked
- DOSSystemCommand: the command executed when an IP is blocked. It can be used to block the IP at a firewall or router.
- DOSWhiteList: can be used to whitelist IPs
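As a rough illustration of what DOSPageCount and DOSPageInterval mean (a toy awk sketch, not mod_dosevasive's implementation): with DOSPageCount 2, more than two requests for the same page from the same IP inside one one-second interval put that IP on the block list. Input columns here are epoch second, client IP, and URL.

```shell
offenders=$(awk '{
    key = $1 SUBSEP $2 SUBSEP $3            # interval + IP + page
    if (++hits[key] > 2) blocked[$2] = 1    # over DOSPageCount
}
END { for (ip in blocked) print ip }' <<'EOF'
100 10.0.0.1 /index.html
100 10.0.0.1 /index.html
100 10.0.0.1 /index.html
100 10.0.0.2 /index.html
101 10.0.0.2 /index.html
EOF
)
echo "$offenders"   # prints 10.0.0.1
```

The second client makes the same number of total requests, but spread over two intervals, so it stays off the list.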

Although mod_dosevasive can be quite effective in some cases, in others it can cause more problems by blocking non-offending IPs.


Tuesday, May 26, 2009

'netstat' command MAN page

netstat - Print network connections, routing tables, interface statistics, masquerade connections, and multicast memberships


netstat [address_family_options] [--tcp|-t] [--udp|-u] [--raw|-w]
[--listening|-l] [--all|-a] [--numeric|-n] [--numeric-hosts]
[--numeric-ports] [--numeric-users] [--symbolic|-N]
[--extend|-e[--extend|-e]] [--timers|-o] [--program|-p] [--verbose|-v]

netstat {--route|-r} [address_family_options] [--extend|-e[--extend|-e]] [--verbose|-v] [--numeric|-n] [--numeric-hosts] [--numeric-ports] [--numeric-users] [--continuous|-c]

netstat {--interfaces|-i} [--all|-a] [--extend|-e[--extend|-e]] [--verbose|-v] [--program|-p] [--numeric|-n] [--numeric-hosts] [--numeric-ports] [--numeric-users] [--continuous|-c]

netstat {--groups|-g} [--numeric|-n] [--numeric-hosts] [--numeric-ports] [--numeric-users] [--continuous|-c]

netstat {--masquerade|-M} [--extend|-e] [--numeric|-n] [--numeric-hosts] [--numeric-ports] [--numeric-users] [--continuous|-c]

netstat {--statistics|-s} [--tcp|-t] [--udp|-u] [--raw|-w]

netstat {--version|-V}

netstat {--help|-h}


[--protocol={inet,unix,ipx,ax25,netrom,ddp}[,...]] [--unix|-x]
[--inet|--ip] [--ax25] [--ipx] [--netrom] [--ddp]


Netstat prints information about the Linux networking subsystem. The type of information printed is controlled by the first argument, as described below.

By default, netstat displays a list of open sockets. If you don't
specify any address families, then the active sockets of all configured
address families will be printed.

--route , -r
Display the kernel routing tables.

--groups , -g
Display multicast group membership information for IPv4 and IPv6.

--interfaces , -i
Display a table of all network interfaces.

--masquerade , -M
Display a list of masqueraded connections.

--statistics , -s
Display summary statistics for each protocol.


--verbose , -v
Tell the user what is going on by being verbose. Especially print some
useful information about unconfigured address families.

--numeric , -n
Show numerical addresses instead of trying to determine symbolic host, port or user names.

--numeric-hosts
Shows numerical host addresses but does not affect the resolution of port or user names.

--numeric-ports
Shows numerical port numbers but does not affect the resolution of host or user names.

--numeric-users
Shows numerical user IDs but does not affect the resolution of host or port names.

--protocol=family , -A
Specifies the address families (perhaps better described as low level
protocols) for which connections are to be shown. family is a comma
(',') separated list of address family keywords like inet, unix, ipx,
ax25, netrom, and ddp. This has the same effect as using the --inet,
--unix (-x), --ipx, --ax25, --netrom, and --ddp options.

The address family inet includes raw, udp and tcp protocol sockets.

-c, --continuous
This will cause netstat to print the selected information every second continuously.

-e, --extend
Display additional information. Use this option twice for maximum detail.

-o, --timers
Include information related to networking timers.

-p, --program
Show the PID and name of the program to which each socket belongs.

-l, --listening
Show only listening sockets. (These are omitted by default.)

-a, --all
Show both listening and non-listening sockets. With the --interfaces option, show interfaces that are not up.

-F
Print routing information from the FIB. (This is the default.)

-C
Print routing information from the route cache.


Active Internet connections (TCP, UDP, raw)

Proto
The protocol (tcp, udp, raw) used by the socket.

Recv-Q
The count of bytes not copied by the user program connected to this socket.

Send-Q
The count of bytes not acknowledged by the remote host.

Local Address
Address and port number of the local end of the socket. Unless the
--numeric (-n) option is specified, the socket address is resolved to
its canonical host name (FQDN), and the port number is translated into
the corresponding service name.

Foreign Address
Address and port number of the remote end of the socket. Analogous to
"Local Address."

State
The state of the socket. Since there are no states in raw mode and usually no states used in UDP, this column may be left blank. Normally this can be one of several values:

ESTABLISHED
The socket has an established connection.

SYN_SENT
The socket is actively attempting to establish a connection.

SYN_RECV
A connection request has been received from the network.

FIN_WAIT1
The socket is closed, and the connection is shutting down.

FIN_WAIT2
Connection is closed, and the socket is waiting for a shutdown from the remote end.

TIME_WAIT
The socket is waiting after close to handle packets still in the network.

CLOSE
The socket is not being used.

CLOSE_WAIT
The remote end has shut down, waiting for the socket to close.

LAST_ACK
The remote end has shut down, and the socket is closed. Waiting for acknowledgement.

LISTEN
The socket is listening for incoming connections. Such sockets are not included in the output unless you specify the --listening (-l) or --all (-a) option.

CLOSING
Both sockets are shut down but we still don't have all our data sent.

UNKNOWN
The state of the socket is unknown.

User
The username or the user id (UID) of the owner of the socket.

PID/Program name
Slash-separated pair of the process id (PID) and process name of the process that owns the socket. --program causes this column to be included. You will also need superuser privileges to see this information on sockets you don't own. This identification information is not yet available for IPX sockets.

Timer
(this needs to be written)

Active UNIX domain Sockets

Proto
The protocol (usually unix) used by the socket.

RefCnt
The reference count (i.e. attached processes via this socket).

Flags
The flags displayed are SO_ACCEPTON (displayed as ACC), SO_WAITDATA (W) or SO_NOSPACE (N). SO_ACCEPTON is used on unconnected sockets if their corresponding processes are waiting for a connect request. The other flags are not of normal interest.

Type
There are several types of socket access:

DGRAM
The socket is used in Datagram (connectionless) mode.

STREAM
This is a stream (connection) socket.

RAW
The socket is used as a raw socket.

RDM
This one serves reliably-delivered messages.

SEQPACKET
This is a sequential packet socket.

PACKET
Raw interface access socket.

UNKNOWN
Who ever knows what the future will bring us - just fill in here

State
This field will contain one of the following keywords:

FREE The socket is not allocated.

LISTENING
The socket is listening for a connection request. Such sockets are only included in the output if you specify the --listening (-l) or --all (-a) option.

CONNECTING
The socket is about to establish a connection.

CONNECTED
The socket is connected.

DISCONNECTING
The socket is disconnecting.

(empty)
The socket is not connected to another one.

UNKNOWN
This state should never happen.

PID/Program name
Process ID (PID) and process name of the process that has the socket open. More info is available in the Active Internet connections section written above.

Path
This is the path name by which the corresponding processes attached to the socket.

Active IPX sockets
(this needs to be done by somebody who knows it)

Active NET/ROM sockets
(this needs to be done by somebody who knows it)

Active AX.25 sockets
(this needs to be done by somebody who knows it)
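The states listed under Active Internet connections are netstat's names for the kernel's numeric TCP states. As a sketch, here is a small decoder for the hex "st" column of /proc/net/tcp (the same data netstat reads), following the numbering in the kernel's tcp_states.h:

```shell
# Map the hex state code from /proc/net/tcp to netstat's state name.
tcp_state() {
  case "$1" in
    01) echo ESTABLISHED ;; 02) echo SYN_SENT ;;
    03) echo SYN_RECV ;;    04) echo FIN_WAIT1 ;;
    05) echo FIN_WAIT2 ;;   06) echo TIME_WAIT ;;
    07) echo CLOSE ;;       08) echo CLOSE_WAIT ;;
    09) echo LAST_ACK ;;    0A) echo LISTEN ;;
    0B) echo CLOSING ;;     *)  echo UNKNOWN ;;
  esac
}

tcp_state 0A   # prints LISTEN
```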

'ps' command MAN page


Process status, information about processes running in memory. If you want a repetitive update of this status, use top.


ps option(s)
ps [-L]

-L List all the keyword options

This version of ps accepts 3 kinds of option:

- Unix98 options may be grouped and must be preceded by a dash.
- BSD options may be grouped and must not be used with a dash.
- GNU long options are preceded by two dashes.

Options of different types may be freely mixed. The PS_PERSONALITY environment variable provides more detailed control of ps behavior.

The Options below are listed side-by-side (unless there are differences).

Simple Process Selection:
-A a select all processes (including those of other users)
-a select all with a tty except session leaders
-d select all, but omit session leaders
-e select all processes
g really all, even group leaders (does nothing w/o SunOS settings)
-N negate selection
r restrict output to running processes
T select all processes on this terminal
x select processes without controlling ttys
--deselect negate selection

Process Selection by List:

-C select by command name
-G select by RGID (supports names)
-g select by session leader OR by group name
--Group select by real group name or ID
--group select by effective group name or ID
-p p --pid select by process ID (PID)
-s --sid select by session ID
-t --tty select by terminal (tty)
-u U select by effective user ID (supports names)
-U select by RUID (supports names)
--User select by real user name or ID
--user select by effective user name or ID

-123 implied --sid
123 implied --pid

Output Format Control:

-c Different scheduler info for -l option
-f Full listing
-j j Jobs format
-l l Long format
-O O Add the information associated with the space or comma separated
list of keywords specified, after the process ID, in the default
information display.

-o o Display information associated with the space or comma separated
list of keywords specified.
--format user-defined format
s display signal format
u display user-oriented format
v display virtual memory format
X old Linux i386 register format
-y do not show flags; show rss in place of addr

Output Modifiers:
C use raw CPU time for %CPU instead of decaying average
c true command name
e show environment after the command
f ASCII-art process hierarchy (forest)
-H show process hierarchy (forest)
h do not print header lines (repeat header lines in BSD personality)
-m m show all threads
-n set namelist file
n numeric output for WCHAN and USER
N specify namelist file
O sorting order (overloaded)
S include some dead child process data (as a sum with the parent)
-w w wide output
--cols set screen width
--columns set screen width
--forest ASCII art process tree
--html HTML escaped output
--headers repeat header lines
--no-headers print no header line at all
--lines set screen height
--nul unjustified output with NULs
--null unjustified output with NULs
--rows set screen height
--sort specify sorting order
--width set screen width
--zero unjustified output with NULs

-V V print version
L list all format specifiers
--help print help message
--info print debugging info
--version print version

A increase the argument space (DecUnix)
M use alternate core (try -n or N instead)
W get swap info from ... not /dev/drum (try -n or N instead)
k use /vmcore as c-dumpfile (try -n or N instead)

The "-g" option can select by session leader OR by group name. Selection by session leader is specified by many standards, but selection by group is the logical behavior that several other operating systems use. This ps will select by session leader when the list is completely numeric (as sessions are). Group ID numbers will work only when some group names are also specified.

The "m" option should not be used. Use "-m" or "-o" with a list. ("m" displays memory info, shows threads, or sorts by memory use)

The "h" option varies between BSD personality and Linux usage (not printing the header) Regardless of the current personality, you can use the long options --headers and --no-headers

Terminals (ttys, or screens of text output) can be specified in several forms: /dev/ttyS1, ttyS1, S1. Obsolete "ps t" (your own terminal) and "ps t?" (processes without a terminal) syntax is supported, but modern options ("T","-t" with list, "x", "t" with list) should be used instead.

The BSD "O" option can act like "-O" (user-defined output format with some common fields predefined) or can be used to specify sort order. Heuristics are used to determine the behavior of this option. To ensure that the desired behavior is obtained, specify the other option (sorting or formatting) in some other way.

For sorting, BSD "O" option syntax is O[+|-]k1[,[+|-]k2[,...]] Order the process listing according to the multilevel sort specified by the sequence of short keys from SORT KEYS, k1, k2, ... The `+' is quite optional, merely re-iterating the default direction on a key. `-' reverses direction only on the key it precedes.
The O option must be the last option in a single command argument, but specifications in successive arguments are catenated.

GNU sorting syntax is --sortX[+|-]key[,[+|-]key[,...]]
Choose a multi-letter key from the SORT KEYS section. X may be any convenient separator character. To be GNU-ish use `='. The `+' is really optional since default direction is increasing numerical or lexicographic order. For example, ps jax --sort=uid,-ppid,+pid

This ps works by reading the virtual files in /proc. This ps does not need to be suid kmem or have any privileges to run. Do not give this ps any special permissions.

This ps needs access to a namelist file for proper WCHAN display. The namelist file must match the current Linux kernel exactly for correct output.

To produce the WCHAN field, ps needs to read the file created when the kernel is compiled. The search path is:

/boot/`uname -r`
/lib/modules/`uname -r`/

The member used_math of task_struct is not shown, since crt0.s checks to see if math is present. This causes the math flag to be set for all processes, and so it is worthless. Programs swapped out to disk will be shown without command line arguments, and unless the c option is given, in brackets.

%CPU shows the cputime/realtime percentage. It will not add up to 100% unless you are lucky. It is time used divided by the time the process has been running.

The SIZE and RSS fields don't count the page tables and the task_struct of a proc; this is at least 12k of memory that is always resident. SIZE is the virtual size of the proc (code+data+stack).

Processes marked <defunct> are dead processes (so-called "zombies") that remain because their parent has not destroyed them properly. These processes will be destroyed by init(8) if the parent process exits.

ALIGNWARN 001 print alignment warning msgs
STARTING 002 being created
EXITING 004 getting shut down
PTRACED 010 set if ptrace (0) has been called
TRACESYS 020 tracing system calls
FORKNOEXEC 040 forked but didn't exec
SUPERPRIV 100 used super-user privileges
DUMPCORE 200 dumped core
SIGNALED 400 killed by a signal

D uninterruptible sleep (usually IO)
R runnable (on run queue)
S sleeping
T traced or stopped
Z a defunct ("zombie") process

For BSD formats and when the "stat" keyword is used, additional letters may be displayed:
W has no resident pages
< high-priority process
N low-priority task
L has pages locked into memory (for real-time and custom IO)



List every process on the system using standard syntax:
ps -e

List every process on the system using BSD syntax:
ps ax

List the top 10 CPU users:
ps -e -o pcpu,pid,user,args | sort -k 1 -nr | head -n 10

List every process except those running as root (real & effective ID)
ps -U root -u root -N

List every process with a user-defined format:
ps -eo pid,tt,user,fname,tmout,f,wchan

Odd display with AIX field descriptors:
ps -o "%u : %U : %p : %a"

Print only the process IDs of syslogd:
ps -C syslogd -o pid=

When displaying multiple fields, part of the output may be truncated. To avoid this, supply a width with the argument:

ps -e -o user:20,args
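In the same spirit as the examples above, a user-defined format can print a single field with its header blanked (a trailing "=" after a keyword suppresses that column's heading), which is handy in scripts:

```shell
# Print just the PID of the current shell; "pid=" blanks the header
# so the output is a bare number. $$ is the shell's own PID.
mypid=$(ps -o pid= -p $$ | tr -d ' ')
echo "$mypid"
```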

Since ps cannot run faster than the system and is run as any other scheduled process, the information it displays can never be exact.

'.htaccess' file in detail

htaccess (Hypertext Access) is the default name of Apache’s directory-level configuration file. It provides the ability to customize configuration directives defined in the main configuration file. The configuration directives need to be in .htaccess context and the user needs appropriate permissions.

Statements such as the following can be used to configure a server to send out customized documents in response to client errors such as “404: Not Found” or server errors such as “503: Service Unavailable” (see List of HTTP status codes):

ErrorDocument 404 /error-pages/not-found.html
ErrorDocument 503 /error-pages/service-unavailable.html

When setting up custom error pages, it is important to remember that these pages may be accessed from various different URLs, so the links in these error documents (including those to images, stylesheets and other documents) must be specified using URLs that are either absolute (e.g., starting with “http://”) or relative to the document root (starting with “/”). Also, the error page for “403: Forbidden” errors must be placed in a directory that is accessible to users who are denied access to other parts of the site. This is typically done by making the directory containing the error pages accessible to everyone by creating another .htaccess file in the /error-pages directory containing these lines:

Order allow,deny
Allow from all

Password protection

Make the user enter a name and password before viewing a directory.

AuthUserFile /home/newuser/www/stash/.htpasswd
AuthGroupFile /dev/null
AuthName "Protected Directory"
AuthType Basic

require user newuser

The same behavior can be applied to specific files inside a directory.

AuthUserFile /home/newuser/www/stash/.htpasswd
AuthName "Protected File"
AuthType Basic
Require valid-user

Now run this command to create a new password for the user ‘newuser’.

htpasswd /home/newuser/www/stash/.htpasswd newuser
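If the htpasswd utility is not on your PATH, an equivalent entry can be produced with OpenSSL's Apache-MD5 password scheme. This sketch writes to a temporary file rather than the tutorial's /home/newuser/www/stash/.htpasswd path, and "secret" is a placeholder password:

```shell
passfile=$(mktemp)   # stand-in for .htpasswd
printf 'newuser:%s\n' "$(openssl passwd -apr1 'secret')" > "$passfile"
cat "$passfile"      # newuser:$apr1$<salt>$<hash>
```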

Password unprotection

Unprotect a directory inside an otherwise protected structure:

Satisfy any

Extra secure method to force a domain to only use SSL and fix double login problem

If you really want to be sure that your server is only serving documents over an encrypted SSL channel (you wouldn't want visitors to submit an htaccess password prompt over an unencrypted connection), then you need to use the SSLRequireSSL directive with the +StrictRequire option turned on.

SSLOptions +StrictRequire
SSLRequire %{HTTP_HOST} eq "" #or
ErrorDocument 403

An interesting thing when using mod_ssl instead of mod_rewrite to force SSL is that Apache gives mod_ssl priority ABOVE mod_rewrite, so it will always require SSL. (You may be able to get around the first method using ...)

* An in-depth article about what this is doing can be found in the SSL Forum

Enable SSI

AddType text/html .shtml
AddHandler server-parsed .shtml
Options Indexes FollowSymLinks Includes

Deny users by IP address

Order allow,deny
Deny from
Deny from 123.123.7
Allow from all

This would ban anyone with the first listed IP address, and would also ban anyone with an IP address starting with 123.123.7: such an address, for example, would not gain access.
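The opposite policy, denying everyone except a trusted address, follows the same Order logic; a sketch, with 192.0.2.10 as a placeholder address:

```apache
Order deny,allow
Deny from all
Allow from 192.0.2.10
```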

Change the default directory page

DirectoryIndex homepage.html

Here, anyone visiting would see the homepage.html page, rather than the default index.html.


Redirect to another page

Redirect /page1.html /page2.html

If someone were to visit, he would be sent (with an HTTP status code of 302) to

Prevent hotlinking of images

The following .htaccess rules use mod_rewrite.
From specific domains

RewriteEngine on
RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?baddomain1\.com [NC,OR]
RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?baddomain2\.com [NC,OR]
RewriteCond %{HTTP_REFERER} ^http://([^/]+\.)?baddomain3\.com [NC]
RewriteRule \.(gif|jpg)$ [R,L]

Except from specific domains

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?*$ [NC]
RewriteRule \.(gif|jpg)$ [R,L]

Unless the image is displayed on the allowed site, browsers would see the image hotlink.gif instead.

Note: Hotlink protection using .htaccess relies on the client sending the correct "Referer" value in the HTTP GET request. Programs such as Windows Media Player send a blank referrer, so attempts to use .htaccess to protect movie files, for example, are ineffective.

Standardise web address to require www with SEO-friendly 301 Redirect

If an address without the “www.” prefix is entered, this will redirect to the page with the “www.” prefix.

Options +FollowSymLinks
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} !^$ #check that HTTP_HOST field is present
RewriteCond %{HTTP_HOST} !^www\.sitename\.com$ [NC] #case-insensitive
RewriteRule ^(.*)$$1 [R=301,L] #301 Redirect, very efficient

See the Ultimate htaccess File for more examples.

Directory rules

A .htaccess file controls the directory it is in, plus all subdirectories. However, by placing additional .htaccess files in the subdirectories, this can be overruled.

User permissions

The user permissions for .htaccess are controlled on server level with the AllowOverride directive which is documented in the Apache Server Documentation.

Friday, May 15, 2009

Custom PHP.ini and .htaccess rules

Describes in exhaustive detail how to change configuration settings and implement a custom php.ini file for use with the Apache Web Server.


  • When php run as Apache Module (mod_php)
  • When php run as CGI
  • When cgi'd php is run with wrapper (for FastCGI)

.htaccess code from Ultimate htaccess file


AddHandler application/x-httpd-php .php .htm


AddHandler php-cgi .php .htm


AddHandler phpini-cgi .php .htm
Action phpini-cgi /cgi-bin/php5-custom-ini.cgi


AddHandler fastcgi-script .fcgi
AddHandler php-cgi .php .htm
Action php-cgi /cgi-bin/php5-wrapper.fcgi


AddHandler php-cgi .php .htm
Action php-cgi /cgi-bin/php.cgi

When php run as Apache Module (mod_php)

in root .htaccess

SetEnv PHPRC /location/todir/containing/phpinifile

When php run as CGI

Place your php.ini file in the dir of your cgi’d php, in this case /cgi-bin/

Your .htaccess might look something like this:

AddHandler php-cgi .php .htm
Action php-cgi /cgi-bin/php5.cgi

When php is run as cgi

Create a wrapper script called phpini.cgi to export the directory that contains the php.ini file as PHPRC:

#!/bin/sh
export PHPRC=/home/site/
exec /user/htdocs/cgi-bin/php5.cgi

In your .htaccess or httpd.conf file

AddHandler php-cgi .php
Action php-cgi /cgi-bin/phpini.cgi
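The wrapper pattern above can be demonstrated end to end with a hypothetical stand-in for the php5.cgi binary, so it runs anywhere: the wrapper exports PHPRC and then execs the "interpreter", which sees the variable.

```shell
# Stand-in for php5.cgi: just reports what PHPRC it received.
phpstub=$(mktemp)
cat > "$phpstub" <<'EOF'
#!/bin/sh
echo "PHPRC=$PHPRC"
EOF
chmod +x "$phpstub"

# The wrapper, as in the tutorial: export PHPRC, exec the interpreter.
wrapper=$(mktemp)
cat > "$wrapper" <<EOF
#!/bin/sh
export PHPRC=/home/site/
exec $phpstub
EOF
chmod +x "$wrapper"

"$wrapper"   # prints PHPRC=/home/site/
```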

When cgi’d php is run with wrapper (for FastCGI)

You will have a shell wrapper script something like this:

#!/bin/sh
exec /user/htdocs/cgi-bin/php5.cgi

Change to:

#!/bin/sh
exec /user/htdocs/cgi-bin/php.cgi -c /home/user/php.ini


  • Since PHP 5.1.0, it is possible to refer to existing .ini variables from within .ini files. open_basedir = ${open_basedir} ":/new/dir"
  • In order for PHP to read it, config file must be named php.ini
  • SetEnv PHPRC only works when using PHP as CGI, not when using php as an Apache Module

Default locations to look for php.ini

PHP looks for custom php.ini in this order:

In the Current working directory

  1. Place your php.ini in the same directory as the php executable. For example, if the php executable is in /home/user1/htdocs/cgi-bin/, place your php.ini file at /home/user1/htdocs/cgi-bin/php.ini.

In the path specified by the environment variable PHPRC

  1. If you can use SetEnv in .htaccess files, specify the path to the directory containing php.ini in the root .htaccess file: SetEnv PHPRC /home/user1
  2. If you can't use SetEnv and you are using a wrapper shell script, place this in your wrapper shell script: export PHPRC=/home/user1

In the path that was defined at compile time with --with-config-file-path

  • The path in which the php.ini file is looked for can be overridden using the -c argument in command line mode. (cgi) /home/user1/htdocs/cgi-bin/php.cgi -c /home/user1/php.ini
  • With this option one can either specify a directory in which to look for php.ini, or specify a custom INI file directly (which does not need to be named php.ini): $ php -c /custom/directory/custom-file.ini my_script.php
  • Under Windows, the compile-time path is the Windows directory. Place php.ini in one of these directories: C:\windows or C:\winnt

php.ini is searched for in these locations in this order

NOTE: The Apache web server changes the directory to root at startup causing PHP to attempt to read php.ini from the root filesystem if it exists. If php-SAPI.ini exists (where SAPI is used SAPI, so the filename is e.g. php-cli.ini or php-apache.ini), it’s used instead of php.ini. SAPI name can be determined by php_sapi_name(). You can use also use the predefined PHP_SAPI constant instead of php_sapi_name()

Read this article: If your server is running Windows

  1. SAPI module specific location
    • PHPIniDir directive in Apache 2
    • -c command line option in CGI and CLI
    • php_ini parameter in NSAPI
    • PHP_INI_PATH environment variable in THTTPD
  2. The PHPRC environment variable (Before PHP 5.2.0 this was checked after the registry key mentioned below.)
  3. HKEY_LOCAL_MACHINE\SOFTWARE\PHP\IniFilePath (Windows Registry location)
  4. Current working directory (for CLI)
  5. The web server’s directory (for SAPI modules)
  6. Directory of PHP (If Windows)
  7. Windows directory (C:\windows or C:\winnt)
  8. --with-config-file-path compile time option

Directions for custom php.ini for Powweb Customers

Specific to Powweb, but can be used elsewhere.

SetEnv PHPRC /home/users/web/bEXAMPLE/pow.EXAMPLE
  1. In the folder above the htdocs (your ROOT) for the domain you want a custom php.ini file for, create an .htaccess file with the above content.
  2. Then create a blank php.ini also in your ROOT directory (/home/users/web/bEXAMPLE/pow.EXAMPLE). Next copy the powweb php.ini text to your php.ini file and customize it.
  3. You can test to make sure you are using the new php.ini by running phpinfo(); If you want multiple php.ini files, then use .htaccess files to set the PHPRC variable to the directory that the php.ini file you want to use is in.

File structure from ROOT directory

| `-- htdocs
| | |-- cgi-bin
| | | `-- dl.cgi
| | `-- index.html
| |-- phpsessions
| |-- php.ini
| `-- .htaccess
| `-- htdocs
| | |-- cgi-bin
| | | `-- dl.cgi
| | `-- index.html
| |-- phpsessions
| |-- php.ini
| `-- .htaccess
`-- htdocs
| |-- cgi-bin
| | `-- dl.cgi
| `-- index.html
|-- phpsession
|-- php.ini
`-- .htaccess

Powweb File Permissions

Remember to chmod 640 all .htaccess files, chmod 600 your php.ini files, chmod 600 your php files, and chmod 705 your cgi scripts. If you don't want ftp users to be able to change a file, then chmod 400.


What’s the difference between PHP-CGI and PHP as an Apache module?

Benefits of PHP-CGI

  • php-cgi is more secure. PHP runs as your user rather than as dhapache. That means you can put your database passwords in a file readable only by you, and your PHP scripts can still access it!
  • php-cgi is more flexible. Because of security concerns when running PHP as an Apache module, we disabled certain commands in the non-CGI PHP. This can cause install problems with certain popular PHP scripts if you don’t run PHP as a CGI!
  • php-cgi is just as fast as running PHP as an Apache module, and we include more default libraries.

Caveats of PHP-CGI

If one of these is a show-stopper for you, you can easily switch to running PHP as an Apache module and not CGI, but be prepared for a bunch of potential security and ease-of-use issues! If you don’t know what any of these drawbacks mean, you’re fine just using the default setting of PHP-CGI and not worrying about anything!

  • Variables in the URL which are not regular ?foo=bar variables won’t work without using mod_rewrite.
  • Custom php directives in .htaccess files (php_include_dir /home/user;/home/user/example_dir) won’t work.
  • The $_SERVER['SCRIPT_NAME'] variable will return the php.cgi binary rather than the name of your script
  • Persistent database connections will not work. PHP’s mysql_pconnect() function will just open a new connection because it can’t find a persistent one.

PHP’s configuration file

The configuration file (called php3.ini in PHP 3, and simply php.ini as of PHP 4) is read when PHP starts up. For the server module versions of PHP, this happens only once, when the web server is started. For the CGI and CLI versions, php.ini is read on every invocation.

Running PHP as Apache module (mod_php)

When using PHP as an Apache module, you can also change the configuration settings using directives in Apache configuration files (e.g. httpd.conf) and .htaccess files. You will need one of these privileges:

AllowOverride Options
AllowOverride All

With PHP 4 and PHP 5, there are several Apache directives that allow you to change the PHP configuration from within the Apache configuration files.

NOTE: With PHP 3, there are Apache directives that correspond to each configuration setting in php3.ini, except that the name is prefixed by “php3_”.

php_value name value
Sets the value of the specified directive. Can be used only with PHP_INI_ALL and PHP_INI_PERDIR type directives. To clear a previously set value use none as the value.
php_flag name on|off
Used to set a boolean configuration directive. Can be used only with PHP_INI_ALL and PHP_INI_PERDIR type directives.
php_admin_value name value
Sets the value of the specified directive. This cannot be used in .htaccess files. Any directive type set with php_admin_value cannot be overridden by .htaccess or virtual host directives. To clear a previously set value, use none as the value.
php_admin_flag name on|off
Used to set a boolean configuration directive. This cannot be used in .htaccess files. Any directive type set with php_admin_flag cannot be overridden by .htaccess or virtual host directives.

NOTE: Don’t use php_value to set boolean values; use php_flag instead.
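To illustrate the difference, here is a hypothetical virtual-host sketch (the server name and paths are made up): settings applied with php_admin_* cannot be overridden later from .htaccess files or by ini_set() in scripts.

```apache
<VirtualHost *:80>
    ServerName www.example.com
    DocumentRoot /home/example/htdocs

    # Tunable: may still be overridden in .htaccess or by ini_set()
    php_value include_path ".:/home/example/lib/php"

    # Locked down by the admin: .htaccess and ini_set() cannot change these
    php_admin_value open_basedir "/home/example/htdocs:/tmp"
    php_admin_flag  safe_mode    on
</VirtualHost>
```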

Change php settings in .htaccess or httpd.conf

mod_php .htaccess example

Add settings to a .htaccess file with ‘php_flag’ like this:

php_flag register_globals off
php_flag magic_quotes_gpc on

In .htaccess, only true/false on/off flags can be set using php_flag. To set other values you need to use php_value, like this:

php_value upload_max_filesize 20M

A PHP_INI_SYSTEM directive can be configured per-directory by placing it inside a <Directory> block in httpd.conf (the path below is illustrative):

# Selectively enable APC for wildly popular directories
# apc.enabled is Off in php.ini to reduce memory use
<Directory /usr/local/apache/htdocs/popular>
    php_flag apc.enabled On
</Directory>

NOTE: In order for these settings to work in your .htaccess file, you will need “AllowOverride Options” (or “AllowOverride All”) in effect for the directory, if it isn’t already allowed.
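A minimal httpd.conf sketch of that prerequisite (the directory path is illustrative):

```apache
# Without this, php_flag/php_value lines in .htaccess are either ignored
# (if .htaccess is not read at all) or trigger a 500 error.
<Directory /home/example/htdocs>
    AllowOverride Options
</Directory>
```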

Src: How to change configuration settings

<IfModule mod_php5.c>
  php_value include_path ".:/home/askapache/lib/php"
  php_admin_flag safe_mode on
</IfModule>

<IfModule mod_php4.c>
  php_value include_path ".:/home/askapache/lib/php"
  php_admin_flag safe_mode on
</IfModule>

<IfModule mod_php3.c>
  php3_include_path ".:/home/askapache/lib/php"
  php3_safe_mode on
</IfModule>

Modify PHP configuration via Windows Registry

When running PHP on Windows, the configuration values can be modified on a per-directory basis using the Windows registry. The configuration values are stored in the registry key HKLM\SOFTWARE\PHP\Per Directory Values, in sub-keys corresponding to the path names. For example, configuration values for the directory c:\inetpub\wwwroot would be stored in the key HKLM\SOFTWARE\PHP\Per Directory Values\c\inetpub\wwwroot. The settings for the directory are active for any script running from this directory or any subdirectory of it. The values under the key should have the name of the PHP configuration directive and the string value. PHP constants in the values are not parsed. However, only configuration values changeable in PHP_INI_USER can be set this way; PHP_INI_PERDIR values cannot.
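As a sketch, such per-directory values could be added with a .reg file like the following (the directives shown, display_errors and default_charset, are illustrative picks; both are changeable at PHP_INI_USER level, and note the values must be plain strings since PHP constants are not parsed):

```reg
Windows Registry Editor Version 5.00

; Values apply to scripts under c:\inetpub\wwwroot and its subdirectories
[HKEY_LOCAL_MACHINE\SOFTWARE\PHP\Per Directory Values\c\inetpub\wwwroot]
"display_errors"="1"
"default_charset"="UTF-8"
```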

Methods to modify PHP configuration

Regardless of how you run PHP, you can change certain values at runtime of your scripts through ini_set().

If you are interested in a complete list of configuration settings on your system with their current values, you can execute the phpinfo() function, and review the resulting page. You can also access the values of individual configuration directives at runtime using ini_get() or get_cfg_var().
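A short sketch of the runtime functions mentioned above (display_errors is used only because it is a PHP_INI_ALL directive and therefore changeable via ini_set()):

```php
<?php
// ini_set() returns the previous value on success, or false on failure.
$old = ini_set('display_errors', '1');

// ini_get() reports the current (possibly overridden) value...
var_dump(ini_get('display_errors'));      // string(1) "1"

// ...while get_cfg_var() reports the original value from the configuration file.
var_dump(get_cfg_var('display_errors'));
```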

No input file specified

One of the most common reasons why you get

No input file specified

(AKA ‘the second most useful error message in the world’) is that you have set doc_root (in php.ini) to a value that does not match the DocumentRoot defined in the Apache configuration.

This is the same for other webservers. For example, on lighttpd, make sure the server.document-root value is the same as what is defined as doc_root in php.ini.
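For example, with an illustrative path, these are the settings that must agree:

```ini
; php.ini
doc_root = "/var/www/htdocs"

# httpd.conf (Apache)
DocumentRoot "/var/www/htdocs"

# lighttpd.conf
server.document-root = "/var/www/htdocs"
```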


Sylesh H