Friday, August 3, 2012

"No POST", "system won't boot", and "no video output" checklist

SkyHi @ Friday, August 03, 2012

This checklist is a compilation of troubleshooting ideas from many forum members. It's very important to actually perform every step in the checklist if you want to effectively troubleshoot your problem.
 

1. Did you carefully read the motherboard owner's manual?
 
2. Did you plug in the 4/8-pin CPU power connector located near the CPU socket? If the motherboard has 8 pins and your PSU only has 4 pins, you can use the 4-pin connector. The 4-pin connector USUALLY goes on the 4 pins located closest to the CPU. If the motherboard has an 8-pin connector with a cover over 4 pins, you can remove the cover and use an 8-pin plug if your power supply has one. This power connector provides power to the CPU. Your system has no chance of POSTing without this connector plugged in! Check your motherboard owner's manual for more information about the CPU power connector. The CPU power connector is usually referred to as the "12v ATX" connector in the owner's manual. This is easily the most common new-builder mistake.
 
http://i44.tinypic.com/jtkves.jpg
http://i40.tinypic.com/qx9gdy.png
http://i40.tinypic.com/r6zecn.png
 
3. Did you install the standoffs under the motherboard? Did you place them so they all align with the screw holes in the motherboard, with no extra standoffs touching the board in the wrong place? A standoff installed in the wrong place can cause a short and prevent the system from booting.
 
http://i42.tinypic.com/fwq1ps.jpg
http://i39.tinypic.com/98a7u0.jpg
 
4. Did you verify that the video card is fully seated? (may require more force than a new builder expects.)
 
5. Did you attach all the required power connector(s) to the video card? (some need two, some need none, many need one.)
 
http://i43.tinypic.com/2hcq17b.png
http://i44.tinypic.com/9fpds8.png
 
6. Have you tried booting with just one stick of RAM installed? (Try each stick of RAM individually in each RAM slot.) If you can get the system to boot with a single stick of RAM, you should manually set the RAM speed, timings, and voltage to the manufacturer's specs in the BIOS before attempting to boot with all sticks of RAM installed. Nearly all motherboards default to the standard RAM voltage (1.8v for DDR2 & 1.5v for DDR3). If your RAM is rated to run at a voltage other than the standard voltage, the motherboard will underclock the RAM for compatibility reasons. If you want the system to be stable and to run the RAM at its rated specs, you should manually set those values in the BIOS. Many boards don't supply the RAM with enough voltage when using "auto" settings, causing stability issues.
 
7. Did you verify that all memory modules are fully inserted? (may require more force than a new builder expects.) It's a good idea to install the RAM on the motherboard before it's in the case.
 
8. Did you verify in the owner's manual that you're using the correct RAM slots? Many i7 motherboards require RAM to be installed in the slots starting with the one farthest from the CPU, which is the opposite of many dual-channel motherboards.
 
9. Did you remove the plastic guard over the CPU socket? (this actually comes up occasionally.)
 
10. Did you install the CPU correctly? There will be an arrow on the CPU that needs to line up with an arrow on the motherboard CPU socket. Be sure to pay special attention to that section of the manual!
   
11. Are there any bent pins on the motherboard/CPU? This especially applies if you tried to install the CPU with the plastic cover on or with the CPU facing the wrong direction.
 
12. If using an aftermarket CPU cooler, did you get any thermal paste on the motherboard, CPU socket, or CPU pins? Did you use the smallest amount you could? Here are a few links that may help:
       
13. Is the CPU fan plugged in? Some motherboards will not boot without detecting that the CPU fan is plugged in to prevent burning up the CPU.
 
14. If using a stock cooler, was the thermal material on the base of the cooler free of foreign material, and did you remove any protective covering? If the stock cooler has push-pins, did you ensure that all four pins snapped securely into place? (The easiest way to install the push-pins is outside the case sitting on a non-conductive surface like the motherboard box. Read the instructions! The push-pins have to be turned the OPPOSITE direction as the arrows for installation.) See the link in step 10.
 
15. Are any loose screws lying on the motherboard, or jammed against it? Are there any wires run directly under the motherboard? You should not run wires under the motherboard since the solder joints on the underside of the motherboard can cut into the insulation on the wires and cause a short. Some cases have space to run wires on the back side of the motherboard tray.
 
16. Did you ensure you discharged all static electricity before touching any of your components? Computer components are very sensitive to static electricity. It takes much less voltage than you can see or feel to damage components. You should implement some best practices to reduce the probability of damaging components. These practices should include either wearing an anti-static wrist strap or always touching a metal part of the case with the power supply installed and plugged in, but NOT turned on. You should avoid building or working on a computer on carpet. Working on a smooth surface is best if at all possible. You should also keep Fluffy the cat, children, and Fido away from computer components.
 
17. Did you install the system speaker (if provided) so you can check beep-codes in the manual? A system speaker is NOT the same as normal speakers that plug into the back of the motherboard. A system speaker plugs into a header on the motherboard that's usually located near the front panel connectors. The system speaker is a critical component when trying to troubleshoot system problems. You are flying blind without a system speaker. If your case or motherboard didn't come with a system speaker you can buy one for cheap here: http://www.cwc-group.com/casp.html
 
http://i43.tinypic.com/2lsjlzr.jpg
 
18. Did you read the instructions in the manual on how to properly connect the front panel plugs? (Power switch, power LED, reset switch, HD activity LED) Polarity does not matter with the power and reset switches. If the power or drive activity LEDs do not come on, reverse the connections. For troubleshooting purposes, disconnect the reset switch. If it's shorted, the machine either will not POST at all, or it will endlessly reboot.
 
http://i42.tinypic.com/2cftmzb.jpg
http://i42.tinypic.com/20fc18g.jpg
 
19. Did you turn on the power supply switch located on the back of the PSU? Is the power plug on a switch? If it is, is the switch turned on? Is the outlet on a GFI/GFCI circuit? If it is, make sure it isn't tripped. You should also make sure the power cord isn't causing the problem. Try swapping it for a known good cord if you have one available.
 
20. Is your CPU supported by the BIOS revision installed on your motherboard? Most motherboard manufacturers post a CPU compatibility list on their website.
 
21. Have you tried resetting the CMOS? The motherboard manual will have instructions for your particular board.
   
22. If you have integrated video and a video card, try the integrated video port. Resetting the BIOS can make it default back to the onboard video.
 
23. Make certain all cables and components including RAM and expansion cards are tight within their sockets. Here's a thread where that was the cause of the problem.
 

I also wanted to add some suggestions that user jsc often posts. This is a direct quote from him:
 
"Pull everything except the CPU and HSF. Boot. You should hear a series of long single beeps indicating memory problems. Silence here indicates, in probable order, a bad PSU, motherboard, or CPU - or a bad installation where something is shorting and shutting down the PSU.
 
To eliminate the possibility of a bad installation where something is shorting and shutting down the PSU, you will need to pull the motherboard out of the case and reassemble the components on an insulated surface. This is called "breadboarding" - from the 1920s homebrew radio days. I always breadboard a new or recycled build. It lets me test components before I go through the trouble of installing them in a case.
 
If you get the long beeps, add a stick of RAM. Boot. The beep pattern should change to one long and two or three short beeps. Silence indicates that the RAM is shorting out the PSU (very rare). Long single beeps indicates that the BIOS does not recognize the presence of the RAM.
 
If you get the one long and two or three short beeps, test the rest of the RAM. If good, install the video card and any needed power cables and plug in the monitor. If the video card is good, the system should successfully POST (one short beep, usually) and you will see the boot screen and messages.
 
Note - an inadequate PSU will cause a failure here or any step later.
Note - you do not need drives or a keyboard to successfully POST (generally a single short beep).
 
If you successfully POST, start plugging in the rest of the components, one at a time."
 

If you suspect the PSU is causing your problems, below are some suggestions by jsc for troubleshooting the PSU. Proceed with caution. I will not be held responsible if you get shocked or fry components.
 
"The best way to check the PSU is to swap it with a known good PSU of similar capacity. Brand new, out of the box, untested does not count as a known good PSU. PSU's, like all components, can be DOA.
 
Next best thing is to get (or borrow) a digital multimeter and check the PSU.
 
Yellow wires should be +12 volts. Red wires: +5 volts, orange wires: +3.3 volts, blue wire: -12 volts, violet wire: +5 volts standby (always on). Tolerances are +/- 5% except for the -12 volts which is +/- 10%.
 
The gray wire (the "power good" signal) is really important. It should go from 0 to +5 volts when you turn the PSU on with the case switch. The CPU needs this signal to boot.
 
You can turn on the PSU by completely disconnecting the PSU and using a paperclip or jumper wire to short the green wire to one of the neighboring black wires.
   
This checks the PSU under no load conditions, so it is not completely reliable. But if it can not pass this, it is dead. Then repeat the checks with the PSU plugged into the computer to put a load on the PSU. You can carefully probe the pins from the back of the main power connector."
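
As a rough companion to those checks, here is a minimal sketch in Python (purely illustrative, not part of jsc's procedure) that applies the nominal voltages and the +/-5% / +/-10% tolerances quoted above to a set of multimeter readings:

# Illustrative only: compare measured PSU rail voltages against ATX nominal
# values and the tolerances quoted above (+/-5%, except -12V at +/-10%).
RAILS = {
    "+12V (yellow)":  (12.0,  0.05),
    "+5V (red)":      (5.0,   0.05),
    "+3.3V (orange)": (3.3,   0.05),
    "-12V (blue)":    (-12.0, 0.10),
    "+5VSB (violet)": (5.0,   0.05),
}

def check_rails(measured):
    """measured: dict mapping rail name -> voltage read with a multimeter."""
    for rail, (nominal, tol) in RAILS.items():
        v = measured.get(rail)
        if v is None:
            continue
        low = nominal - abs(nominal) * tol
        high = nominal + abs(nominal) * tol
        ok = min(low, high) <= v <= max(low, high)
        print(f"{rail:15s} {v:7.2f} V  (allowed {min(low, high):.2f} to {max(low, high):.2f})  "
              + ("OK" if ok else "OUT OF SPEC"))

# Hypothetical example readings:
check_rails({"+12V (yellow)": 11.78, "+5V (red)": 5.21, "+3.3V (orange)": 3.31})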
 

Here's a link to jsc's breadboarding thread:
   

If you make it through the entire checklist without success, Proximon has put together another great thread with a few more ideas here:
   
Here's a couple more sites that may help:
 



RAID: Why and When

SkyHi @ Friday, August 03, 2012

RAID stands for Redundant Array of Independent Disks (some are taught "Inexpensive" to indicate that they are "normal" disks; historically there were internally redundant disks which were very expensive; since those are no longer available the acronym has adapted).
At the most general level, a RAID is a group of disks that act on the same reads and writes. SCSI IO is performed on a volume ("LUN"), and these operations are distributed to the underlying disks in a way that provides a performance increase, a redundancy increase, or both. The performance increase is a function of striping: data is spread across multiple disks to allow reads and writes to use all the disks' IO queues simultaneously. Redundancy is a function of mirroring: entire disks can be kept as copies, or individual stripes can be written multiple times. Alternatively, in some types of RAID, instead of copying data bit for bit, redundancy is gained by creating special stripes that contain parity information, which can be used to recreate any lost data in the event of a hardware failure.
There are several configurations that provide different levels of these benefits, which are covered here, and each one has a bias toward performance, or redundancy.
An important aspect in evaluating which RAID level will work for you is each level's advantages and hardware requirements (e.g. the minimum number of drives).
Another important aspect of most of these types of RAID (0, 1, 5) is that they do not ensure the integrity of your data, because they are abstracted away from the actual data being stored. So RAID does not protect against corrupted files: if a file is corrupted by any means, the corruption will be mirrored (or folded into the parity) and committed to the disk regardless. However, RAID-Z (through ZFS checksumming) does claim to provide integrity protection for your data.

Direct attached RAID: Software and Hardware

There are two layers at which RAID can be implemented on direct attached storage: hardware and software. In true hardware RAID solutions, there is a dedicated hardware controller with a processor dedicated to RAID calculations and processing. It also typically has a battery-backed cache module so that data can be written to disk, even after a power failure. This helps to eliminate inconsistencies when systems are not shut down cleanly. Generally speaking, good hardware controllers are better performers than their software counterparts, but they also have a substantial cost and increase complexity.
Software RAID typically does not require a controller, since it doesn't use a dedicated RAID processor or a separate cache. Typically these operations are handled directly by the CPU. In modern systems, these calculations consume minimal resources, though some marginal latency is incurred. RAID is handled by either the OS directly, or by a faux controller in the case of FakeRAID.
Generally speaking, if someone is going to choose software RAID, they should avoid FakeRAID and use the OS-native package for their system, such as Dynamic Disks in Windows, mdadm/LVM in Linux, or ZFS in Solaris, FreeBSD, and related systems. FakeRAID uses a combination of hardware and software, which results in the initial appearance of hardware RAID but the actual performance of software RAID. Additionally, it is often extremely difficult to move the array to another adapter (should the original fail).

Centralized Storage

The other place RAID is common is on centralized storage devices, usually called a SAN (Storage Area Network) or a NAS (Network Attached Storage). These devices manage their own storage and allow attached servers to access the storage in various fashions. Since multiple workloads are contained on the same few disks, having a high level of redundancy is generally desirable.
The main difference between a NAS and a SAN is block-level vs. file-system-level exports. A SAN exports a whole "block device" such as a partition or logical volume (including those built on top of a RAID array). Examples of SAN protocols include Fibre Channel and iSCSI. A NAS exports a "file system" such as a file or folder. Examples of NAS protocols include CIFS/SMB (Windows file sharing) and NFS.

RAID 0

Good when: Speed at all costs!

Bad when: You care about your data.

RAID0 (aka Striping) is sometimes referred to as "the amount of data you will have left when a drive fails". It really runs against the grain of "RAID", where the "R" stands for "Redundant".
RAID0 takes your block of data, splits it up into as many pieces as you have disks (2 disks → 2 pieces, 3 disks → 3 pieces) and then writes each piece of the data to a separate disk.
This means that a single disk failure destroys the entire array (because you have Part 1 and Part 2, but no Part 3), but it provides very fast disk access.
It is not often used in production environments, but it could be used in a situation where you have strictly temporary data that can be lost without repercussions. It is used somewhat commonly for caching devices (such as an L2ARC device).
The total usable disk space is the sum of the capacities of all the disks in the array (e.g. 3x 1TB disks = 3TB of space).
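
As a rough illustration of striping (a simplified sketch in Python, not how any particular controller actually chunks data), consecutive chunks are dealt round-robin across the member disks, which is also why losing any one disk leaves only useless fragments:

# Simplified RAID 0 model: split data into fixed-size chunks and deal them
# round-robin across N disks. The chunk size here is tiny, for demonstration only.
CHUNK = 4  # bytes per chunk

def stripe(data, disks):
    layout = [[] for _ in range(disks)]
    for i in range(0, len(data), CHUNK):
        layout[(i // CHUNK) % disks].append(data[i:i + CHUNK])
    return layout

for n, d in enumerate(stripe(b"ABCDEFGHIJKLMNOPQRSTUVWX", 3)):
    print(f"disk {n}: {d}")
# Reads and writes hit all three disks in parallel, but if any one disk fails,
# every third chunk of every file is gone and the whole array is lost.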

RAID 1

Good when: You have a limited number of disks but need redundancy

Bad when: You need a lot of storage space

RAID 1 (aka Mirroring) takes your data and duplicates it identically on two or more disks (although typically only 2 disks). It is the only way to ensure data redundancy when you have less than three disks.
RAID 1 sometimes improves read performance. Some implementations of RAID 1 will read from both disks to double the read speed. Some will only read from one of the disks, which does not provide any additional speed advantages. Others will read the same data from both disks, ensuring the array's integrity on every read, but this will result in the same read speed as a single disk.
It is typically used in small servers that have very little disk expansion, such as 1RU servers that may only have space for two disks or in workstations that require redundancy. Because of its high overhead of "lost" space, it can be cost prohibitive with small-capacity, high-speed (and high-cost) drives, as you need to spend twice as much money to get the same level of usable storage.
The total usable disk space is the size of the smallest disk in the array (e.g. 2x 1TB disks = 1TB of space).

RAID 1E

The 1E RAID level is similar to RAID 1 - data is always written to (at least) two disks. But unlike RAID1, it allows for an odd number of disks by simply interleaving the data blocks among several disks.
Performance characteristics are similar to RAID 1, while fault tolerance is similar to RAID 10.
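A toy model of that interleaving (one simplified layout sketch, not any vendor's exact on-disk format) shows how every block still gets two copies on two different disks even with an odd number of drives:

# Toy RAID 1E layout: each logical block is written to two adjacent disks,
# rotating around the array, so an odd disk count works and usable space is n/2 disks.
def place(block, disks):
    return block % disks, (block + 1) % disks

for block in range(6):
    primary, mirror = place(block, 3)
    print(f"block {block}: copies on disk {primary} and disk {mirror}")
# Any single disk can fail and every block keeps one surviving copy, which is
# what gives RAID 1E its RAID 10-like fault tolerance.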

RAID 10

Good when: You want speed and redundancy

Bad when: You can't afford to lose half your disk space

RAID 10 is a combination of RAID 1 and RAID 0. The order of the 1 and 0 is very important. Say you have 8 disks: it will create 4 RAID 1 arrays, and then apply a RAID 0 stripe across the top of the 4 RAID 1 arrays. It requires at least 4 disks, and additional disks have to be added in pairs.
This means that one disk from each pair can fail. So if you have sets A, B, C and D with disks A1, A2, B1, B2, C1, C2, D1, D2, you can lose one disk from each set (A,B,C or D) and still have a functioning array.
However, if you lose two disks from the same set, then the array is totally lost. You can lose up to (but not guaranteed) 50% of the disks.
You are guaranteed high speed and high availability in RAID 10.
RAID 10 is a very common RAID level, especially with high-capacity drives, where the long rebuild window after a single disk failure makes a second disk failure more likely before the array is rebuilt. During recovery the performance degradation is much lower than with its RAID 5 counterpart, as it only has to read from one drive (the surviving mirror) to reconstruct the data.
The available disk space is 50% of the sum of the total space (e.g. 8x 1TB drives = 4TB of usable space). If you use disks of different sizes, only the smallest size will be used from each disk.

RAID 01

Good when: never

Bad when: always

It is the reverse of RAID 10: it creates two RAID 0 arrays (A1, A2, A3, A4 and B1, B2, B3, B4) and then puts a RAID 1 mirror over the top. Losing any single disk breaks that disk's entire RAID 0 set, so the array survives only as long as the other set remains completely intact.
To be absolutely clear:
  • If you have a RAID10 array with 8 disks and one dies (we'll call it A1), then you'll have 6 disks that are still redundant and 1 (A2) without redundancy. If another disk dies, there's a 6-in-7 (roughly 86%) chance your array is still working.
  • If you have a RAID01 array with 8 disks and one dies (we'll call it A1), then the whole of stripe set A is broken: you'll have 4 disks (B1-B4) carrying the array with no redundancy at all and 3 disks (A2-A4) that no longer matter. If another disk dies, there's only a 3-in-7 (roughly 43%) chance your array is still working.
It provides no additional speed over RAID 10, but substantially less redundancy and should be avoided at all costs.
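A quick enumeration makes those odds concrete. This is a sketch under the simple assumption that the second failure hits any of the 7 remaining disks with equal probability:

from fractions import Fraction

# 8 disks. RAID 10: four mirrored pairs (A1,A2) (B1,B2) (C1,C2) (D1,D2).
# RAID 01: two striped sets A = A1..A4 and B = B1..B4, mirrored against each other.
# Assume A1 has already failed.
raid10_remaining = ["A2", "B1", "B2", "C1", "C2", "D1", "D2"]
raid10_fatal = {"A2"}                    # only losing A1's mirror partner kills the array
raid01_remaining = ["A2", "A3", "A4", "B1", "B2", "B3", "B4"]
raid01_fatal = {"B1", "B2", "B3", "B4"}  # set A is already broken; losing any B disk is fatal

def survival(remaining, fatal):
    return Fraction(len(remaining) - len(fatal), len(remaining))

print(f"RAID 10: {float(survival(raid10_remaining, raid10_fatal)):.0%} chance the array survives")  # ~86%
print(f"RAID 01: {float(survival(raid01_remaining, raid01_fatal)):.0%} chance the array survives")  # ~43%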

RAID 5

Good when: You want a balance of redundancy and disk space or have a large sequential write workload.

Bad when: You have a high random write workload or large drives.

RAID 5 uses a simple XOR operation to calculate parity. Upon a single drive failure, the information can be reconstructed from the remaining drives using the XOR operation on the known data.
However, this rebuilding process is very intensive and array performance can suffer heavily during the rebuild procedure. Given the lengthy amount of time it takes to perform this rebuild (days!) it is not usually recommended to use RAID5 on large drives, as the risk of data loss during a rebuild is increased relative to disk size. Because every sector on every disk is needed to calculate the data to be rebuilt on the replaced disk, there is an increased risk of data loss from an already present but previously undetected error, or a new failure from the stress of the rebuild, on one of those disks - either in the form of an unrecoverable read error, or silent data corruption (bit flipping).
Another caveat is the "write hole," which describes a condition where a failure of some kind (power, system, RAID controller, etc.) occurs mid-write, with new data being written to some, but not all, of a parity stripe. In this situation, it's not clear to the device what data can be trusted and what cannot - the entire stripe is rendered corrupt. Battery-backed write caches on RAID controllers are the main mitigation technique for this issue.
RAID 5 has a high disk write overhead on small writes. Writes smaller than a stripe width in size cause extra reads prior to the write, turning a single frontend IOP into 4 backend IOPs (read old data, read old parity, write new data, write new parity). The small write penalty is mitigated by having controller-based write-back caches capable of taking up the entire I/O write load of your system. However, once the cache fills up, all activity on the array will cease until enough data has been synced to the disks. RAID5 can also give better performance than RAID10 for large sequential writes since it has more spindles to distribute writes onto. The computational overhead of the parity calculation is hardly a concern nowadays - hardware I/O processors used in more recent RAID controllers are capable of calculating parity at the full wire speed of the interface.
However, RAID 5 is the most cost-effective way of adding redundant storage to an array, as it sacrifices the capacity of only one disk (e.g. 4x 1TB disks = 3TB of usable space). It requires a minimum of 3 disks.
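A minimal sketch of the XOR mechanism described above (byte-wise, ignoring real chunk sizes and rotation of the parity block) shows how parity is generated, how a lost member is rebuilt from the survivors, and where the small-write read-modify-write cycle comes from:

from functools import reduce

def xor_bytes(*chunks):
    # XOR corresponding bytes of equal-length chunks.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*chunks))

# Three data chunks on three disks; their parity lives on a fourth disk.
d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_bytes(d0, d1, d2)

# Disk holding d1 fails: rebuild its chunk from the remaining data plus parity.
rebuilt = xor_bytes(d0, d2, parity)
assert rebuilt == d1

# The small-write penalty: updating d1 alone requires reading the old d1 and the
# old parity (2 reads) before writing the new d1 and the new parity (2 writes).
new_d1 = b"bbbb"
new_parity = xor_bytes(parity, d1, new_d1)   # old parity XOR old data XOR new data
assert new_parity == xor_bytes(d0, new_d1, d2)
print("parity rebuilt and updated correctly")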
There has been a lot of controversy about striping with parity in the past; many people regard it as not being worth it if you need high array performance in terms of IOPS, which is a typical requirement for database applications or SAN systems used for virtualization. The Battle Against Any RAID Five (BAARF) initiative website provides a lot of information about the internal workings of striping with parity and its caveats.

RAID 6

Good when: You want a balance of redundancy and disk space or have a large sequential write workload.

Bad when: You have a high random write workload.

RAID 6 is similar to RAID 5 but it uses two disks' worth of parity instead of just one (the first is simple XOR parity, the second is an LFSR-based code), so you can lose two disks from the array with no data loss. The write penalty is higher than RAID 5 and you have one less disk of space, so this option is best geared towards arrays that do a lot of reads or large sequential writes and when RAID 10 isn't an option because of capacity.

RAID 50

Good when: You have a lot of disks that need to be in a single array and RAID 10 isn't an option because of capacity.

Bad when: You have so many disks that many simultaneous failures are possible before rebuilds complete. Or when you don't have many disks.

RAID 50 is a nested level, much like RAID 10. It combines two or more RAID 5 arrays and stripes data across them in a RAID 0. This offers both performance and multiple disk redundancy, as long as multiple disks are lost from different RAID 5 arrays.
In a RAID 50, disk capacity is n - x, where x is the number of RAID 5s that are striped across. For example, in the smallest possible RAID 50 (6x 1TB disks arranged as two RAID 5s that are then striped together), you would have 4TB of usable storage.

RAID 60

Good when: You have a similar use case to RAID 50, but need more redundancy.

Bad when: You don't have a substantial number of disks in the array.

RAID 6 is to RAID 60 as RAID 5 is to RAID 50. Essentially, you have more than one RAID 6 that data is then striped across in a RAID 0. This setup allows for up to two members of any individual RAID 6 in the set to fail without data loss. Rebuild times for RAID 60 arrays can be substantial, so it's usually a good idea to have one hot-spare for each RAID 6 member in the array.
In a RAID 60, disk capacity is n - 2x, where x is the number of RAID 6s that are striped across. For example, in the smallest possible RAID 60 (8x 1TB disks arranged as two RAID 6s that are then striped together), you would have 4TB of usable storage. As you can see, this gives the same amount of usable storage that a RAID 10 would give on an 8 member array. While RAID 60 would be slightly more redundant, the rebuild times would be substantially larger. Generally, you want to consider RAID 60 only if you have a large number of disks.
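The capacity arithmetic for these nested parity levels is simple enough to write down directly; a small sketch, assuming equal-size disks and x equal sub-arrays:

# Usable capacity in "disks' worth" of space.
# n = total disks, x = number of RAID 5 / RAID 6 sub-arrays being striped together.
def raid50_usable(n, x):
    return n - x        # each RAID 5 sub-array gives up one disk to parity

def raid60_usable(n, x):
    return n - 2 * x    # each RAID 6 sub-array gives up two disks to parity

print(raid50_usable(6, 2))   # 6x 1TB in two RAID 5s  -> 4 (TB usable)
print(raid60_usable(8, 2))   # 8x 1TB in two RAID 6s  -> 4 (TB usable)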

RAID-Z

Good when: You are using ZFS on a system that supports it.

Bad when: Performance demands hardware RAID acceleration.

RAID-Z is a bit complicated to explain since ZFS radically changes how storage and file systems interact. ZFS encompasses the traditional roles of volume management (RAID is a function of a volume manager) and file system. Because of this, ZFS can do RAID at the file's storage-block level rather than at the volume's stripe level. This is exactly what RAID-Z does: it writes the file's storage blocks across multiple physical drives, including a parity block for each set of stripes.
An example may make this much more clear. Say you have 3 disks in a ZFS RAID-Z pool with a block size of 4KB. Now you write a file to the system that is exactly 16KB. ZFS will split that into four 4KB blocks (as would a normal operating system); then it will calculate two blocks of parity. Those six blocks will be placed on the drives similarly to how RAID-5 would distribute data and parity. This is an improvement over RAID 5 in that there is no reading of existing data stripes to calculate the parity.
Another example builds on the previous. Say the file was only 4KB. ZFS will still have to write one parity block, but now the write load is reduced to 2 blocks. The third drive will be free to service any other concurrent requests. A similar effect will be seen anytime the file being written is not an exact multiple of the pool's block size multiplied by the number of drives less one (i.e. [File Size] is not a multiple of [Block Size] * [Drives - 1]).
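Sticking with the examples above (a 3-disk RAID-Z1 pool with 4KB blocks), here is a short sketch of the block counting - not ZFS's actual allocator, just the arithmetic described here:

import math

# Count data and parity blocks for a file on a RAID-Z1 pool. drives includes the
# parity drive; each full stripe holds (drives - 1) data blocks plus one parity
# block, and a partial stripe still needs its own parity block.
def raidz1_blocks(file_size, block_size=4096, drives=3):
    data_blocks = math.ceil(file_size / block_size)
    parity_blocks = math.ceil(data_blocks / (drives - 1))
    return data_blocks, parity_blocks

print(raidz1_blocks(16 * 1024))   # 16KB file -> (4, 2): six blocks total, as above
print(raidz1_blocks(4 * 1024))    # 4KB file  -> (1, 1): only two blocks written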
ZFS handling both Volume Management and File System also means you don't have to worry about aligning partitions or stripe-block sizes. ZFS handles all that automatically with the recommended configurations.
The nature of ZFS counteracts some of the classic RAID-5/6 caveats. All writes in ZFS are done in a copy-on-write fashion; all changed blocks in a write operation are written to a new location on disk, instead of overwriting the existing blocks. If a write fails for any reason, or the system fails mid-write, the write transaction either occurs completely after system recovery (with the help of the ZFS intent log) or does not occur at all, avoiding potential data corruption. Another issue with RAID-5/6 is potential data loss or silent data corruption during rebuilds; regular zpool scrub operations can help to catch data corruption or drive issues before they cause data loss, and checksumming of all data blocks will ensure that all corruption during a rebuild is caught.
The main disadvantage to RAID-Z is that it is still software RAID (and suffers from the same minor latency incurred by the CPU calculating the write load instead of letting a hardware HBA offload it). This may be resolved in the future by HBAs that support ZFS hardware acceleration.


REFERENCES
http://serverfault.com/questions/339128/what-are-the-different-widely-used-raid-levels-and-when-should-i-consider-them

Wednesday, August 1, 2012

xinetd Access Control Tools

SkyHi @ Wednesday, August 01, 2012

Many Linux systems run a super-server, or a software server that handles the initial stages of connection requests on behalf of other software servers. Super servers typically handle connections for small (or seldom-used) servers, such as Telnet, FTP, POP, IMAP, SWAT, SANE, and VNC, just to name a few. Each of those servers can be run standalone without the benefit of a super server, but running a super server typically results in reduced memory load when the servers it proxies for aren’t being accessed, albeit with some overhead and lag during connection attempts. (Some servers cannot be managed by a super server.)
One big advantage of using a super-server is the extra layer of security it provides. A modern super-server, such as xinetd, includes built-in access control features that can limit access to subjugated servers. Unfortunately, most out-of-the-box configurations don’t utilize the many access control features of xinetd, so this month, let’s look at how to enhance the security of your network services using xinetd as a watchful mediator.
xinetd Basics
One of xinetd's advantages over the older inetd is that xinetd supports an includedir directive in its main configuration file, /etc/xinetd.conf. When this line is present, all of the files in the specified directory (except for those whose names end in a tilde or that include a dot) are read and parsed as supplementary configuration files. Every major distribution that uses xinetd as its super-server includes an includedir directive in its main xinetd.conf file; the directive typically points to /etc/xinetd.d.
By dividing the configuration of xinetd into a collection of files, each server package can include or generate its own xinetd configuration file, such as /etc/xinetd.d/cups-lpd for the LPD server component of the Common Unix Printing System (CUPS) server. When the CUPS package is installed, xinetd is automatically configured to use the new server — or it can be, if the cups-lpd file activates the server. Frequently, packages leave the server inactive for security reasons.
To activate a newly-installed server, edit its xinetd configuration file. You’ll see an entry that looks something like this:
service printer
{
  socket_type = stream
  protocol    = tcp
  wait        = no
  user        = lp
  server      = /usr/lib/cups/daemon/cups-lpd
  disable     = yes
}
To enable the server, you must change the line disable=yes to read disable=no. You must then reload or restart xinetd, typically by typing /etc/init.d/xinetd reload or /etc/init.d/xinetd restart. (The exact path to the xinetd startup script varies from one system to another, though.) After making this change, the new server should be available. If it isn't, check your system log file (typically /var/log/messages) for error messages.
IP-Based Controls
One of the most basic types of xinetd access control restricts connections by IP address. In xinetd, these controls are implemented by the only_from and no_access directives, which look like this:
only_from = 192.168.78.0
no_access = 192.168.78.24 bad.example.com
These lines should be added to a server’s /etc/xinetd.d/ control file, anywhere between the opening brace ({) and the closing brace (}). In the example above access is granted to all computers in the 192.168.78.0/24 network, except for 192.168.78.24 and bad.example.com. If you use both the only_from and no_access directives, the one with the more specific machine description takes precedence. You can list multiple entries in either directive by separating the entries with spaces.
Computers may be specified in any of several ways:
*Machine IP address. The safest way is usually to specify a machine's IP address, such as the 192.168.78.24 in the preceding example.
*Network IP address. Multiple computers' IP addresses may be specified in several ways. One is to end the address with one or more "0" entries, as in 192.168.78.0 in the preceding example, which stands for 192.168.78.0/24. You can also specify the netmask explicitly, as in 192.168.78.0/24.
*Machine name. You can specify a hostname, as in bad.example.com in the preceding example. When you do so, xinetd looks up the hostname when it starts up or reloads its configuration. Thus, if the hostname changes while xinetd is running, the super server won’t catch this change.
*Network name. You can specify a network by using a name that appears in the /etc/networks file.
As a general rule, if a server should remain inaccessible to some or most computers on the Internet, using IP-based restrictions is a good starting point. The only_from directive alone is often appropriate. Using no_access alone is handy if you just want to restrict a few systems you know to be troublemakers. Using both directives together lets you grant access to entire networks while restricting access from certain machines. For instance, you might want to block access to a server from a router if that router shouldn’t be using the server, as a precaution in case the router is compromised from the outside.
Of course, if you omit both directives, no restrictions are implemented.
Network Interface Controls
Sometimes you may want to limit access to particular servers based on the network interface. For instance, on a router or firewall, you might want to give access to a server from the internal network but not from the external network. This technique can even be useful on a non-router system: you can give access to a server to the loopback interface, thus ensuring that only clients running on the server computer itself may access the server program. This technique can be handy for tools such as the Samba Web Administration Tool (SWAT), which provides web-based administration for Samba. You can use SWAT from the local machine without running the risk of having it compromised over the network.
Network interface controls are implemented at a lower level than IP-based controls. This makes the network interface controls more fool-proof, as they can’t be tricked by IP address spoofing, for instance.
To implement these controls, use the bind or interface directive, which are synonymous:
bind = 192.168.78.27
This directive binds the server to the interface that uses the 192.168.78.27 IP address. If a request comes in for the server from another interface, xinetd ignores it.
You can use this feature to have xinetd respond differently to different interfaces. Create two or more service definitions, each of which includes a bind directive for a particular interface. You can then tweak access control, logging, and other options for each interface.
For instance, you might set up temporal controls (described next) for a true network interface, but omit those same controls from the loopback interface. If you omit the bind and interface options, xinetd listens to all available network interfaces.
Temporal Controls
Sometimes a server should only be available at certain times. For instance, you might want to make a login server accessible to local users during business hours, but not late at night when the office should be empty. To accommodate such requirements, xinetd provides the access_times option, which takes a range of times on a 24-hour clock, separated by a dash (-):
access_times = 7:00-18:00 
This example enables the server from 7:00 AM to 6:00 PM. Note that these times are the initial access times. Somebody could access the server at 5:58 PM and remain logged in for hours; xinetd won't terminate an already established connection. If you don't use an access_times option, xinetd imposes no temporal restrictions.
Client Number Controls
When a particular software service becomes too popular, the load on the host computer can become so great as to make the service — and perhaps other services, too — useless to all users. Some servers provide configuration options to limit the number of simultaneous users to avoid such problems, and xinetd provides similar facilities for the servers it manages, whether or not they provide their own connection number controls. In fact, xinetd provides three throttling options: instances, per_source, and cps.
instances  = 50
per_source = 20
cps        = 5 2
The instances option limits the total number of simultaneous connections xinetd will accept for a server. The default value is UNLIMITED, which imposes no limit. Setting a numeric value causes xinetd to refuse connection attempts if the specified number of connections are already active.
The per_source option is similar to instances, but it applies only to a single client computer. For instance, the preceding combination of instances and per_source limits the total number of connections to 50, of which up to 20 may be from a single IP address.
The cps option limits the rate of incoming connections per second. It takes two arguments: the number of connections it accepts per second and the number of seconds to wait before accepting new connections if the limit is exceeded. With cps = 5 2, xinetd accepts up to five incoming connections per second; if more than that number arrive, the super-server refuses new connections for two seconds. This option can be helpful in limiting damage from denial-of-service (DoS) attacks and in controlling bursts of activity (such as a small web site being "Slashdotted"). The default is cps = 50 10.
Reasonable values for all of these options depend on the server being managed, the capabilities of the underlying host computer, the speed of the computer’s network connection, and your own tolerances for sluggish operation. You might set low values for big, sluggish servers running on slow hardware, and set higher values for small quick servers running on fast hardware. If you’re unsure what values to set, try monitoring normal activity. If your system’s performance isn’t too sluggish with the highest loads you see, try setting limits above the peak values you see.
CPU Load Controls
Another approach to managing the impact of incoming connections is to have xinetd monitor system load and accept or refuse connections based on activity. The main method of imposing such limits is to use the max_load option, which takes a one-minute CPU load average as its value:
max_load = 3.5
This example tells xinetd to refuse connections if the computer’s one-minute CPU load average exceeds 3.5. (The Linux uptime command displays 1-, 5-, and 15-minute load averages, if you want to check these manually.)
A load average of 1.0 means that programs are requesting enough CPU time to keep the CPU fully loaded at all times. Values below 1.0 mean that the CPU is sitting idle some of the time, while values above 1.0 indicate that programs are demanding more CPU time than is available, forcing Linux to ration CPU time. Reasonable load averages depend on the computer's speed and purpose. A very fast computer might perform well with load averages substantially above 1.0, but an old computer might perform poorly even with a load average of 1.0. (Bringing the load average below 1.0 won't improve the performance of any programs that remain running; a load this low usually means that programs are spending a lot of time waiting for input. Sluggish performance with lower CPU load averages typically means there's a bottleneck somewhere other than the CPU, such as the video card or hard disk.)
Setting the max_load option lets you block incoming connections if your computer is too busy. This limit has the advantage of taking into consideration all of the activity on the computer, even including standalone servers and local programs. Most other xinetd restrictions apply only to a specific server that it manages.
Several other system load options are supported by xinetd. These take the form rlimit_thing, where thing is the thing being limited. These are rlimit_cpu, rlimit_data, rlimit_stack, rlimit_rss, and rlimit_as, which impose limits on CPU time, data size, stack size, resident set size (RSS), and address space (AS) size, respectively.
The rlimit_cpu option limits the amount of CPU time, in seconds, that a server launched by xinetd may consume. The remaining options are all measures of memory use, of which rlimit_as is the most useful on Linux. These memory options all take a number of bytes as a value, although you can append a K or M to specify the memory use in kilobytes or megabytes, respectively. Once a server exceeds the specified limit, xinetd terminates it. These options are particularly handy if a server might be abused in such a way that it's likely to consume inordinate system resources, or if it's got a known bug that can cause its resource consumption to spiral out of control.
Putting It All Together
Wrapping up, consider Listing One, which shows a modified CUPS LPD server entry that implements several of the options just described.
Listing One: xinetd Configuration that demonstrates the super-server’s access control options
service printer
{
socket_type = stream
protocol = tcp
wait = no
user = lp
server = /usr/lib/cups/daemon/cups-lpd
disable = no
bind = 192.168.78.27
only_from = 192.168.78.0
no_access = 192.168.78.24 bad.example.com
access_times = 7:00-18:00
instances = 50
per_source = 20
max_load = 3.5
rlimit_cpu = 20
}
The order of options within the braces of a service definition is unimportant. Listing One's configuration is much like the default configuration, but it adds many options: it binds the server to a single network interface, accepts connections from just one set of IP addresses (with a couple of systems excluded), accepts connections only from 7:00 AM to 6:00 PM, allows just 50 instances to be active (20 from any one IP address), refuses connections if the CPU load is above 3.5, and limits the server to 20 seconds of CPU time.
A configuration such as the one in Listing One can improve overall security by limiting who can connect to the server, when a client may connect, and what sort of resources requests can consume.
Although xinetd options aren't a substitute for other security measures (such as iptables firewall rules and good password practices), they can be part of a complete security plan for your system, and are therefore well worth investigating further.

#man xinetd.conf


http://www.linux-mag.com/id/2402/