
Friday, January 7, 2011

Red Hat Enterprise Linux Technology capabilities and limits (supported[/theoretical])

http://www.redhat.com/rhel/compare/

Red Hat Enterprise Linux Technical Exceptions HP ProLiant & BladeSystem Server

http://h18000.www1.hp.com/products/servers/linux/supportmatrix/rhel/exceptions/rhel-exceptions.html

Red Hat Enterprise Linux Kernel Upgrade










Kernel Upgrade Introduction


In this KB, we will see how to upgrade the kernel on Red Hat Enterprise Linux systems.  We will show you how to perform automated upgrades of the kernel using the yum and rpm package management tools.

Before continuing on, a word of caution.  I recommend avoiding kernel updates unless absolutely necessary.  Generally speaking, if it's not broken, don't try to fix it.  So unless there is a vulnerability or a lack of hardware/driver support in the currently installed kernel on your system, I would recommend against installing a new/updated kernel.  Installing a new kernel could have widespread and unknown consequences for your system and/or might not be supported by your hardware/installed software.  Now that I've got that off my chest, I'll get off my soapbox so we can get to work!





Kernel Upgrade/Update on Red Hat Enterprise Linux


Red Hat uses a combination of tools such as yum for package retrieval and rpm for package management.  If your system is registered with the Red Hat Network (RHN), yum will automatically download and resolve dependencies for rpm-based installs.  If your system is not registered with RHN, you can still upgrade the kernel, but you will have to resolve any dependencies manually (downloading and installing any dependent rpms).  Yum will connect to the RHN and scan the repository for updated packages (including your kernel).  If there are updates available, yum will resolve their dependencies and mark the additional packages for download/installation.  Once yum has downloaded all the packages, it will begin installing them using rpm (this will configure and register each package with the OS).
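One practical note on the manual (non-RHN) path, offered as a hedged sketch rather than an official Red Hat procedure: when installing a kernel rpm by hand, use rpm -i (install) rather than rpm -U (upgrade), so the old kernel stays on disk as a boot fallback.  The package file name below is illustrative:

```shell
# Pick the rpm flag: kernels are installed side by side, everything else is upgraded.
pkg="kernel-2.6.18-128.1.14.el5.x86_64.rpm"   # hypothetical file name
case "$pkg" in
  kernel-*) flag="-ivh" ;;   # keep the previous kernel as a boot fallback
  *)        flag="-Uvh" ;;   # ordinary packages are upgraded in place
esac
echo "rpm $flag $pkg"        # the command you would run as root
```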

  1. Before we can use yum, we must first register the server with the RHN (Red Hat Network).



    Note: If you registered your OS at the time of installation, you can skip this step.



    Begin the registration process



    [root@RHEL01 ~]# rhn_register


    You'll be presented with a screen similar to the following, click Next to continue:

    Red Hat Network (RHN) Setup



    Enter your login information for the Red Hat Network (RHN) and click Next:

    Red Hat Network (RHN) Login



    Enter a profile name for the server and click Next:

    Red Hat Network (RHN) Profile



    Leave the defaults selected and click Next:

    Red Hat Network (RHN) Packages



    Click Next on the Send Profile page:

    Red Hat Network (RHN) Send Profile



    Your system's information will be sent/registered with the Red Hat Network (RHN):

    Red Hat Network (RHN) Sending Profile



    Click OK on the subscription details page:

    Red Hat Network (RHN) Subscription Details



    Click Finish to complete the registration with the RHN:

    Red Hat Network (RHN) Finish






  2. Check to see if there are updates available for the system



    Check for updates to the kernel package only:









    [root@RHEL01 ~]# yum check-update kernel
    Loaded plugins: rhnplugin




    kernel.x86_64 2.6.18-128.1.14.el5 rhel-x86_64-server-5
    [root@RHEL01 ~]#
    Note: If there are no updates available, no output will be produced



    Or



    Check for all packages (including the kernel):















    [root@RHEL01 ~]# yum check-update
    Loading "installonlyn" plugin
    Loading "rhnplugin" plugin
    Setting up repositories
    rhel-x86_64-server-5 100% |=========================| 1.3 kB 00:00
    Reading repository metadata in from local files
    primary.xml.gz 100% |=========================| 2.3 MB 00:01
    ################################################## 7019/7019










    Deployment_Guide-en-US.noarch 5.2-11 rhel-x86_64-serv
    NetworkManager.x86_64 1:0.7.0-4.el5_3 rhel-x86_64-serv
    ORBit2.x86_64 2.14.3-5.el5 rhel-x86_64-serv
    OUTPUT TRUNCATED
    kernel.x86_64 2.6.18-128.1.14.el5 rhel-x86_64-serv
    kernel-headers.x86_64 2.6.18-128.1.14.el5 rhel-x86_64-serv
    OUTPUT TRUNCATED
    [root@RHEL01 ~]#
    Note: yum returns a list of all available packages




  3. A quick check of the system reveals we are running on v2.6.18-8.el5 of the Linux kernel



















    [root@RHEL01 ~]# rpm -qi kernel
    Name : kernel Relocations : (not relocatable)
    Version : 2.6.18 Vendor : Red Hat, Inc.
    Release : 8.el5 Build Date : Fri 26 Jan 2007 02:47:08 PM EST
    Install Date : Wed 24 Jun 2009 8:50:56 AM EDT Build Host : ls20-bc1-14.build.redhat.com
    Group : System Environment/Kernel Source RPM : kernel-2.6.18-8.el5.src.rpm
    Size : 75875879 License : GPLv2
    Signature : DSA/SHA1, Fri 26 Jan 2007 09:12:24 PM EST, Key ID 5326810137017186
    Packager : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
    Summary : The Linux kernel (the core of the Linux operating system)
    Description :

    The kernel package contains the Linux kernel (vmlinuz), the core of any

    Linux operating system.  The kernel handles the basic functions

    of the operating system:  memory allocation, process allocation, device

    input and output, etc.
    [root@RHEL01 ~]#






    [root@RHEL01 ~]# uname -a

    Linux RHEL01 2.6.18-8.el5 #1 SMP Fri Jan 26 14:15:14 EST 2007 x86_64 x86_64 x86_64 GNU/Linux

    [root@RHEL01 ~]#
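    The check above can be condensed into a small comparison.  The version strings are hard-coded here for illustration; on a live system the running version would come from uname -r and the newest installed kernel from an rpm query:

```shell
# Compare the running kernel against the newest available one (sample values).
running="2.6.18-8.el5"          # on a real system: uname -r
latest="2.6.18-128.1.14.el5"    # on a real system: query rpm/yum for the newest kernel
if [ "$running" = "$latest" ]; then
  echo "already running the latest kernel"
else
  echo "update available: $running -> $latest"
fi
```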



  4. Install the new kernel using yum



    Update the kernel package only:
















    [root@RHEL01 ~]# yum update kernel
    Loading "rhnplugin" plugin
    Setting up Update Process
    Resolving Dependencies
    --> Running transaction check
    ---> Package kernel.x86_64 0:2.6.18-128.1.14.el5 set to be installed
    --> Finished Dependency Resolution
    .
    Dependencies Resolved















    ==========================================================
    Package Arch Version Repository Size
    ==========================================================
    Installing: kernel x86_64 2.6.18-128.1.14.el5 rhel-x86_64-server-5 17 M
    .
    Transaction Summary
    ==========================================================
    Install 1 Package(s)



    Update 0 Package(s)



    Remove 0 Package(s)



    .
    Total download size: 17 M
    Is this ok [y/N]: y


    Or



    Update all packages including the kernel:





















    [root@RHEL01 ~]# yum update
    Loading "installonlyn" plugin
    Loading "rhnplugin" plugin
    Setting up Update Process
    Setting up repositories
    rhel-x86_64-server-5 100% |=========================| 1.3 kB 00:00
    Reading repository metadata in from local files
    primary.xml.gz 100% |=========================| 2.3 MB 00:01
    ################################################## 7019/7019
    Resolving Dependencies
    --> Populating transaction set with selected packages. Please wait.
    ---> Downloading header for redhat-lsb to pack into transaction set.
    redhat-lsb-3.1-12.3.EL.x8 100% |=========================| 12 kB 00:00
    ---> Package redhat-lsb.x86_64 0:3.1-12.3.EL set to be updated













    OUTPUT TRUNCATED
    .
    Transaction Summary
    ==========================================================
    Install 33 Package(s)
    Update 270 Package(s)
    Remove 0 Package(s)
    .
    Total download size: 255 M
    Is this ok [y/N]: y


    Note: There may also be other available packages that need updating on the system (as seen above.)  Yum should take care of the kernel as well as any other packages and associated dependencies on the system.



    After answering yes to the continue question, yum will pull all the new packages and their dependencies from the RHN repository.  It will then configure them and update your boot loader to use the new kernel (a reboot is required).
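    The boot loader step can be sanity-checked before rebooting.  On RHEL 5 the relevant file is /boot/grub/grub.conf, where default= is a zero-based index into the title entries.  A self-contained sketch, with sample file contents standing in for the real grub.conf:

```shell
# Sample grub.conf contents; on a real system: cat /boot/grub/grub.conf
grubconf='default=0
timeout=5
title Red Hat Enterprise Linux Server (2.6.18-128.1.14.el5)
title Red Hat Enterprise Linux Server (2.6.18-8.el5)'

# default= selects which title entry boots; print that entry.
idx=$(echo "$grubconf" | awk -F= '/^default=/{print $2}')
echo "$grubconf" | grep '^title' | sed -n "$((idx + 1))p"
```

If the printed title is not the new kernel, the default= line was not updated and the system would boot the old kernel after the reboot.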




  5. The new kernel is now installed and ready for use.  Reboot the system to boot into the new kernel image





    [root@RHEL01 ~]# init 6

  6. Once the system has come back up, log in and verify you're running from the new kernel image





    [root@RHEL01 ~]# uname -a

    Linux RHEL01 2.6.18-128.1.14.el5 #1 SMP Mon Jun 1 15:52:58 EDT 2009 x86_64 x86_64 x86_64 GNU/Linux

    [root@RHEL01 ~]#



Good luck on the kernel upgrade!

Saturday, January 1, 2011

iptables firewall example (standalone)

[root@eloise ~]# cat /usr/bin/firewall

#!/bin/sh



IPT="/sbin/iptables"

xxx="x.x.x.x/28"

home="x.x.x.x/32"

anywhere="0/0"

extip="x.x.x.x/32"

extif="eth0"



echo -e "\n\nSETTING UP IPTABLES FIREWALL..."



# Flush old rules, old custom tables

$IPT --flush

$IPT --delete-chain



# Set default policies for all three default chains

$IPT -P INPUT DROP

$IPT -F INPUT

$IPT -P OUTPUT ACCEPT

$IPT -F OUTPUT

$IPT -P FORWARD DROP

$IPT -F FORWARD

$IPT -F -t nat



# Flush the user chain.. if it exists

if [ "`$IPT -L | grep drop-and-log-it`" ]; then

   $IPT -F drop-and-log-it

fi



# Delete all User-specified chains

$IPT -X



# Reset all IPTABLES counters

$IPT -Z



# Creating a DROP chain

$IPT -N drop-and-log-it

$IPT -A drop-and-log-it -j LOG --log-level info

$IPT -A drop-and-log-it -j REJECT



# Enable free use of loopback interfaces

$IPT -A INPUT -i lo -j ACCEPT

$IPT -A OUTPUT -o lo -j ACCEPT



# All TCP sessions should begin with SYN

$IPT -A INPUT -p tcp ! --syn -m state --state NEW -s $anywhere -j DROP



# Allow any related traffic coming back to the MASQ server in

#iptables -A INPUT -i eth0 -s $anywhere -d $extip -m state --state ESTABLISHED,RELATED -j ACCEPT



#  inbound TCP packets

$IPT -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

$IPT -A INPUT -p tcp  -m state --state NEW -s $xxx -j ACCEPT

$IPT -A INPUT -p tcp  -m state --state NEW -s $home -j ACCEPT

$IPT -A INPUT -p tcp --dport 20 -m state --state NEW -s $anywhere -j ACCEPT

$IPT -A INPUT -p tcp --dport 21 -m state --state NEW -s $anywhere -j ACCEPT

$IPT -A INPUT -p tcp --dport 25 -m state --state NEW -s $anywhere -j ACCEPT

$IPT -A INPUT -p tcp --dport 53 -m state --state NEW -s $home -j ACCEPT

$IPT -A INPUT -p tcp --dport 80 -m state --state NEW -s $anywhere -j ACCEPT

$IPT -A INPUT -p tcp --dport 110 -m state --state NEW -s $anywhere -j ACCEPT

$IPT -A INPUT -p tcp --dport 143 -m state --state NEW -s $anywhere -j ACCEPT

$IPT -A INPUT -p tcp --dport 443 -m state --state NEW -s $anywhere -j ACCEPT



#  inbound UDP packets

#$IPT -A INPUT -p udp -m udp --dport 123 -s $anywhere -j ACCEPT

$IPT -A INPUT -p udp -m udp --dport 53 -s $anywhere -j ACCEPT

$IPT -A INPUT -p udp -m udp --dport 21 -s $anywhere -j ACCEPT



#  inbound ICMP messages

$IPT -A INPUT -p ICMP --icmp-type 8 -s $xxx -j ACCEPT

$IPT -A INPUT -p ICMP --icmp-type 8 -s $home -j ACCEPT

$IPT -A INPUT -p ICMP --icmp-type 11 -s $xxx -j ACCEPT

$IPT -A INPUT -p ICMP --icmp-type 11 -s $home -j ACCEPT



# Catch all rule, all other incoming is denied and logged.

$IPT -A INPUT -s $anywhere -d $anywhere -j drop-and-log-it



# Accept outbound packets if you DROP OUTPUT traffic

#$IPT -I OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

#$IPT -A OUTPUT -p udp --dport 53 -m state --state NEW -j ACCEPT

#$IPT -A OUTPUT -o $extif -s $extip -d $anywhere -j ACCEPT



# anything else outgoing on remote interface is valid

$IPT -A OUTPUT -o $extif -s $extip -d $anywhere -j ACCEPT



# Catch all rule, all other outgoing is denied and logged.

$IPT -A OUTPUT -s $anywhere -d $anywhere -j drop-and-log-it



echo "DONE"
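As a side note, the block of near-identical TCP ACCEPT lines in the script could be collapsed into a loop.  A sketch that prints the commands instead of executing them (running them for real requires root):

```shell
IPT="/sbin/iptables"
anywhere="0/0"
# One rule per public TCP service: ftp-data, ftp, smtp, http, pop3, imap, https
for port in 20 21 25 80 110 143 443; do
  echo "$IPT -A INPUT -p tcp --dport $port -m state --state NEW -s $anywhere -j ACCEPT"
done
```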

Friday, December 31, 2010

Experiments and fun with the Linux disk cache

Hopefully you are now convinced that Linux didn't just eat your ram. Here are some interesting things you can do to learn how the disk cache works.

Effects of disk cache on application memory allocation

Since I've already promised that disk cache doesn't prevent applications from getting the memory they want, let's start with that. Here is a C app (munch.c) that gobbles up as much memory as it can, or to a specified limit:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char** argv) {
    int max = -1;
    int mb = 0;
    char* buffer;

    if (argc > 1)
        max = atoi(argv[1]);

    while ((buffer = malloc(1024 * 1024)) != NULL && mb != max) {
        memset(buffer, 0, 1024 * 1024);
        mb++;
        printf("Allocated %d MB\n", mb);
    }

    return 0;
}
Running out of memory isn't fun, but the OOM killer should end just this process and hopefully the rest will be unperturbed. We'll definitely want to disable swap for this, or the app will gobble up that as well.
$ sudo swapoff -a

$ free -m
total used free shared buffers cached
Mem: 1504 1490 14 0 24 809
-/+ buffers/cache: 656 848
Swap: 0 0 0

$ gcc munch.c -o munch

$ ./munch
Allocated 1 MB
Allocated 2 MB
(...)
Allocated 877 MB
Allocated 878 MB
Allocated 879 MB
Killed

$ free -m
total used free shared buffers cached
Mem: 1504 650 854 0 1 67
-/+ buffers/cache: 581 923
Swap: 0 0 0

$
Even though it said 14MB "free", that didn't stop the application from grabbing 879MB. Afterwards, the cache is pretty empty, but it will gradually fill up again as files are read and written. Give it a try.

Effects of disk cache on swapping

I also said that disk cache won't cause applications to use swap. Let's try that as well, with the same 'munch' app as in the last experiment. This time we'll run it with swap on, and limit it to a few hundred megabytes:
$ free -m
total used free shared buffers cached
Mem: 1504 1490 14 0 10 874
-/+ buffers/cache: 605 899
Swap: 2047 6 2041

$ ./munch 400
Allocated 1 MB
Allocated 2 MB
(...)
Allocated 399 MB
Allocated 400 MB

$ free -m
total used free shared buffers cached
Mem: 1504 1090 414 0 5 485
-/+ buffers/cache: 598 906
Swap: 2047 6 2041
munch ate 400MB of ram, which was taken from the disk cache without resorting to swap. Likewise, we can fill the disk cache again and it will not start eating swap either. If you run watch free -m in one terminal, and find . -type f -exec cat {} + > /dev/null in another, you can see that "cached" will rise while "free" falls. After a while it tapers off, but swap is never touched.¹

Clearing the disk cache

For experimentation, it's very convenient to be able to drop the disk cache. For this, we can use the special file /proc/sys/vm/drop_caches. By writing 3 to it, we can clear most of the disk cache:
$ free -m
total used free shared buffers cached
Mem: 1504 1471 33 0 36 801
-/+ buffers/cache: 633 871
Swap: 2047 6 2041

$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3

$ free -m
total used free shared buffers cached
Mem: 1504 763 741 0 0 134
-/+ buffers/cache: 629 875
Swap: 2047 6 2041
Notice how "buffers" and "cached" went down, free mem went up, and free+buffers/cache stayed the same.
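For reference, 3 is not the only value drop_caches accepts. A hedged sketch follows; the commands are printed rather than executed, since writing to the file needs root:

```shell
# drop_caches values: 1 = page cache only, 2 = dentries and inodes, 3 = both.
# Running "sync" first flushes dirty pages so more of the cache can be dropped.
echo "sync"
for val in 1 2 3; do
  echo "echo $val | sudo tee /proc/sys/vm/drop_caches"
done
```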

Effects of disk cache on load times

Let's make two test programs, one in Python and one in Java. Python and Java both come with pretty big runtimes, which have to be loaded in order to run the application. This is a perfect scenario for disk cache to work its magic.
$ cat hello.py
print "Hello World! Love, Python"

$ cat Hello.java
class Hello {
public static void main(String[] args) throws Exception {
System.out.println("Hello World! Regards, Java");
}
}

$ javac Hello.java

$ python hello.py
Hello World! Love, Python

$ java Hello
Hello World! Regards, Java

$
Our hello world apps work. Now let's drop the disk cache, and see how long it takes to run them.
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3

$ time python hello.py
Hello World! Love, Python

real 0m1.026s
user 0m0.020s
sys 0m0.020s

$ time java Hello
Hello World! Regards, Java

real 0m2.174s
user 0m0.100s
sys 0m0.056s

$
Wow. 1 second for Python, and 2 seconds for Java? That's a lot just to say hello. However, now all the files required to run them will be in the disk cache, so they can be fetched straight from memory. Let's try again:
$ time python hello.py
Hello World! Love, Python

real 0m0.022s
user 0m0.016s
sys 0m0.008s

$ time java Hello
Hello World! Regards, Java

real 0m0.139s
user 0m0.060s
sys 0m0.028s

$
Yay! Python now runs in just 22 milliseconds, while Java takes 139ms. That's around a 95% improvement! This works the same for every application!

Effects of disk cache on file reading

Let's make a big file and see how disk cache affects how fast we can read it. I'm making a 200MB file, but if you have less free ram, you can adjust it.
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3

$ free -m
total used free shared buffers cached
Mem: 1504 546 958 0 0 85
-/+ buffers/cache: 461 1043
Swap: 2047 6 2041

$ dd if=/dev/zero of=bigfile bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 6.66191 s, 31.5 MB/s

$ ls -lh bigfile
-rw-r--r-- 1 vidar vidar 200M 2009-04-25 12:30 bigfile

$ free -m
total used free shared buffers cached
Mem: 1504 753 750 0 0 285
-/+ buffers/cache: 468 1036
Swap: 2047 6 2041

$
Since the file was just written, it will go in the disk cache. The 200MB file caused a 200MB bump in "cached". Let's read it, clear the cache, and read it again to see how fast it is:
$ time cat bigfile > /dev/null

real 0m0.139s
user 0m0.008s
sys 0m0.128s

$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3

$ time cat bigfile > /dev/null

real 0m8.688s
user 0m0.020s
sys 0m0.336s

$
That's more than fifty times faster!

Conclusions

The Linux disk cache is very unobtrusive. It uses spare memory to greatly increase disk access speeds, without taking any memory away from applications. A fully used store of ram on Linux is efficient hardware use, not a warning sign.
LinuxAteMyRam.com was presented by VidarHolen.net

1. This is somewhat oversimplified. While newly allocated memory will always be taken from the disk cache instead of swap, Linux can be configured to preemptively swap out other unused applications in the background to free up memory for cache. This is tunable through the 'swappiness' setting, accessible through /proc/sys/vm/swappiness.

A server might want to swap out unused apps to speed up disk access of running ones (making the system faster), while a desktop system might want to keep apps in memory to prevent lag when the user finally uses them (making the system more responsive). This is the subject of much debate.

linux ate my RAM

What's going on?

Linux is borrowing unused memory for disk caching. This makes it look like you are low on memory, but you are not! Everything is fine!

Why is it doing this?

Disk caching makes the system much faster! There are no downsides, except for confusing newbies. It does not take memory away from applications in any way, ever!



What if I want to run more applications?

If your applications want more memory, they just take back a chunk that the disk cache borrowed. Disk cache can always be given back to applications immediately! You are not low on ram!

Do I need more swap?

No, disk caching only borrows the ram that applications don't currently want. It will not use swap. If applications want more memory, they just take it back from the disk cache. They will not start swapping.

How do I stop Linux from doing this?

You can't disable disk caching. The only reason anyone ever wants to disable disk caching is because they think it takes memory away from their applications, which it doesn't! Disk cache makes applications load faster and run smoother, but it NEVER EVER takes memory away from them! Therefore, there's absolutely no reason to disable it!

Why do top and free say all my ram is used if it isn't?

This is just a misunderstanding of terms. Both you and Linux agree that memory taken by applications is "used", while memory that isn't used for anything is "free". But what do you call memory that is both used for something and available for applications?

You would call that "free", but Linux calls it "used".






Memory that is                                       You'd call it   Linux calls it
taken by applications                                Used            Used
available for applications, and used for something   Free            Used
not used for anything                                Free            Free
This "something" is what top and free calls "buffers" and "cached". Since your and Linux's terminology differs, you think you are low on ram when you're not.

How do I see how much free ram I really have?

To see how much ram is free to use for your applications, run free -m and look at the row that says "-/+ buffers/cache" in the column that says "free". That is your answer in megabytes:

$ free -m
total used free shared buffers cached
Mem: 1504 1491 13 0 91 764
-/+ buffers/cache: 635 869
Swap: 2047 6 2041
$
If you don't know how to read the numbers, you'll think the ram is 99% full when it's really just 42%.
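The same arithmetic can be done by hand from /proc/meminfo, which is where free gets its numbers: memory truly available to applications is roughly MemFree + Buffers + Cached. A self-contained sketch, with sample values standing in for the real file:

```shell
# Sample /proc/meminfo fields (values in kB); on a real system read the file itself.
meminfo='MemFree:         33792 kB
Buffers:         36864 kB
Cached:         820224 kB'

echo "$meminfo" | awk '
  /^MemFree:/ {free=$2} /^Buffers:/ {buf=$2} /^Cached:/ {cache=$2}
  END {printf "really free: %d MB\n", (free + buf + cache) / 1024}'
```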

How can I verify these things?

See this page for more details and how you can experiment with disk cache.

Wednesday, December 29, 2010

Troubleshooting Linux with syslog

Introduction

There are hundreds of Linux applications on the market, each with their own configuration files and help pages. This variety makes Linux vibrant, but it also makes Linux system administration daunting. Fortunately, in most cases, Linux applications use the syslog utility to export all their errors and status messages to files located in the /var/log directory.

This can be invaluable in correlating the timing and causes of related events on your system. It is also important to know that applications frequently don't display errors on the screen, but will usually log them somewhere. Knowing the precise message that accompanies an error can be vital in researching malfunctions in product manuals, online documentation, and Web searches.

syslog, and the logrotate utility that cleans up log files, are both relatively easy to configure, but they frequently don't get their fair share of coverage in most texts. I've included syslog here as a dedicated chapter to both emphasize its importance to your Linux knowledge and prepare you with a valuable skill that will help you troubleshoot the various Linux applications presented throughout the book.

syslog 

 

syslog is a utility for tracking and logging all manner of system messages, from the merely informational to the extremely critical. Each system message sent to the syslog server has two descriptive labels associated with it that make the message easier to handle.

  • The first describes the function (facility) of the application that generated it. For example, applications such as mail and cron generate messages with easily identifiable facilities named mail and cron.

  • The second describes the degree of severity of the message. There are eight in all and they are listed in Table 5-1:

You can configure syslog's /etc/rsyslog.conf configuration file to place messages of differing severities and facilities in different files. This procedure will be covered next.
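The two labels combine into the "facility.severity" selectors used in that file. A trivial shell sketch of how such a selector splits into its parts:

```shell
# A selector like "mail.info" pairs a facility with a minimum severity.
selector="mail.info"
echo "facility: ${selector%.*}"    # the part before the dot
echo "severity: ${selector#*.}"    # the part after the dot
```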





Table 5-1 Syslog Severity Levels











Severity Level   Keyword          Description
0                emergencies      System unusable
1                alerts           Immediate action required
2                critical         Critical condition
3                errors           Error conditions
4                warnings         Warning conditions
5                notifications    Normal but significant conditions
6                informational    Informational messages
7                debugging        Debugging messages

The /etc/rsyslog.conf File

The files to which syslog writes each type of message it receives are set in the /etc/rsyslog.conf configuration file. In older versions of Fedora this file was named /etc/syslog.conf.

This file consists of two columns. The first lists the facilities and severities of messages to expect and the second lists the files to which they should be logged. By default, RedHat/Fedora's /etc/rsyslog.conf file is configured to put most of the messages in the file /var/log/messages. Here is a sample:

*.info;mail.none;authpriv.none;cron.none           /var/log/messages
In this case, all messages of severity "info" and above are logged, but none from the mail, cron or authentication facilities/subsystems. You can make this logging even more sensitive by replacing the line above with one that captures all messages from debug severity and above in the /var/log/messages file. This example may be more suitable for troubleshooting.

*.debug                                          /var/log/messages
In this example, all debug severity messages, except those from the auth, authpriv, news and mail facilities, are logged to the /var/log/debug file in caching mode. Notice how you can spread the configuration syntax across several lines using the backslash (\) symbol at the end of each line.

*.=debug;\
auth,authpriv.none;\
news.none;mail.none -/var/log/debug
Here we see the /var/log/messages file configured in caching mode to receive only info, notice and warning messages, except for those from the auth, authpriv, cron, daemon, mail and news facilities.

*.=info;*.=notice;*.=warn;\
auth,authpriv.none;\
cron,daemon.none;\
mail,news.none -/var/log/messages
You can even have certain types of messages sent to the screens of all logged-in users. In this example, messages of severity emergency and above trigger this type of notification. The file definition is simply replaced by an asterisk to make this occur.

*.emerg                         *
Certain applications will additionally log to their own application specific log files and directories independent of the syslog.conf file. Here are some common examples:

Files:

/var/log/maillog             : Mail
/var/log/httpd/access_log  : Apache web server page access logs
Directories:

/var/log
/var/log/samba  : Samba messages
/var/log/mrtg  : MRTG messages
/var/log/httpd  : Apache webserver messages
Note: In some older versions of Linux the /etc/rsyslog.conf file was very sensitive to spaces and would recognize only tabs. The use of spaces in the file would cause unpredictable results. Check the formatting of your /etc/rsyslog.conf file to be safe.

Activating Changes to the syslog Configuration File

Changes to /etc/rsyslog.conf will not take effect until you restart syslog. Issue this command to do so:

[root@bigboy tmp]# service rsyslog restart
In older versions of Fedora, this would be:

[root@bigboy tmp]# service syslog restart
This is slightly different with Ubuntu / Debian systems:

root@u-bigboy:~# /etc/init.d/sysklogd restart

How to View New Log Entries as They Happen

If you want to get new log entries to scroll on the screen as they occur, then you can use this command:

[root@bigboy tmp]# tail -f /var/log/messages
Similar commands can be applied to all log files. This is probably one of the best troubleshooting tools available in Linux. Another good command to use apart from tail is grep. grep will help you search for all occurrences of a string in a log file; you can pipe it through the more command so that you only get one screen at a time. Here is an example:

[root@bigboy tmp]# grep string /var/log/messages | more
You can also just use the plain old more command to see one screen at a time of the entire log file without filtering with grep. Here is an example:

[root@bigboy tmp]# more /var/log/messages




Logging syslog Messages to a Remote Linux Server

Logging your system messages to a remote server is a good security practice. With all servers logging to a central syslog server, it becomes easier to correlate events across your company. It also makes covering up mistakes or malicious activities harder because the purposeful deletion of log files on a server cannot simultaneously occur on your logging server, especially if you restrict the user access to the logging server.





Configuring the Linux Syslog Server

By default syslog doesn't expect to receive messages from remote clients. Here's how to configure your Linux server to start listening for these messages.

As we saw previously, syslog checks its /etc/rsyslog.conf file to determine the expected names and locations of the log files it should create. It also checks the file /etc/sysconfig/syslog to determine the various modes in which it should operate. Syslog will not listen for remote messages unless the SYSLOGD_OPTIONS variable in this file has a -r included in it as shown below.

# Options to syslogd
# -m 0 disables 'MARK' messages.
# -r enables logging from remote machines
# -x disables DNS lookups on messages received with -r
# See syslogd(8) for more details

SYSLOGD_OPTIONS="-m 0 -r"

# Options to klogd
# -2 prints all kernel oops messages twice; once for klogd to decode, and
# once for processing with 'ksymoops'
# -x disables all klogd processing of oops messages entirely
# See klogd(8) for more details

KLOGD_OPTIONS="-2"
Note: In Debian / Ubuntu systems you have to edit the syslog startup script /etc/init.d/sysklogd directly and make the SYSLOGD variable definition become "-r".

# Options for start/restart the daemons
# For remote UDP logging use SYSLOGD="-r"
#
#SYSLOGD="-u syslog"
SYSLOGD="-r"
You will have to restart syslog on the server for the changes to take effect. The server will now start to listen on UDP port 514, which you can verify using either one of the following netstat command variations.

[root@bigboy tmp]# netstat -a | grep syslog
udp 0 0 *:syslog *:*
[root@bigboy tmp]# netstat -an | grep 514
udp 0 0 0.0.0.0:514 0.0.0.0:*
[root@bigboy tmp]#
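If the logging server also runs a host firewall (such as the iptables script elsewhere on this blog), inbound UDP 514 must be allowed from the clients. A sketch, printed rather than executed; the client subnet below is a placeholder for your own network:

```shell
# Hypothetical client subnet; substitute your own network range.
clients="192.168.1.0/24"
cmd="/sbin/iptables -A INPUT -p udp --dport 514 -s $clients -j ACCEPT"
echo "$cmd"    # run as root on the syslog server
```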

Configuring the Linux Client

The syslog server is now expecting to receive syslog messages. You have to configure your remote Linux client to send messages to it. This is done by editing the /etc/hosts file on the Linux client named smallfry. Here are the steps:

1) Determine the IP address and fully qualified hostname of your remote logging host.

2) Add an entry in the /etc/hosts file in the format:

IP-address    fully-qualified-domain-name    hostname    "loghost"
Example:

192.168.1.100    bigboy.my-site.com    bigboy     loghost
Now your /etc/hosts file has a nickname of "loghost" for server bigboy.

3) The next thing you need to do is edit your /etc/rsyslog.conf file to make the syslog messages get sent to your new loghost nickname.

*.debug                                       @loghost
*.debug /var/log/messages
You have now configured all debug messages and higher to be logged to both server bigboy ("loghost") and the local file /var/log/messages. Remember to restart syslog to get the remote logging started.

You can now test to make sure that the syslog server is receiving the messages with a simple test such as restarting the lpd printer daemon and making sure the remote server sees the messages.

Linux Client

[root@smallfry tmp]# service lpd restart
Stopping lpd: [ OK ]
Starting lpd: [ OK ]
[root@smallfry tmp]#
Linux Server

[root@bigboy tmp]# tail /var/log/messages
...
...
Apr 11 22:09:35 smallfry lpd: lpd shutdown succeeded
Apr 11 22:09:39 smallfry lpd: lpd startup succeeded
...
...
[root@bigboy tmp]#
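If you don't have a convenient daemon to restart, logger(1) can generate a test message from the client instead. The commands below are printed for illustration; run the first on the client and the second on the server:

```shell
# logger injects a message into syslog with an optional tag and priority.
client_cmd="logger -t logtest -p user.info 'remote syslog test'"
server_cmd="tail -1 /var/log/messages"
echo "client: $client_cmd"
echo "server: $server_cmd   # should show the logtest line"
```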

Syslog Configuration and Cisco Network Devices

syslog reserves facilities "local0" through "local7" for log messages received from remote servers and network devices. Routers, switches, firewalls and load balancers each logging with a different facility can each have their own log files for easy troubleshooting. Appendix 4 has examples of how to configure syslog to do this with Cisco devices using separate log files for the routers, switches, PIX firewalls, CSS load balancers and LocalDirectors.





Logrotate

The Linux utility logrotate renames and reuses system error log files on a periodic basis so that they don't occupy excessive disk space.





The /etc/logrotate.conf File

This is logrotate's general configuration file in which you can specify the frequency with which the files are reused.

  • You can specify either a weekly or daily rotation parameter. In the case below the weekly option is commented out with a #, allowing for daily updates.

  • The rotate parameter specifies the number of copies of log files logrotate will maintain. In the case below, the 4-copy option is commented out with a #, and 7 copies are kept instead.

  • The create parameter creates a new log file after each rotation

Therefore, our sample configuration file will create daily archives of all the logfiles and store them for seven days. The files will have the following names with, logfile being current active version:

logfile
logfile.0
logfile.1
logfile.2
logfile.3
logfile.4
logfile.5
logfile.6
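The renaming scheme above can be sketched as a shift-and-recreate cycle. This is a simplified illustration of what a daily run with rotate 7 and create effectively does, not logrotate's actual implementation, and it runs entirely in a throwaway directory:

```shell
# Simplified illustration of one daily rotation with "rotate 7" and "create".
# All file names are scratch files in a temporary directory.
dir=$(mktemp -d)
cd "$dir"
for i in 0 1 2 3 4 5 6; do echo "day $i" > "logfile.$i"; done
echo "today" > logfile

rm -f logfile.6                        # the oldest copy falls off the end
for i in 5 4 3 2 1 0; do               # shift every archive up by one
    [ -f "logfile.$i" ] && mv "logfile.$i" "logfile.$((i+1))"
done
mv logfile logfile.0                   # current file becomes the newest archive
: > logfile                            # "create" makes a fresh empty file

ls logfile*
```

After the run, logfile is empty and logfile.0 holds what was the active log.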




Sample Contents of /etc/logrotate.conf

# rotate log files weekly
#weekly

# rotate log files daily
daily

# keep 4 weeks worth of backlogs
#rotate 4

# keep 7 days worth of backlogs
rotate 7

# create new (empty) log files after rotating old ones
create




The /etc/logrotate.d Directory

Most Linux applications that use syslog place an additional configuration file in this directory to specify the names of the log files to be rotated. It is good practice to verify that every new application whose logs you want rotated has a configuration file in this directory. Here are some sample files that define the specific files to be rotated for each application.

Here is an example of a custom file located in this directory that rotates files with the .tgz extension which are located in the /data/backups directory. The parameters in this file will override the global defaults in the /etc/logrotate.conf file. In this case, the rotated files won't be compressed, they'll be held for 30 days only if they are not empty, and they will be given file permissions of 600 for user root.

/data/backups/*.tgz {

daily
rotate 30
nocompress
missingok
notifempty
create 0600 root root
}


Note: In Debian / Ubuntu systems the /etc/cron.daily/sysklogd script reads the /etc/rsyslog.conf file and rotates any log files it finds configured there. This eliminates the need to create rotation configuration files for the common system log files in the /etc/logrotate.d directory. As the script resides in the /etc/cron.daily directory, it runs automatically every 24 hours. In Fedora / Red Hat systems this script is replaced by the /etc/cron.daily/logrotate script, which does not read the syslog configuration file and relies instead on the contents of the /etc/logrotate.d directory.

Activating logrotate

The logrotate settings in the previous section will not take effect until you force a rotation with the logrotate command, giving it the main configuration file as the argument:

[root@bigboy tmp]# logrotate -f /etc/logrotate.conf
If you want logrotate to process only a specific configuration file, and not all of them, then issue the logrotate command with just that filename as the argument, like this:

[root@bigboy tmp]# logrotate -f /etc/logrotate.d/syslog

Compressing Your Log Files

On busy Web sites the size of your log files can become quite large. Compression can be activated by editing the logrotate.conf file and adding the compress option.

#
# File: /etc/logrotate.conf
#

# Activate log compression

compress
The log files will then start to become archived with the gzip utility, each file having a .gz extension.

[root@bigboy tmp]# ls /var/log/messages*
/var/log/messages /var/log/messages.1.gz /var/log/messages.2.gz
/var/log/messages.3.gz /var/log/messages.4.gz /var/log/messages.5.gz
/var/log/messages.6.gz /var/log/messages.7.gz
[root@bigboy tmp]#
Viewing the contents of the files still remains easy because the zcat command can quickly output their contents to the screen. Use the command with the compressed file's name as the argument as seen below.

[root@bigboy tmp]# zcat /var/log/messages.1.gz
...
...
Nov 15 04:08:02 bigboy httpd: httpd shutdown succeeded
Nov 15 04:08:04 bigboy httpd: httpd startup succeeded
Nov 15 04:08:05 bigboy sendmail[6003]: iACFMLHZ023165: to=<tvaughan@clematis4spiders.info>,
delay=2+20:45:44, xdelay=00:00:02, mailer=esmtp, pri=6388168,
relay=www.clematis4spiders.info. [222.134.66.34], dsn=4.0.0,
stat=Deferred: Connection refused by www.clematis4spiders.info.
[root@bigboy tmp]#
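Because the archives are plain gzip files, a scratch-directory sketch shows how zcat combines with grep to search them; the log lines here are made up for the demonstration:

```shell
# Demonstration in a scratch directory; the message text is invented.
dir=$(mktemp -d)
echo "Nov 15 04:08:02 bigboy httpd: httpd shutdown succeeded" > "$dir/messages.1"
gzip "$dir/messages.1"                    # becomes messages.1.gz
echo "Nov 16 01:00:00 bigboy sshd: session opened" > "$dir/messages"

# Search a compressed archive without unpacking it first
zcat "$dir/messages.1.gz" | grep httpd
```

For searching plain and compressed files together in one pass, zgrep (part of the gzip tools on most distributions) accepts both.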

syslog-ng

The more recent syslog-ng application combines the features of logrotate and syslog to create a much more customizable and feature rich product. This can be easily seen in the discussion of its configuration file that follows.

The /etc/syslog-ng/syslog-ng.conf file

The main configuration file for syslog-ng is the /etc/syslog-ng/syslog-ng.conf file, but only rudimentary help on its keywords can be found in the Linux man pages.

[root@bigboy tmp]# man syslog-ng.conf
Don’t worry, we’ll soon explore how much more flexible syslog-ng can be when compared to regular syslog. 

Simple Server Side Configuration for Remote Clients

Figure 5-1 has a sample syslog-ng.conf file and outlines some key features. The options section that covers global characteristics is fully commented, but it is the source, destination and log sections that show the real strength of syslog-ng's customizability.





Figure 5-1 A Sample syslog-ng.conf File

options {

# Number of syslog lines stored in memory before being written to files
sync (0);

# Syslog-ng uses queues
log_fifo_size (1000);

# Create log directories as needed
create_dirs (yes);

# Make the group "logs" own the log files and directories
group (logs);
dir_group (logs);

# Set the file and directory permissions
perm (0640);
dir_perm (0750);

# Check client hostnames for valid DNS characters
check_hostname (yes);

# Specify whether to trust hostname in the log message.
# If "yes", then it is left unchanged, if "no" the server replaces
# it with client's DNS lookup value.
keep_hostname (yes);

# Use DNS fully qualified domain names (FQDN)
# for the names of log file folders
use_fqdn (yes);
use_dns (yes);

# Cache DNS entries for up to 1000 hosts for 12 hours
dns_cache (yes);
dns_cache_size (1000);
dns_cache_expire (43200);

};


# Define all the sources of localhost generated syslog
# messages and label them "s_localhost"
source s_localhost {
pipe ("/proc/kmsg" log_prefix("kernel: "));
unix-stream ("/dev/log");
internal();
};

# Define all the sources of network generated syslog
# messages and label them "s_network"
source s_network {
tcp(max-connections(5000));
udp();
};

# Define the destination "d_localhost" log directory
destination d_localhost {
file ("/var/log/syslog-ng/$YEAR.$MONTH.$DAY/localhost/$FACILITY.log");
};

# Define the destination "d_network" log directory
destination d_network {
file ("/var/log/syslog-ng/$YEAR.$MONTH.$DAY/$HOST/$FACILITY.log");
};

# Any logs that match the "s_localhost" source should be logged
# in the "d_localhost" directory

log { source(s_localhost);
destination(d_localhost);
};

# Any logs that match the "s_network" source should be logged
# in the "d_network" directory

log { source(s_network);
destination(d_network);
};


In our example, the first source is labeled s_localhost. It includes all system messages sent to the Linux /dev/log device (one of syslog's data sources), all messages that syslog-ng views as being of an internal nature, and everything it intercepts from the /proc/kmsg kernel message file, to which it adds the prefix "kernel: ".

Unlike a regular syslog server, which listens for client messages only on UDP port 514, syslog-ng also listens on TCP port 514. The second source is labeled s_network; it includes all syslog messages obtained from UDP sources and limits TCP syslog connections to 5000. Limiting the number of connections helps regulate system load in the event that some syslog client begins to inundate your server with messages.

Our example also has two destinations for syslog messages, one named d_localhost, the other, d_network. These examples show the flexibility of syslog-ng in using variables. The $YEAR, $MONTH and $DAY variables map to the current year, month and day in YYYY, MM and DD format respectively. Therefore the example:

/var/log/syslog-ng/$YEAR.$MONTH.$DAY/$HOST/$FACILITY.log
refers to a directory called /var/log/syslog-ng/2005.07.09 when messages arrive on July 9, 2005. The $HOST variable refers to the hostname of the syslog client and will map to the client's IP address if DNS services are deactivated in the options section of the syslog-ng.conf file. Similarly the $FACILITY variable refers to the facility of the syslog messages that arrive from that host.
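You can preview the directory that today's messages would land in by expanding the same pattern from the shell; syslog-ng performs this substitution itself, and the host and facility names below are just examples:

```shell
# Build the path syslog-ng would use today for a host named "smallfry"
# logging with facility "local7" (both names are example values).
today=$(date +%Y.%m.%d)                  # $YEAR.$MONTH.$DAY
echo "/var/log/syslog-ng/$today/smallfry/local7.log"
```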

Using syslog-ng in Large Data Centers

Figure 5-2 has a sample syslog-ng.conf file snippet that defines some additional features that may be of interest in a data center environment.

Figure 5-2 More Specialized syslog-ng.conf Configuration

options {

# Number of syslog lines stored in memory before being written to files
sync (100);
};


# Define all the sources of network generated syslog
# messages and label it "s_network_1"
source s_network_1 {
udp(ip(192.168.1.201) port(514));
};

# Define all the sources of network generated syslog
# messages and label it "s_network_2"
source s_network_2 {
udp(ip(192.168.1.202) port(514));
};

# Define the destination "d_network_1" log directory
destination d_network_1 {
file ("/var/log/syslog-ng/servers/$YEAR.$MONTH.$DAY/$HOST/$FACILITY.log");
};

# Define the destination "d_network_2" log directory
destination d_network_2 {
file ("/var/log/syslog-ng/network/$YEAR.$MONTH.$DAY/$HOST/$FACILITY.log");
};

# Define the destination "d_network_2B" log directory
destination d_network_2B {
file ("/var/log/syslog-ng/network/all/network.log");
};

# Any logs that match the "s_network_1" source should be logged
# in the "d_network_1" directory

log { source(s_network_1);
destination(d_network_1);
};

# Any logs that match the "s_network_2" source should be logged
# in the "d_network_2" directory

log { source(s_network_2);
destination(d_network_2);
};

# Any logs that match the "s_network_2" source should be logged
# in the "d_network_2B" directory also

log { source(s_network_2);
destination(d_network_2B);
};
In this case we have configured syslog-ng to:

  1. Listen on IP address 192.168.1.201 as defined in the source s_network_1. Messages arriving at this address will be logged to a subdirectory of /var/log/syslog-ng/servers/ arranged by date as specified by destination d_network_1. As you can guess, this address and directory will be used by all servers in the data center.

  2. Listen on IP address 192.168.1.202 as defined in the source s_network_2. Messages arriving at this address will be logged to a subdirectory of /var/log/syslog-ng/network/ arranged by date as specified by d_network_2. This will be the IP address and directory to which network devices would log.

  3. Listen on IP address 192.168.1.202 as defined in the source s_network_2. Messages arriving at this address will also be logged to the single file /var/log/syslog-ng/network/all/network.log as part of destination d_network_2B. This will be one file to which all network devices log. Server failures are usually isolated to single servers, whereas network failures tend to cascade across many devices. The advantage of searching a single file is that it makes it easier to determine the exact sequence of events.

  4. As there could be many devices logging to the syslog-ng server, the sync option is set to write data to disk only after receiving 100 syslog messages. Constant receipt of syslog messages can have a significant impact on your system’s disk performance. This option allows you to queue the messages in memory for less frequent disk updates.

Now that you have an understanding of how to configure syslog-ng it’s time to see how you install it.

Installing syslog-ng

You can install syslog-ng using one of two methods depending on your version of Linux.

Using RPM Files

The syslog-ng and rsyslog packages cannot be installed at the same time; you have to uninstall one in order for the other to work. Here's how you can install syslog-ng using RPM package files.

1. Uninstall rsyslog using the rpm command. Some other RPMs depend on rsyslog, so you will have to ignore dependencies with the --nodeps flag.

[root@bigboy tmp]# rpm -e --nodeps rsyslog
2. Install syslog-ng using yum.

[root@bigboy tmp]# yum -y install syslog-ng
3. Start the new syslog-ng daemon immediately and make sure it will start on the next reboot.

[root@bigboy tmp]# chkconfig syslog-ng on
[root@bigboy tmp]# service syslog-ng start
Starting syslog-ng: [ OK ]
[root@bigboy tmp]#
Your new syslog-ng package is now up and running and ready to go!

Using tar files

The most recent syslog-ng and its companion eventlog tar files can be downloaded from the www.balabit.com website. The installation procedure is straightforward, but you will need to have the Linux gcc C programming language compiler preinstalled to be successful. Here are the steps.

1. Download the tar files from the BalaBit website. In this case we have browsed the website beforehand and know the exact URLs to use with the wget command.

[root@zippy tmp]# wget http://www.balabit.com/downloads/syslog-ng/2.0/src/eventlog-0.2.5.tar.gz
--12:34:17-- wget http://www.balabit.com/downloads/syslog-ng/2.0/src/eventlog-0.2.5.tar.gz
=> `eventlog-0.2.5.tar.gz'
...
...
...

12:34:19 (162.01 KB/s) - `eventlog-0.2.5.tar.gz' saved [345231]

[root@zippy tmp]# wget http://www.balabit.com/downloads/syslog-ng/2.0/src/syslog-ng-2.0.0.tar.gz
--12:24:21-- wget http://www.balabit.com/downloads/syslog-ng/2.0/src/syslog-ng-2.0.0.tar.gz
=> ` syslog-ng-2.0.0.tar.gz'
...
...
...

12:24:24 (156.15 KB/s) - ` syslog-ng-2.0.0.tar.gz' saved [383589]

[root@zippy tmp]#
2. Install the prerequisite glib libraries.

[root@zippy tmp]# yum -y install glib
3. Using the tar command, extract the files from the prerequisite eventlog archive, then use the configure, make and make install commands to install them. Pay special attention to the output of the configure command to make sure that all the pre-installation tests pass. If not, install the packages the error messages request and then start again.

[root@zippy tmp]# tar -xzf eventlog-0.2.5.tar.gz
[root@zippy tmp]# cd eventlog-0.2.5
[root@zippy eventlog-0.2.5]# ./configure
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
...
...
...
[root@zippy eventlog-0.2.5]# make
Making all in utils
make[1]: Entering directory `/tmp/eventlog-0.2.5/utils'
sed -e "s,_SCSH_,/usr/bin/scsh," make_class.in >make_class
...
...
...
[root@zippy eventlog-0.2.5]# make install
Making install in utils
make[1]: Entering directory `/tmp/eventlog-0.2.5/utils'
make[2]: Entering directory `/tmp/eventlog-0.2.5/utils'
...
...
...
make[2]: Leaving directory `/tmp/eventlog-0.2.5'
make[1]: Leaving directory `/tmp/eventlog-0.2.5'
[root@zippy eventlog-0.2.5]#
4. Verify that the prerequisite glib package is installed on your system (this repeats step 2 and is harmless if glib is already present).

[root@zippy eventlog-0.2.5]# yum -y install glib
5. Some environmental variables also need to be set prior to the installation of the syslog-ng files.

[root@zippy eventlog-0.2.5]# PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/
[root@zippy eventlog-0.2.5]# export PKG_CONFIG_PATH
6. Using the tar command, extract the files from the syslog-ng archive, then use the configure, make and make install commands to install them (running make clean first ensures a fresh build). In this case we use the --sysconfdir directive with the configure command so that syslog-ng searches for its configuration file in the /etc directory. Once again, pay close attention to the pre-installation tests that the configure command executes.

[root@zippy eventlog-0.2.5]# cd /tmp
[root@zippy tmp]# tar -xzf syslog-ng-2.0.0.tar.gz
[root@zippy tmp]# cd syslog-ng-2.0.0
[root@zippy syslog-ng-2.0.0]# make clean
[root@zippy syslog-ng-2.0.0]# ./configure --sysconfdir=/etc
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
...
...
...
[root@zippy syslog-ng-2.0.0]# make; make install
Making all in src
make[1]: Entering directory `/tmp/ syslog-ng-2.0.0/src'
...
...
...
[root@zippy syslog-ng-2.0.0]#
7. The source tree ships template init.d syslog-ng scripts and syslog-ng.conf files in the contrib/ directory.

[root@zippy syslog-ng-2.0.0]# ls contrib/
fedora-packaging init.d.RedHat-7.3 init.d.SuSE
Makefile.in rhel-packaging syslog-ng.conf.HP-UX
syslog-ng.vim init.d.HP-UX init.d.solaris
Makefile README syslog2ng
init.d.RedHat syslog-ng.conf.RedHat init.d.SunOS
Makefile.am relogger.pl syslog-ng.conf.doc
syslog-ng.conf.SunOS
[root@zippy syslog-ng-2.0.0]#
8. Copy the versions for your operating system to the /etc/init.d, /etc, /etc/logrotate.d and /etc/sysconfig directories. The /etc/syslog-ng/ directory needs to be created beforehand. Red Hat and Fedora installations have their own subdirectories under contrib/.

[root@zippy syslog-ng-2.0.0]# mkdir /etc/syslog-ng/
[root@zippy syslog-ng-2.0.0]# cp contrib/fedora-packaging/syslog-ng.init \
/etc/init.d/syslog-ng
[root@zippy syslog-ng-2.0.0]# cp contrib/fedora-packaging/syslog-ng.conf \
/etc
[root@zippy syslog-ng-2.0.0]# cp contrib/fedora-packaging/syslog-ng.sysconfig \
/etc/sysconfig/syslog-ng
[root@zippy syslog-ng-2.0.0]# cp contrib/fedora-packaging/syslog-ng.logrotate \
/etc/logrotate.d/syslog-ng
Remember that you may want to customize your syslog-ng.conf file.

9. Change the permissions on your new /etc/init.d/syslog-ng file.

[root@zippy syslog-ng-2.0.0]# chmod 755 /etc/init.d/syslog-ng


10. You need to be careful. The init.d script may refer to a syslog-ng binary file that's in an incorrect location. Find its true location and edit the script.

[root@zippy syslog-ng-2.0.0]# updatedb
[root@zippy syslog-ng-2.0.0]# locate syslog-ng | grep bin
/usr/local/sbin/syslog-ng
[root@zippy syslog-ng-2.0.0]# vi /etc/init.d/syslog-ng
...
#exec="/sbin/syslog-ng"
exec="/usr/local/sbin/syslog-ng"
...
:wq
[root@zippy syslog-ng-2.0.0]#
11. Disable the old syslog service, enable syslog-ng with chkconfig, then stop syslog and try starting syslog-ng. If the service command reports syslog-ng as unrecognized, as it does here, verify that the init script from step 8 was copied into place with the correct name and permissions.

[root@zippy syslog-ng-2.0.0]# chkconfig syslog off
[root@zippy syslog-ng-2.0.0]# chkconfig syslog-ng on
[root@zippy syslog-ng-2.0.0]# service syslog stop
Shutting down kernel logger: [ OK ]
Shutting down system logger: [ OK ]
[root@zippy syslog-ng-2.0.0]# service syslog-ng start
syslog-ng: unrecognized service
[root@zippy syslog-ng-2.0.0]#
12. The sample syslog-ng.conf file in Figure 5-1 was configured to have all directories owned by the group logs. This user group needs to be created, and any users that need access to the directories need to be added to it with the usermod command. In this case the user peter is added to the group, and the groups command is used to verify success.

[root@zippy tmp]# groupadd logs
[root@zippy tmp]# usermod -G logs peter
[root@zippy tmp]# groups peter
peter: users logs
13. You can now configure syslog-ng to start on the next reboot with the chkconfig command and then use the service command to start it immediately. Remember to stop the old syslog process beforehand.

[root@zippy tmp]# service syslog stop
Shutting down kernel logger: [ OK ]
Shutting down system logger: [ OK ]
[root@zippy tmp]# chkconfig syslog off
[root@zippy tmp]# chkconfig syslog-ng on
[root@zippy tmp]# service syslog-ng start
Starting system logger: [ OK ]
Starting kernel logger: [ OK ]
[root@zippy tmp]#
14. Your remote hosts should now begin logging to the /var/log/syslog-ng directory. According to our preliminary configuration file, there should be sub-directories categorized by date inside it. Each of these sub-directories will in turn have directories beneath them named after the IP address and/or hostname of the various remote syslog clients, containing files categorized by syslog facility. In this example we see that the 2005.07.09 directory has received messages from three hosts: 192.168.1.1, 192.168.1.99 and the local machine.

[root@zippy tmp]# ls /var/log/syslog-ng/
2005.07.09
[root@zippy tmp]# ll /var/log/syslog-ng/2005.07.09/
drwxr-x--- 2 root logs 4096 Jul 9 17:01 192-168-1-1.my-web-site.org
drwxr-x--- 2 root logs 4096 Jul 9 16:45 192-168-1-99.my-web-site.org
drwxr-x--- 2 root logs 4096 Jul 9 23:24 LOGGER
[root@zippy tmp]# ls /var/log/syslog-ng/2005.07.09/localhost/
cron.log kern.log local7.log syslog.log
[root@zippy tmp]#
Using syslog-ng, your system becomes a much more customizable tool for troubleshooting the devices attached to your network. Each day syslog-ng will automatically create new sub-directories to match the current date, and at the end of each calendar quarter the files will be moved to a special archive directory containing all the data for the previous three months. This archived data can then be periodically deleted as needed. For very large deployments, or for better searching and correlation capabilities, it is possible to send the output of syslog-ng to a SQL-type database. This is beyond the scope of this book, but it is a worthwhile feature to keep in mind.

Configuring syslog-ng Clients

Clients logging to the syslog-ng server don't need to have syslog-ng installed on them; a regular syslog client configuration will suffice.

If you are running syslog-ng on clients, then you’ll need to modify your configuration file. Let’s look at Example 5-1 – Syslog-ng Sample Client Configuration.

Example 5-1 - Syslog-ng Sample Client Configuration

source s_sys {
file ("/proc/kmsg" log_prefix("kernel: "));
unix-stream ("/dev/log");
internal();
};

destination loghost {
udp("loghost.linuxhomenetworking.com");
};

filter notdebug {
level(info...emerg);
};

log {
source(s_sys);
filter(notdebug);
destination(loghost);
};
The s_sys source is present by default in many syslog-ng.conf files; we have just added some parameters to make it work. Here the destination syslog logging server is defined as loghost.linuxhomenetworking.com. We have also added a filter to the log section so that only messages of info level and above (that is, everything except debug) get logged to the remote server. After restarting syslog-ng on your client, your syslog server will start receiving messages.

Simple syslog Security

One of the shortcomings of a syslog server is that it doesn't filter out messages from undesirable sources. It is therefore wise to implement the use of TCP wrappers or a firewall to limit the acceptable sources of messages when your server isn't located on a secure network. This will help to limit the effectiveness of syslog based denial of service attacks aimed at filling up your server's hard disk or taxing other system resources that could eventually cause the server to crash.

Remember that regular syslog servers listen on UDP port 514 and syslog-ng servers rely on port 514 for both UDP and TCP. Please refer to Chapter 14, "Linux Firewalls Using iptables", on Linux firewalls for details on how to configure the Linux iptables firewall application and Appendix I, "Miscellaneous Linux Topics", for further information on configuring TCP wrappers.

Conclusion

In the next chapter we cover the installation of Linux applications, where syslog becomes increasingly important: for troubleshooting Linux-based firewalls, which can be configured to ignore and then log all undesirable packets; the Apache Web server, which logs the application programming errors generated by popular scripting languages such as Perl and PHP; and finally Linux mail, whose configuration files are probably the most frequently edited system documents of all and correspondingly suffer from the most mistakes.

This syslog chapter should make you more confident to learn more about these applications via experimentation because you'll at least know where to look at the first sign of trouble.

Thursday, December 16, 2010

check hp psp logs

The commands to view the iLO Integrated Management Log (IML) are:

# hplog -v

# hpasmcli
hpasmcli> show iml

Solaris Performance tools

Hardware details: prtdiag -v





Check total physical memory:

# prtdiag -v | grep Memory

# prtconf | grep Memory

# sar -r 5 10

Free Memory = freemem * 8 KB (page size = 8 KB)

# vmstat 5 10

Free Memory = free
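As a quick sanity check of the freemem arithmetic, awk can do the pages-to-megabytes conversion; the 46824 figure is the free column from the vmstat sample shown further down:

```shell
# Convert a freemem value in 8 KB pages to megabytes.
# 46824 pages * 8 KB / 1024 = megabytes (integer-truncated by printf %d).
mb=$(echo 46824 | awk '{ printf "%d MB", $1 * 8 / 1024 }')
echo "$mb"        # prints "365 MB"
```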

For swap:

# swap -s

# swap -l







Memory usage:

vmstat 5 - look at the memory/free field and the page/sr field. Ignore the first line of output, as it shows averages since boot.

procs memory page disk faults cpu

r b w swap free re mf pi po fr de sr s0 s1 s6 s3 in sy cs us sy id

0 0 83 4456 456 1 431 266 70 167 0 35 6 6 0 2 523 567 31 14 9 76

0 0 62 3588464 46824 0 196 64 0 0 0 0 5 4 0 0 606 9743 882 86 7 7

0 0 62 3587960 42672 1 552 41 1 1 0 0 2 2 0 0 789 5488 1040 84 7 9

0 1 62 3584704 38848 0 471 3 38 38 0 0 5 5 0 1 1426 5270 968 64 9 27

0 0 62 3586464 38456 0 451 0 0 0 0 0 2 2 0 0 929 6039 1265 70 6 24

Also make sure that cpu/us is at least double cpu/sy
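That rule of thumb is easy to apply with awk over captured vmstat output; here some of the sample lines above are fed in through a here-document, skipping the first (since-boot) line:

```shell
# Check us >= 2*sy for each vmstat sample (us is column 20, sy column 21).
# Data lines are copied from the vmstat output above; the first is skipped.
out=$(awk 'NR > 1 { status = ($20 >= 2 * $21) ? "ok" : "busy in kernel"
                    print "us=" $20 " sy=" $21 " -> " status }' <<'EOF'
0 0 83 4456 456 1 431 266 70 167 0 35 6 6 0 2 523 567 31 14 9 76
0 0 62 3588464 46824 0 196 64 0 0 0 0 5 4 0 0 606 9743 882 86 7 7
0 0 62 3587960 42672 1 552 41 1 1 0 0 2 2 0 0 789 5488 1040 84 7 9
EOF
)
echo "$out"
```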





mpstat 5
(or iostat -c 5 )

CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl

0 221 3 544 227 75 582 61 31 28 7 267 18 12 2 68

2 209 2 446 395 178 328 37 31 32 6 299 11 7 2 80

- look at the wt field: this is wait-for-I/O, which can be network or disk I/O, and should not be more than 30-40 percent.



To see individual disk performance and find slow ('hot') disks :

iostat -d 5

sd0 sd1 sd6 sd37

kps tps serv kps tps serv kps tps serv kps tps serv

123 6 44 123 6 42 0 0 42 66 2 8

33 1 3 37 1 1 0 0 0 3 0 5

- check the serv column for each disk: this is the disk service time in milliseconds.



However, iostat by default shows only the first four disks; if you have more, use the -x and/or -l options:

iostat -xdnl 7 5



(e.g. if you have seven hard disks)

extended device statistics



r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device

0.4 1.6 3.2 70.4 0.0 0.0 0.0 2.7 0 1 c0t0d0

0.2 1.6 1.6 70.4 0.0 0.0 0.0 3.0 0 1 c0t1d0

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c0t6d0

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t8d0

88.8 19.8 1276.8 158.4 0.0 0.6 0.0 5.7 0 51 c1t9d0

0.0 0.4 0.0 100.8 0.0 0.0 0.0 17.4 0 1 c1t10d0

0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c1t11d0
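To pick out the 'hot' disk automatically, filter on the %b (percent busy) column; this sketch runs awk over a few of the sample lines above, and the 40% threshold is just an illustrative cutoff:

```shell
# Print any device busier than 40% from an iostat -x style sample.
# %b is the 10th column, the device name the 11th; the header line
# evaluates to 0 under $10+0 and so never matches.
hot=$(awk 'NF == 11 && $10 + 0 > 40 { print $11 " is " $10 "% busy" }' <<'EOF'
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.4 1.6 3.2 70.4 0.0 0.0 0.0 2.7 0 1 c0t0d0
0.2 1.6 1.6 70.4 0.0 0.0 0.0 3.0 0 1 c0t1d0
88.8 19.8 1276.8 158.4 0.0 0.6 0.0 5.7 0 51 c1t9d0
0.0 0.4 0.0 100.8 0.0 0.0 0.0 17.4 0 1 c1t10d0
EOF
)
echo "$hot"        # prints "c1t9d0 is 51% busy"
```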

Friday, December 10, 2010

Controlling core files (Linux)

Core files get created when a program misbehaves due to a bug, or a violation of the cpu or memory protection mechanisms. The operating system kills the program and creates the core file.



If you don't want core files at all, set "ulimit -c 0" in your startup files. That's the default on many systems; in /etc/profile you may find




ulimit -S -c 0 > /dev/null 2>&1 


If you DO want core files, you need to reset that in your own .bash_profile:



ulimit -c 50000 




would allow core files but limit their size to 50,000 blocks (note that the shell expresses the -c limit in blocks rather than bytes; see help ulimit for the unit).
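A safe way to experiment is inside a subshell, so the parent shell's limit is left untouched:

```shell
# Change the soft core-file limit in a throwaway subshell only.
(
    ulimit -S -c 0                    # disable core dumps here
    echo "soft core limit now: $(ulimit -S -c)"
)
# The parent shell's limit is unchanged when the subshell exits.
```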



You have more control of core files in /proc/sys/kernel/



For example, you can stop the PID being appended to the core filename with:



echo "0" > /proc/sys/kernel/core_uses_pid 
Core files will then just be named "core". People do this so that a user can choose to put a non-writable file named "core" in directories where they don't want core dumps generated. That could be a directory (mkdir core) or a file (touch core; chmod 000 core). I've seen it suggested that a symlink named core would redirect the dump to wherever it pointed, but I found that didn't work.



But perhaps more interesting is that you can do:



mkdir /tmp/corefiles 
chmod 777 /tmp/corefiles
echo "/tmp/corefiles/core" > /proc/sys/kernel/core_pattern


All core files then get written to /tmp/corefiles (don't change core_uses_pid if you do this).



Test this with a simple script:



# script that dumps core 
kill -s SIGSEGV $$


But wait, there's more (if your kernel is new enough). From "man proc":



/proc/sys/kernel/core_pattern 
This file (new in Linux 2.5) provides finer control over the
form of a core filename than the obsolete
/proc/sys/kernel/core_uses_pid file described below. The name
for a core file is controlled by defining a template in
/proc/sys/kernel/core_pattern. The template can contain %
specifiers which are substituted by the following values when
a core file is created:


%% A single % character
%p PID of dumped process
%u real UID of dumped process
%g real GID of dumped process
%s number of signal causing dump
%t time of dump (secs since 0:00h, 1 Jan 1970)
%h hostname (same as the 'nodename'
returned by uname(2))
%e executable filename


A single % at the end of the template is dropped from the core
filename, as is the combination of a % followed by any character
other than those listed above. All other characters in the
template become a literal part of the core filename. The maximum
size of the resulting core filename is 64 bytes. The default
value in this file is "core". For backward compatibility, if
/proc/sys/kernel/core_pattern does not include "%p" and
/proc/sys/kernel/core_uses_pid is non-zero, then .PID will be
appended to the core filename.


If you are running a Linux kernel that doesn't support this, you'll get no core files at all, which is also what happens if the directory in core_pattern doesn't exist or isn't writable by the user dumping core. So that's yet another way to withhold core dumps from certain users: set core_pattern to a directory that they can't write to, and give write permission only to the users who you do want to create core files.
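The kernel performs the %-specifier substitution itself; purely as an illustration, sed can mimic the expansion of a template such as core.%e.%p, where myapp and 4321 are made-up example values:

```shell
# Mock-up of the kernel's %-specifier substitution for a core_pattern.
# "myapp" and "4321" stand in for the executable name and PID.
pattern='core.%e.%p'
exe='myapp'
pid='4321'
name=$(echo "$pattern" | sed -e "s/%e/$exe/" -e "s/%p/$pid/")
echo "$name"        # prints "core.myapp.4321"
```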





Use this to find core files and remove them:

find . | egrep "\/core\.[0-9]+$" | xargs rm -f

This works well as it finds only core files. 
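You can try the cleanup pipeline safely in a scratch directory first; this sketch creates some dummy files and shows that only names of the form core.<digits> are removed:

```shell
# Scratch-directory demonstration of the cleanup pipeline above.
dir=$(mktemp -d)
cd "$dir"
touch core.1234 core.987 core corefile.txt sub_core.55x
mkdir sub && touch sub/core.42

# Same pipeline as above: only "core.<digits>" path endings match.
find . | egrep "\/core\.[0-9]+$" | xargs rm -f

ls
```

The bare file named core and corefile.txt survive, while core.1234, core.987 and sub/core.42 are removed.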

Thursday, December 2, 2010

Add a Persistent Static Route in Redhat Enterprise Linux

Persistent static routes are defined per interface in files such as /etc/sysconfig/network-scripts/route-eth0 and /etc/sysconfig/network-scripts/route-eth1. Each route in a file is a numbered ADDRESSn/NETMASKn/GATEWAYn triplet, with numbering starting at 0 and incrementing for each additional route:

/etc/sysconfig/network-scripts/route-eth0

ADDRESS0=10.10.10.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.1.1

ADDRESS1=20.20.20.2
NETMASK1=255.255.255.0
GATEWAY1=192.168.1.1
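One way to sanity-check the triplets is to expand them into the equivalent legacy route commands; this sketch parses an example file supplied as a here-document, using the same addresses as above:

```shell
# Read ADDRESSn/NETMASKn/GATEWAYn triplets and print the equivalent
# legacy "route add" commands. The here-document mimics a route-eth0 file
# (each triplet listed address first so the GATEWAY line completes it).
routes=$(while read -r line; do
    case "$line" in
        ADDRESS*) addr=${line#*=} ;;
        NETMASK*) mask=${line#*=} ;;
        GATEWAY*) gw=${line#*=}
                  echo "route add -net $addr netmask $mask gw $gw" ;;
    esac
done <<'EOF'
ADDRESS0=10.10.10.0
NETMASK0=255.255.255.0
GATEWAY0=192.168.1.1
ADDRESS1=20.20.20.2
NETMASK1=255.255.255.0
GATEWAY1=192.168.1.1
EOF
)
echo "$routes"
```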

Friday, November 26, 2010

How to disable IPv6 on Red Hat

Edit /etc/sysconfig/network and change

NETWORKING_IPV6=yes to

NETWORKING_IPV6=no

Edit /etc/modprobe.conf and add these lines (if they’re not in it):

alias net-pf-10 off

alias ipv6 off


Stop the ip6tables service by typing:

service ip6tables stop

Disable the ip6tables service by typing:

chkconfig ip6tables off

After these changes, IPv6 will be disabled after the next reboot of your system.
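You can verify the result after the reboot by checking for the IPv6 /proc interface, which exists only while the IPv6 stack is loaded:

```shell
# /proc/net/if_inet6 is present only when the kernel IPv6 stack is active.
if [ -e /proc/net/if_inet6 ]; then
    echo "IPv6 is enabled"
else
    echo "IPv6 is disabled"
fi
```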
