Saturday, December 8, 2007

javascript removeChild crash IE

A web page loads a GWT-specific JavaScript file (the entry point).

This script dynamically inserts another script (A) to invoke a JSON service.
The JSON callback then deletes the dynamically inserted script (A) as housekeeping.

The above "insert A, delete A" cycle is repeated every 5 seconds.

To remove the script, the following code crashes IE:
var x = document.getElementsByTagName("script");
for (var i = 0; i < x.length; i++) {
var y = x[i];
var src = y.getAttribute("src");
if (src != null && src.indexOf("callback") > 0) {
var head = document.getElementsByTagName('head').item(0);
head.removeChild(y);
y.removeNode(); // IE-only API
return;
}
}


However, the following code, which defers the removal with setTimeout, works:

setTimeout(function() {
var x = document.getElementsByTagName("script");
for (var i = 0; i < x.length; i++) {
var y = x[i];
var src = y.getAttribute("src");
if (src != null && src.indexOf("callback") > 0) {
var head = document.getElementsByTagName('head').item(0);
head.removeChild(y);
y.removeNode(); // IE-only API
return;
}
}
}, 0);

Tuesday, November 20, 2007

Howto: Setup a DNS server with bind

Step1: Install bind 9
Step2: vi /etc/bind/named.conf.local
# This is the zone definition. replace example.com with your domain name
zone "example.com" {
type master;
file "/etc/bind/zones/example.com.db";
};

# This is the zone definition for reverse DNS. replace 0.168.192 with your network address in reverse notation - e.g my network address is 192.168.0
zone "0.168.192.in-addr.arpa" {
type master;
file "/etc/bind/zones/rev.0.168.192.in-addr.arpa";
};
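The reverse zone name above is just the network address with its octets reversed, suffixed with .in-addr.arpa. A small sketch of that derivation (the helper name and the example network are illustrative, not part of the BIND config):

```python
def reverse_zone(network: str) -> str:
    """Build the in-addr.arpa zone name for a network prefix, e.g. "192.168.0"."""
    octets = network.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

print(reverse_zone("192.168.0"))  # 0.168.192.in-addr.arpa
```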
Then make sure named.conf includes named.conf.local.
Step3:vi /etc/bind/named.conf.options
forwarders {
# Replace the address below with the address of your provider's DNS server
123.123.123.123;
};
Step4:mkdir /etc/bind/zones
Step5:vi /etc/bind/zones/example.com.db
; replace example.com with your domain name. do not forget the . after the domain name!
; Also, replace ns1 with the name of your DNS server
example.com. IN SOA ns1.example.com. admin.example.com. (
; serial (bump on every change), refresh, retry, expire, minimum
2006081401
28800
3600
604800
38400
)

; Replace the following lines as necessary:
; ns1 = DNS server name
; mta = mail server name
; example.com = domain name
example.com. IN NS ns1.example.com.
example.com. IN MX 10 mta.example.com.

; Replace the IP addresses with the right IP addresses.
www IN A 192.168.0.2
mta IN A 192.168.0.3
ns1 IN A 192.168.0.1
Step6: vi /etc/bind/zones/rev.0.168.192.in-addr.arpa
@ IN SOA ns1.example.com. admin.example.com. (
2006081401
28800
604800
604800
86400
)

@ IN NS ns1.example.com.
1 IN PTR ns1.example.com.
Step7: /etc/init.d/bind9 restart
Step8: change /etc/resolv.conf
search example.com
nameserver 192.168.0.1

Friday, November 2, 2007

centos 5 iptables configure

$yum install system-config-securitylevel
$system-config-securitylevel

Tuesday, October 30, 2007

asterisk 1.4 on centos 5 "inode diet"

/*
* As part of the "inode diet" the private data member of struct inode
* has changed in 2.6.19. However, Fedora Core 6 adopted this change
* a bit earlier (2.6.18). If you use such a kernel, change the
* following test from 2,6,19 to 2,6,18.
*/
#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,18)
#define I_PRIVATE(inode) ((inode)->u.generic_ip)
#else
#define I_PRIVATE(inode) ((inode)->i_private)
#endif

Friday, October 19, 2007

Remote UNIX connection(disconnected) and asterisk

On a CLI running with verbose output, the following message is generated every 5 seconds.

It is due to the safe_asterisk process, itself invoked by the asterisk service, which keeps reconnecting:

-- Remote UNIX connection
-- Remote UNIX connection disconnected
-- Remote UNIX connection
-- Remote UNIX connection disconnected

Wednesday, October 10, 2007

list bios information on linux

$dmidecode

dmidecode is a tool for dumping a computer’s DMI (some say SMBIOS) table contents in a human-readable format.

pci device profile

On Linux,

$lspci -tv

shows a tree map of the PCI devices.

Monday, October 8, 2007

Simple Linux network traffic monitor

vnStat is a network traffic monitor for Linux that keeps a log of daily network traffic for the selected interface(s). vnStat isn't a packet sniffer. The traffic information is analyzed from the /proc filesystem, so vnStat can be used without root permissions.

$vnstat -tr

Friday, August 17, 2007

Resolving Maven Dependency Conflicts

Resolving Dependency Conflicts:

Using Maven, it is inevitable that two or more artifacts will require different versions of a particular dependency.
To resolve conflicts manually, the following steps give very useful information:
(1) build the top-level deployment with:
$mvn -o -X test
This command lists all dependency information.
(2) grep the output of step (1) for dependency lines and remove the [DEBUG] prefix at the beginning of each line.
(3) sort the output of step (2); the version conflicts then become easy to spot.
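Steps (2) and (3) can be scripted; here is a rough sketch of the post-processing, assuming the `mvn -o -X test` output has been captured as text (the exact [DEBUG] line format varies between Maven versions, so treat the filter condition as an assumption):

```python
def dependency_lines(debug_output: str) -> list[str]:
    """Strip the [DEBUG] prefix from dependency lines and sort them,
    so that different versions of the same artifact end up adjacent."""
    prefix = "[DEBUG]"
    lines = []
    for line in debug_output.splitlines():
        if line.startswith(prefix) and ":" in line:
            lines.append(line[len(prefix):].strip())
    return sorted(lines)
```

Because the sorted lines group by group:artifact, two different versions of the same artifact land next to each other and stand out.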

Saturday, August 4, 2007

how to automate postgresql maintainence

Assume that postgresql is installed in /usr/local/pgsql directory.

$crontab -e
22 3 * * * su - postgres -c "/usr/local/pgsql/bin/vacuumdb -a -f" >> /var/log/vacuumdb.log 2>&1
23 3 * * * su - postgres -c "/usr/local/pgsql/bin/pg_dump -Fc -b -f /tmp/db.sql db" > /var/log/dump.log 2>&1

then save and quit (e.g. :wq in vi).

$crontab -l
to list the task

Friday, August 3, 2007

How to Change the Timezone in Linux

1. Logged in as root, check which timezone your machine is currently using by executing `date`. You'll see something like "Mon 17 Jan 2005 12:15:08 PM PST -0.461203 seconds", PST in this case is the current timezone.
2. Change to the directory /usr/share/zoneinfo; here you will find a list of time zone regions. Choose the most appropriate region; if you live in Canada or the US, this is the "America" directory.
3. If you wish, backup the previous timezone configuration by copying it to a different location. Such as `mv /etc/localtime /etc/localtime-old`.
4. Create a symbolic link from the appropriate timezone to /etc/localtime. Example: `ln -s /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime`.
5. If you have the utility rdate, update the current system time by executing `/usr/bin/rdate -s time.nist.gov`.
6. Set the ZONE entry in the file /etc/sysconfig/clock file (e.g. "America/Los_Angeles")
7. Set the hardware clock by executing: `/sbin/hwclock --systohc`

How to resolve permission issues when you move a database between servers that are running SQL Server

MORE INFORMATION
When you move a database from one server that is running SQL Server to another server that is running SQL Server, a mismatch may occur between the security identification numbers (SIDs) of the logins in the master database and the users in the user database. By default, SQL Server 7.0, SQL Server 2000, and SQL Server 2005 provide the sp_change_users_login system stored procedure to map these mismatched users. However, you can only use the sp_change_users_login stored procedure to map standard SQL Server logins, and you must perform this mapping one user at a time. For more information about the sp_change_users_login stored procedure, see the "sp_change_users_login" topic in SQL Server 7.0, SQL Server 2000, and SQL Server 2005 Books Online.

Solution:

Under the specific database, remove the mismatched user. If there are schemas owned by that user, you cannot delete the user from the security menu (Management Studio). Change the schema owner to dbo first, then delete the user, recreate the user, and finally change the schema ownership back.

Wednesday, July 25, 2007

how to insert explicit values into the identity column of a table

Two conditions are needed:

1) set identity insert to ON:

SET IDENTITY_INSERT db1.dbo.table1 ON

2) specify the column list, for example:

INSERT INTO DB1.dbo.TABLE (column1, column2, column3)
SELECT dbo.TABLE1.column1, dbo.TABLE1.column2, dbo.TABLE1.column3
FROM dbo.TABLE1
INNER JOIN dbo.TABLE2 ON dbo.TABLE1.column = dbo.TABLE2.column

And remember to SET IDENTITY_INSERT db1.dbo.table1 OFF when done.

Friday, July 6, 2007

optimize postgresql

Optimizing Postgresql
Ericson Smith
Following Tim Perdue's excellent article on the comparison between MySQL and Postgresql, I decided to take a shot at installing and using this database. For most of our work I use MySQL and will continue to do so, because of its ease of use and unrivaled select query speed, and also because there is no point in trying to mess around with production systems that already work fine.
But some new projects suffered greatly from MySQL's table locking feature when I needed to update data (which I do a lot). Here are my adventures in setting up a Postgresql database server.
Our configuration for a dedicated Postgresql server was:

* Redhat 7.1
* Dual PIII 650Mhz System
* 512MB RAM
* 18Gig SCSI drive for the postgresql data partition

Downloading and Installing
I downloaded and installed the 7.1.2 RPMs from http://postgres.org without any trouble. For a server installation, I only installed postgresql-server and postgresql-7.1.2 (base).
I then started the server up and running by executing:
/etc/init.d/postgresql start
A small sized database was ported from MySQL (three tables totaling about 5000 records). I created sufficient indexes for postgresql's optimizer to use, and modified our C application to use the postgresql C client interface for a small CGI program that would brutally query this table. This small CGI program receives thousands of queries per minute.
Optimizing
One of the first things I noticed after turning on the CGI program, was that although queries were returned almost as fast as from the previous MySQL based system, the load on the server was much higher -- in fact almost 90-percent! Then I started to go down into the nitty-gritty of things. I had optimized MySQL before by greatly increasing cache and buffer sizes and by throwing more ram towards the problem.
The single biggest thing that you have to do before running Postgresql, is to provide enough shared buffer space. Let me repeat: provide enough buffer space! Let's say you have about 512MB of ram on a dedicated database server, then you need to turn over about 75-percent of it to this shared buffer. Postgresql does best when it can load most or -- even better -- all of a table into its shared memory space. In our case, since our database was fairly small, I decided to allocate 128MB of RAM towards the shared buffer space.
The file /var/lib/pgsql/data/postgresql.conf contains settings for the database server. Postgresql uses system shared memory as a buffer. On a Linux system, you can see how much shared memory was allocated by your system by running the command:
cat /proc/sys/kernel/shmmax
And to view shared memory use on the system:
ipcs
The result will be in bytes. By default RedHat 7.1 allocates 32MB of shared memory, hardly enough for postgresql. I increased this limit to 128MB by doing the command:
echo 128000000 > /proc/sys/kernel/shmmax
Be aware that once you reboot the server, this setting will disappear. You need to place this line in your postgresql startup file, or by editing the /etc/sysctl.conf file for a more permanent setting.
Then in our postgresql.conf I set shared_buffers to 15200. Because Postgresql uses 8K segments, I calculated roughly 128MB divided by 8K, leaving a little headroom for overhead. I also set our sort_mem to 32168 (32 megs for a sort memory area). Since connection pooling was in effect, I set max_connections to 64. And fsync was also set to false.

shared_buffers = 15200
sort_mem = 32168
max_connections=64
fsync=false
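The shared_buffers arithmetic is easy to check: PostgreSQL buffers are 8K, so 128MB of shared memory holds at most 15625 of them, and the author's 15200 sits just under that ceiling. A sketch of the calculation (not the author's exact figures):

```python
SHMMAX = 128_000_000  # bytes granted via /proc/sys/kernel/shmmax
PAGE = 8192           # PostgreSQL buffer page size in bytes

max_pages = SHMMAX // PAGE
print(max_pages)      # 15625
# Setting shared_buffers a bit below this ceiling (e.g. 15200)
# leaves room for PostgreSQL's own shared-memory overhead.
```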

You can read the manual to tweak other settings, but I never had the need to do so. Note that if you set shared_buffers to more than what your shared memory limit is, postgresql will refuse to start. This confused us for a while, since no logging was taking place. You can tweak the startup file in /etc/init.d for the postmaster to write its output to a log file. Change the fragment from:

/postmaster start > /dev/null 2>&1

to

/postmaster start > /var/lib/pgsql.log 2>&1

(or wherever you want to store the log.)
Tailing the log file clearly explained what the problem was.
All sorts of sexy debugging info will show up in this file, including SQL syntax errors, the output of EXPLAIN statements, connection problems, authentication attempts, and so forth.
I restarted postgresql and brought our CGI online. Our jaws collectively dropped to the floor as postgresql literally flew as soon as it started to use the buffer. Server load by postgresql dropped to just under 10-percent.
One hitch I found with an early version of the system was that it had to build up and tear down a postgresql connection with each request. This was intolerable, so I started to use the connection pooling features of the C library. Server load dropped another few notches with this option. With PHP you will want to use persistent connections (pg_pconnect instead of pg_connect) to fully take advantage of this effect.
Indexes
I cannot emphasize enough the need for proper indexing in postgresql. One early mistake I made was to index BIGINT columns. The columns were indexed OK, but postgresql refused to make use of them. After two days of tearing out my hair, it came to me that the architecture of the system was 32 bits. Could it be that postgresql refuses to use a 64-bit (BIGINT) index? Changing the type to INTEGER quickly solved that problem. Maybe if I had one of those new-fangled 64-bit Itanium processors...
Conclusion
There are many things that you can do with your SQL statements to also improve query response, but these are adequately covered in the interactive postgresql documentation.

optimize postgresql

optimize postgresql server by three things:

(1)change the max number of file descriptors to 10,000 per process
adding:
* soft nofile 10240
* hard nofile 20000

into /etc/security/limits.conf file
(2)change linux kernel shared memory to 400M.
adding: kernel.shmmax=400000000 into file /etc/sysctl.conf file and reboot the server

(3)change postgresql.conf file to modify the shared buffer space to 300M.

max_connections still stays at 100. It should be enough.
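A quick sanity check of those numbers: 300M of shared buffers must fit under the 400M shmmax (otherwise PostgreSQL refuses to start, as the earlier post noted), and it translates to 38400 8K pages. A sketch with this post's figures:

```python
MB = 1024 * 1024
shmmax = 400_000_000             # kernel.shmmax from /etc/sysctl.conf
shared_buffers_bytes = 300 * MB  # 300M of shared buffer space

pages = shared_buffers_bytes // 8192
print(pages)                     # 38400
assert shared_buffers_bytes < shmmax  # must fit under the kernel limit
```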

Friday, June 29, 2007

sync openvz vps zone with hardware node

I am using Singapore time.

Login into vps,

and run

#ln -sf /usr/share/zoneinfo/Asia/Singapore /etc/localtime

Thursday, June 28, 2007

java outofmemory in linux

Debian is a stable easy to maintain platform for Java Web Applications. However if you start using some caching frameworks, you may need to do some JVM tuning and here's the information you need to know.

Read this up to the point where you understand that a 32-bit process on Linux only has 2GB of addressable space. In fact, back in 1999, Linus decreed that 32-bit Linux would never, ever support more than 2GB of memory. "This is not negotiable."

Stacks are memory chunks used for saving variables and passing parameters when calling functions. It's a low level implementation detail, but the important thing to know is that each thread in each process must have its own stack. Big stacks add up quickly, as in the case of tomcat and most web servers, where many threads are used to serve requests. Stacks for threads all come out of the same address space, so if you used memory only for thread stacks, you could have at most 1024 threads with 2MB stacks. In fact, in Debian Sarge there is no way to reduce the amount of memory allocated for the stack of a thread. [1] [2] [3]. Understand that this is not java specific; the same limitation applies to C programs.

Now that we have some fundamental stuff down, it's easy from here. Say you have 2G of memory and want to use as much memory as possible for your cache for performance reasons. The objects in the cache will be stored in the heap memory. I think you should calculate how much memory to use with the following formula.

HeapMemory = ProcessAdressSpace - ThreadMemory - PermGen - JVM

If you're running 32-bit, you can only see 2G max.

ProcessAdressSpace = 2G

If your web app is like mine, you'll need 100 threads. Have some libraries that may create threads? Tack on another 100 to be safe. 200 threads x 2MB/thread = 400M

ThreadMemory = 400M

PermGen is where your defined classes go. Since you use a lot of frameworks and have lots of classes, you should set this to 128M

PermGen = 128M

JVM is relatively small, but you need to give it room so that it can work quickly.

JVM = 256M

With those parameters, HeapMemory should not exceed 1264M! Anything more and you're going to slow down your application more than the cache is speeding it up, or you'll introduce a nasty OutOfMemoryException that will drive you crazy. Here are the parameters you want to use.

java -Xmx1264M -XX:MaxPermSize=128m ...

If you upgrade to etch, and set the stack size to 1MB (not recommending without extensive testing), then you can reclaim another 200MB for the heap.
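The arithmetic above is easy to redo with your own numbers; a sketch using the figures from this post (all values in MB):

```python
process_address_space = 2048  # 32-bit Linux: ~2G addressable per process
thread_memory = 200 * 2       # 200 threads x 2MB default stack
perm_gen = 128                # -XX:MaxPermSize
jvm_overhead = 256            # room for the JVM itself

heap = process_address_space - thread_memory - perm_gen - jvm_overhead
print(heap)                   # 1264 -> java -Xmx1264M
```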

Saturday, June 23, 2007

safe_asterisk

#!/bin/bash

ulimit -c unlimited

run_asterisk()
{
while :; do
cd /tmp
/usr/sbin/asterisk -c >& /dev/null < /dev/null
echo "Automatically restarting Asterisk."
sleep 4
done
}

run_asterisk &

Monday, June 18, 2007

Tuning file descriptor limits on Linux

Tuning file descriptor limits on Linux

Linux limits the number of file descriptors that any one process may open; the default limits are 1024 per process. These limits can prevent optimum performance of both benchmarking clients (such as httperf and apachebench) and of the web servers themselves (Apache is not affected, since it uses a process per connection, but single process web servers such as Zeus use a file descriptor per connection, and so can easily fall foul of the default limit).

The open file limit is one of the limits that can be tuned with the ulimit command. The command ulimit -aS displays the current limit, and ulimit -aH displays the hard limit (above which the limit cannot be increased without tuning kernel parameters in /proc).

The following is an example of the output of ulimit -aS. You can see that the current shell (and its children) is restricted to 1024 open file descriptors.

core file size (blocks) unlimited
data seg size (kbytes) unlimited
file size (blocks) unlimited
max locked memory (kbytes) unlimited
max memory size (kbytes) unlimited
open files 1024
pipe size (512 bytes) 8
stack size (kbytes) unlimited
cpu time (seconds) unlimited
max user processes 4094
virtual memory (kbytes) unlimited

Increasing the file descriptor limit

The file descriptor limit can be increased using the following procedure:

1. Edit /etc/security/limits.conf and add the lines:

* soft nofile 1024
* hard nofile 65535

2. Edit /etc/pam.d/login, adding the line:

session required /lib/security/pam_limits.so

3. The system file descriptor limit is set in /proc/sys/fs/file-max. The following command will increase the limit to 65535:

echo 65535 > /proc/sys/fs/file-max

4. You should then be able to increase the file descriptor limits using:

ulimit -n unlimited

The above command will set the limits to the hard limit specified in /etc/security/limits.conf.


Or, increase the limit from 64 (the default on some systems) to 2048 by issuing the command:

ulimit -n 2048

Also, some applications like the SCO JDK require that the virtual memory resource limit be set to unlimited.

ulimit -v unlimited


Note that you may need to log out and back in again before the changes take effect.
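From inside a process you can inspect the limits that ulimit and limits.conf produced; a small sketch using Python's standard resource module (Unix only):

```python
import resource

# RLIMIT_NOFILE is the per-process open file descriptor limit:
# the soft limit is what currently applies, the hard limit is its ceiling.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"open files: soft={soft} hard={hard}")

# An unprivileged process may raise its own soft limit up to the hard limit:
resource.setrlimit(resource.RLIMIT_NOFILE, (min(soft, hard), hard))
```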

Thursday, June 7, 2007

Tuning PostgreSQL for performance

Tuning PostgreSQL for performance

Shridhar Daithankar, Josh Berkus

July 3, 2003 Copyright 2003 Shridhar Daithankar and Josh Berkus.
Authorized for re-distribution only under the PostgreSQL license (see www.postgresql.org/license).

Table of Contents

1 Introduction
2 Some basic parameters
2.1 Shared buffers
2.2 Sort memory
2.3 Effective Cache Size
2.4 Fsync and the WAL files
3 Some less known parameters
3.1 random_page_cost
3.2 vacuum_mem
3.3 max_fsm_pages
3.4 max_fsm_relations
3.5 wal_buffers
4 Other tips
4.1 Check your file system
4.2 Try the Auto Vacuum daemon
4.3 Try FreeBSD
5 The CONF Setting Guide

1 Introduction
This is a quick start guide for tuning PostgreSQL's settings for performance. This assumes minimal familiarity with PostgreSQL administration. In particular, one should know,

* How to start and stop the postmaster service
* How to tune OS parameters
* How to test the changes

It also assumes that you have gone through the PostgreSQL administration manual before starting, and to have set up your PostgreSQL server with at least the default configuration.

There are two important things for any performance optimization:

* Decide what level of performance you want

If you don't know your expected level of performance, you will end up chasing a carrot that is always a couple of meters ahead of you. Performance tuning measures give diminishing returns after a certain threshold; if you don't set this threshold beforehand, you will end up spending a lot of time for minuscule gains.

* Know your load

This document focuses entirely on tuning postgresql.conf best for your existing setup. This is not the end of performance tuning. After using this document to extract the maximum reasonable performance from your hardware, you should start optimizing your application for efficient data access, which is beyond the scope of this article.

Please also note that the tuning advice described here consists of hints. You should not implement it all blindly. Tune one parameter at a time, test its impact, and decide whether or not you need more tuning. Testing and benchmarking are an integral part of database tuning.

Tuning the software settings explored in this article is only about one-third of database performance tuning, but it's a good start since you can experiment with some basic setting changes in an afternoon, whereas some other aspects of tuning can be very time-consuming. The other two-thirds of database application tuning are:

* Hardware Selection and Setup

Databases are very bound to your system's I/O (disk) access and memory usage. As such, selection and configuration of disks, RAID arrays, RAM, operating system, and competition for these resources will have a profound effect on how fast your database is. We hope to have a later article covering this topic.

* Efficient Application Design

Your application also needs to be designed to access data efficiently, though careful query writing, planned and tested indexing, good connection management, and avoiding performance pitfalls particular to your version of PostgreSQL. Expect another guide someday helping with this, but really it takes several large books and years of experience to get it right ... or just a lot of time on the mailing lists.

2 Some basic parameters
2.1 Shared buffers
Shared buffers defines a block of memory that PostgreSQL will use to hold requests that are awaiting attention from the kernel buffer and CPU. The default value is quite low for any real world workload and needs to be beefed up. However, unlike databases like Oracle, more is not always better. There is a threshold above which increasing this value can hurt performance.

This is the area of memory PostgreSQL actually uses to perform work. It should be sufficient to handle the load on the database server; otherwise PostgreSQL will start pushing data out to file, which hurts overall performance. Hence, this is the most important setting to tune.

This value should be set based on the dataset size which the database server is supposed to handle at peak loads and on your available RAM (keep in mind that RAM used by other applications on the server is not available). We recommend following rule of thumb for this parameter:

* Start at 4MB (512) for a workstation
* Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096)
* Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768)

PLEASE NOTE. PostgreSQL counts a lot on the OS to cache data files and hence does not bother with duplicating its file caching effort. The shared buffers parameter assumes that OS is going to cache a lot of files and hence it is generally very low compared with system RAM. Even for a dataset in excess of 20GB, a setting of 128MB may be too much, if you have only 1GB RAM and an aggressive-at-caching OS like Linux.

There is one way to decide what is best for you. Set a high value of this parameter and run the database for typical usage. Watch usage of shared memory using ipcs or similar tools. A recommended figure would be between 1.2 to 2 times peak shared memory usage.

2.2 Sort memory
This parameter sets maximum limit on memory that a database connection can use to perform sorts. If your queries have order-by or group-by clauses that require sorting large data set, increasing this parameter would help. But beware: this parameter is per sort, per connection. Think twice before setting this parameter too high on any database with many users. A recommended approach is to set this parameter per connection as and when required; that is, low for most simple queries and higher for large, complex queries and data dumps.

2.3 Effective Cache Size
This parameter allows PostgreSQL to make the best possible use of the RAM available on your server. It tells PostgreSQL the size of the OS data cache, so that PostgreSQL can draw up different execution plans based on that figure.

Say there is 1.5GB RAM in your machine, shared buffers are set to 32MB and effective cache size is set to 800MB. So if a query needs 700MB of data set, PostgreSQL would estimate that all the data required should be available in memory and would opt for more aggressive plan in terms of optimization, involving heavier index usage and merge joins. But if effective cache is set to only 200MB, the query planner is liable to opt for the more I/O efficient sequential scan.

While setting this parameter size, leave room for other applications running on the server machine. The objective is to set this value at the highest amount of RAM which will be available to PostgreSQL all the time.
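The planner's reasoning in the example above can be caricatured in a few lines; this is a toy sketch, not PostgreSQL's actual cost model, with sizes in MB taken from the example:

```python
def likely_plan(query_data_mb: int, effective_cache_mb: int) -> str:
    """Toy version of the choice described above: if the data a query touches
    fits in the OS cache, an index-heavy plan is cheap; otherwise the
    planner leans toward an I/O-efficient sequential scan."""
    return "index scan" if query_data_mb <= effective_cache_mb else "seq scan"

print(likely_plan(700, 800))  # index scan
print(likely_plan(700, 200))  # seq scan
```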

2.4 Fsync and the WAL files
This parameter sets whether or not data is written to disk as soon as it is committed, which is done through Write Ahead Logging (WAL). If you trust your hardware, your power company, and your battery power supply enough, set this to No for an immediate boost to data write speed. But be very aware that any unexpected database shutdown will force you to restore the database from your last backup.

If that's not an option for you, you can still have the protection of WAL and better performance. Simply move your WAL files, using either a mount or a symlink to the pg_xlog directory, to a separate disk or array from your main database files. In high-write-activity databases, WAL should have its own disk or array to ensure continuous high-speed access. Very large RAID arrays and SAN/NAS devices frequently handle this for you through their internal management systems.
3 Some less known parameters
3.1 random_page_cost
This parameter sets the cost to fetch a random tuple from the database, which influences the planner's choice of index vs. table scan. It is set to a high value by default based on the expectation of slow disk access. If you have reasonably fast disks like SCSI or RAID, you can lower the cost to 2. You need to experiment to find what works best for your setup by running a variety of queries and comparing execution times.
3.2 Vacuum_mem
This parameter sets the memory allocated to Vacuum. Normally, vacuum is a disk intensive process, but raising this parameter will speed it up by allowing PostgreSQL to copy larger blocks into memory. Just don't set it so high it takes significant memory away from normal database operation. Things between 16-32MB should be good enough for most setups.
3.3 max_fsm_pages
PostgreSQL records free space in each of its data pages. This information is useful for vacuum to find out how many and which pages to look for when it frees up the space.

If you have a database that does lots of updates and deletes, that is going to generate dead tuples, due to PostgreSQL's MVCC system. The space occupied by dead tuples can be freed with vacuum, unless there is more wasted space than is covered by the Free Space Map, in which case the much less convenient "vacuum full" is required. By expanding the FSM to cover all of those dead tuples, you might never again need to run vacuum full except on holidays.

The best way to set max_fsm_pages is interactively: first, figure out the regular vacuum frequency of your database based on write activity; next, run the database under normal production load and run "vacuum verbose analyze" instead of vacuum, saving the output to a file; finally, calculate the maximum total number of pages reclaimed between vacuums based on the output, and use that.

Remember, this is a database cluster wide setting. So bump it up enough to cover all databases in your database cluster. Also, each FSM page uses 6 bytes of RAM for administrative overhead, so increasing FSM substantially on systems low on RAM may be counter-productive.
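The 6-bytes-per-page overhead is easy to estimate before raising the setting; a sketch (the helper name is illustrative):

```python
def fsm_overhead_bytes(max_fsm_pages: int) -> int:
    """Each page tracked in the free space map costs about 6 bytes of RAM."""
    return max_fsm_pages * 6

print(fsm_overhead_bytes(1_000_000))  # 6000000 -- about 6MB for a million pages
```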
3.4 max_fsm_relations
This setting dictates how many relations (tables) will be tracked in the free space map. Again, this is a database cluster-wide setting, so set it accordingly. In version 7.3.3 and later, this parameter should be set correctly by default. In older versions, bump it up to 300-1000.
3.5 wal_buffers
This setting decides the number of buffers WAL (Write Ahead Log) can have. If your database has many write transactions, setting this value a bit higher than the default could result in better use of disk. Experiment and decide. A good start would be around 32-64, corresponding to 256-512K of memory.
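The 32-64 buffer suggestion maps to memory at 8K per WAL buffer; a sketch of that conversion:

```python
WAL_BLOCK_KB = 8  # each WAL buffer is one 8K block

def wal_buffers_kb(n_buffers: int) -> int:
    return n_buffers * WAL_BLOCK_KB

print(wal_buffers_kb(32), wal_buffers_kb(64))  # 256 512
```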
4 Other tips
4.1 Check your file system
On an OS like Linux, which offers multiple file systems, one should be careful about choosing the right one from a performance point of view. There is no agreement among PostgreSQL users about which one is best.

Contrary to popular belief, today's journaling file systems are not necessarily slower than non-journaling ones. Ext2 can be faster on some setups, but the recovery issues generally make its use prohibitive. Different people have reported widely different experiences with the speed of Ext3, ReiserFS, and XFS; quite possibly this kind of benchmark depends on a combination of file system, disk/array configuration, OS version, and database table size and distribution. As such, you may be better off sticking with the file system best supported by your distribution, such as ReiserFS for SuSE Linux or Ext3 for Red Hat Linux, not to forget XFS, known for its large file support. Of course, if you have time to run comprehensive benchmarks, we would be interested in seeing the results!

As an easy performance boost with no downside, make sure the file system on which your database is kept is mounted "noatime", which turns off the access time bookkeeping.
4.2 Try the Auto Vacuum daemon
There is a little known module in the PostgreSQL contrib directory called pgavd. It works in conjunction with the statistics collector. It periodically connects to a database and checks if it has done enough operations since the last check; if so, it vacuums the database.

Essentially, it vacuums the database when needed, doing away with fiddling with cron settings for vacuum frequency. It should result in better database performance by eliminating overdue vacuum issues.
4.3 Try FreeBSD
Large updates, deletes, and vacuum in PostgreSQL are very disk intensive processes. In particular, since vacuum gobbles up IO bandwidth, the rest of the database activities could be affected adversely when vacuuming very large tables.

OSes from the BSD family, such as FreeBSD, dynamically alter the IO priority of a process. So if you lower the priority of a vacuum process, it should not chew as much bandwidth, and the database can better perform normally. Of course, this means that vacuum could take longer, which would be problematic for a "vacuum full".

If you are not done with your choice of OS for your server platform, consider BSD for this reason.

5 The CONF Setting Guide
Available here is an Annotated Guide to the PostgreSQL configuration file settings, in both OpenOffice.org and PDF format. This guide expands on the official documentation and may eventually be incorporated into it.

* The first column of the chart is the GUC setting in the postgresql.conf file.
* The second is the maximum range of the variable; note that the maximum range is often much larger than the practical range. For example, random_page_cost will accept any number between 0 and several billion, but all practical numbers are between 1 and 5.
* The third column contains an enumeration of RAM or disk space used by each unit of the parameter.
* The fourth column indicates whether or not the variable may be SET from the psql terminal during an interactive session. Most settings marked as "no" may only be changed by restarting PostgreSQL.
* The fifth column quotes the official documentation available from the PostgreSQL web site.
* The last column is our notes on the setting, how to set it, resources it uses, etc. You'll notice some blank spaces, and should be warned as well that there is still strong disagreement on the value of many settings.

Users of PostgreSQL 7.3 and earlier will notice that the order of the parameters in this guide does not match the order of the parameters in your postgresql.conf file. This is because this document was generated as part of an effort to re-organize the conf parameters and documentation; starting with 7.4, this document, the official documentation, and the postgresql.conf file are all in the same logical order.

As noted in the worksheet, it covers PostgreSQL versions 7.3 and 7.4. If you are using an earlier version, you will not have access to all of these settings, and defaults and effects of some settings will be different.

Tuesday, June 5, 2007

iax2 one-way audio

After a while of operation, IAX starts behaving incorrectly: no audio or one-way audio, and calls go unanswered.

This problem happened on Asterisk 2.1.14.

The solution is to configure:
jitterbuffer=no

Thursday, May 31, 2007

GWT-app does not first show when loading (IE 6)

Modules can contain references to external JavaScript and CSS files, causing them to be automatically loaded when the module itself is loaded.

But injecting the CSS file this way makes IE not render the GWT widgets until you click on the page manually.

To work around this, do not use CSS injection; instead, embed the CSS file in the HTML template the normal way.

Monday, May 28, 2007

jboss ejb3 and eager loading

When using the annotation @OneToMany(mappedBy="order", cascade=CascadeType.ALL, fetch=FetchType.EAGER), the EAR deployment breaks.

To work around this problem, remove the fetch=FetchType.EAGER,

or

use a Set, or a List with an explicit @IndexColumn, rather than a bag-semantics collection like Collection.

@IndexColumn is a Hibernate annotation.

Friday, May 18, 2007

Solaris x86

Hardware:

To set up the server testing environment, a Sun Ultra 20 box is used. This box has two hard disks: (1) /dev/dsk/c1d0 and (2) /dev/dsk/c2d0.

During installation of Solaris 10, the first hard disk is used for the boot, swap, and root partitions. Basically, this is like a normal Linux setup.

The second hard disk, /dev/dsk/c2d0, is left for the "zpool", although zpool can also use the extra space on the first disk.

After OS installation, the system status is: the Solaris OS is installed on the first hard disk (/dev/dsk/c1d0), and the second hard disk is completely unused.

--------------------------After the installation of OS-------------------------
(1)create resource pool by command zpool
#zpool create spool c2d0
(2)now the pool named "spool" is created. Which can be listed by:
#zpool list
(3)Creating a ZFS File System by allocating space from the pool "spool"
#zfs create spool/z1fs
(4)now, new ZFS file system is created, which can be listed by:
#zfs list
(5)Now, it is time to create the parent directory of all zones. In the following, the /zones directory is created.
#mkdir /zones
(6)To create zone "z1", create directory /zones/z1:
#mkdir /zones/z1
(7)and mount the newly created ZFS filesystem spool/z1fs on /zones/z1:
#zfs set mountpoint=/zones/z1 spool/z1fs
(8)and set the quota for ZFS spool/z1fs:
#zfs set quota=10G spool/z1fs
(9) It is time to create zone z1 now:
#zonecfg -z z1
z1: No such zone configured
Use 'create' to begin configuring a new zone
zonecfg:z1> create
zonecfg:z1> set zonepath=/zones/z1
zonecfg:z1> verify
zonecfg:z1> commit
zonecfg:z1> exit
(10)Install the zone by using the zoneadm
#chmod 700 /zones/z1
#zoneadm -z z1 install
(11)Boot the zone to complete the installation, using the zoneadm command.
# zoneadm -z z1 boot
(12)Use the zlogin command to connect to the zone console and answer the initialization questions:
#zlogin -C z1
(13)To shut down the zone:
#zlogin z1 init 5

----------------------------------------------------

Configure network for the zone

Context: the Sun Ultra 20 box has only one physical network interface. To make the created zone network-accessible, one logical interface will be assigned to each zone.

(1)list the physical network interfaces on the Sun Ultra 20 box:
#ifconfig -a
The result shows that the physical network interface is nge0
(2)create logical interface and assign it to zone z1 and then up the logical interface:
#ifconfig nge0:1 plumb 192.168.1.89 netmask 255.255.255.0 zone z1 up
(3)login into z1:
#zlogin z1
(4)modify /etc/ssh/sshd_config so that "root" can be used for remote login
(5)restart sshd:
#svcadm restart ssh

------------------------------------------------------------------

DNS clients configuration:

After the zone is running, you may encounter this problem: when you ping another computer, name resolution does not work. This is because the DNS client configuration is not correct.

All DNS clients require the presence of the /etc/nsswitch.conf and /etc/resolv.conf files. Note that the DNS server must also be configured as a DNS client if it intends to use its own DNS services.

The /etc/nsswitch.conf file specifies the resolver library routines to be used for resolving host names and addresses. Modify the /etc/nsswitch.conf file by editing the hosts entry and adding the dns keyword. To ensure proper network interface configuration during the boot process, make sure that the files keyword is listed first. The following example shows a hosts entry configured for DNS:

hosts: files dns

The /etc/resolv.conf file specifies the name servers that the client must use, the client's domain name, and the search path to use for queries.

; resolv.conf file for DNS clients of the office1.abc.net domain
domain office1.abc.net
nameserver 192.168.1.119
search office1.abc.net

Observe that the search keyword specifies domain names to append to queries that were not specified in the FQDN format. The first domain listed following the search keyword designates the client's domain. If both "domain" and "search" keywords are present, then the last one in the file is used and the other one(s) are ignored.

The nameserver keyword specifies the IP addresses of the DNS servers to query. Do not specify host names. You can use up to three nameserver keywords to increase your chances of finding a responsive server. In general, list the name servers nearer to the local network first. The client attempts to use the loopback address if there is no nameserver keyword or if the /etc/resolv.conf file does not exist.

------------------------------------------------------------------------

Configure DHCP client

After the Sun Ultra 20 box crashed, a Dell box with an onboard network card is used to install Solaris.

Problem 1: Solaris does not support this onboard network card.
Solution: a standalone PCI network card is used.

Problem 2: Solaris cannot detect this PCI card; what is the interface name of this card?
Solution: Google suggests trying names like le0, iprb0, elxl0, and rtls0.

Problem 3: how to configure the DHCP client?
Solution: to configure a network interface for DHCP, create two empty files under the /etc directory:
(a)hostname.INTERFACENAME , (b)dhcp.INTERFACENAME

After trying the Google-suggested interface names, it turns out that elxl0 is correct for this card.
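The two marker files can be created as in the sketch below. A scratch directory stands in for /etc so the sketch is safe to run as-is; on the real box the files go directly in /etc:

```shell
# Create the two empty files that tell Solaris to run DHCP on an interface.
ETC=$(mktemp -d)   # stand-in for /etc in this demo
IF=elxl0           # the interface name found by trial and error above
touch "$ETC/hostname.$IF" "$ETC/dhcp.$IF"
ls "$ETC"
```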

--------------------------------------------------------------------------

Configure static IP addresses for the solaris box:

Network Interface Cards are what allow your system to talk to the network. When they don't work, neither do you. I will cover how to configure, troubleshoot, and modify your interfaces. I will not be covering routing issues, that will follow in the next article. My goal here is to get your interface up and properly running.

The first place to start is installing and testing the hardware. Once you have installed the hardware, SPARC systems can be tested at the EPROM level to verify the network interface cards. Use the manual that accompanies the interface card on how to test that specific card. Solaris x86 is a little different, as there is no true EPROM, and the drivers are different. However, Solaris x86 2.6 is Plug and Play compatible, and I have had fairly good luck adding network interface cards.

Once you have confirmed at the hardware and driver level that everything works, the fun can begin. The place to start is the ifconfig command. This powerful command allows you to configure and modify your interfaces in real time. However, any modifications made with ifconfig are not permanent: when the system reboots, it will default to its previous configuration. I will first show you how to make all modifications with the ifconfig command. The second half of this article covers making these modifications permanent by modifying the proper configuration files.
ifconfig

ifconfig -a

will show you which interfaces are currently installed and active. Remember, just because you added the physical network interface card does NOT mean it is active. If you do an ifconfig before you have configured the device, the interface will not show up. Once configured however, the typical output of the ifconfig -a command would look like this:

lo0: flags=849 mtu 8232
inet 127.0.0.1 netmask ff000000
elxl0: flags=863 mtu 1500
inet 192.168.1.132 netmask ffffff00 broadcast 192.168.1.255
ether 8:0:20:9c:6b:2d

Here we see two interfaces, lo0 and elxl0. lo0 is the standard loopback interface found on all systems. elxl0 is a 10/100 Mbps interface. All hme interfaces are 10/100 Mbps, all le interfaces are 10 Mbps, all qe interfaces are quad 10 Mbps, and qfe interfaces are quad 10/100 Mbps. There are three lines of information about the interface. The first line is about the TCP/IP stack. For the interface elxl0, we see the system is up, running both broadcast and multicast, with an mtu (maximum transfer unit) of 1500 bytes, standard for an Ethernet LAN. Notrailers is a flag no longer used, but kept for backwards-compatibility reasons.

The second line is about the IP addressing. Here we see the IP address, the netmask in hexadecimal format, and the broadcast address. The third line is the MAC address. Unlike most interfaces, Sun Microsystems' interfaces derive the MAC address from the NVRAM, not the interface itself. Thus, all the interfaces on a single SPARC system will have the same MAC address. This does not cause a problem in routing, since the interfaces are normally on different networks. Note, you must be root to see the MAC address with the ifconfig command; any other user will only see the first two lines of information.

The first step in bringing up an interface is "plumbing" the interface. By plumbing, we are implementing the TCP/IP stack. We will use the above interface, elxl0, as an example. Let's say we had just physically added this network interface card and rebooted; now what? First, we plumb the device with the plumb command.

ifconfig elxl0 plumb

This sets up the streams needed for TCP/IP to use the device. However, the stack has not been configured as you can see below.

elxl0: flags=842 mtu 1500
inet 0.0.0.0 netmask 0
ether 8:0:20:9c:6b:2d

The next step is to configure the TCP/IP stack. We configure the stack by adding the IP address and netmask, and then telling the device it is up. All this can be done in one command, as seen below.

ifconfig elxl0 192.168.1.132 netmask 255.255.255.0 up

This single command configures the entire device. Notice the up command, which initializes the interface. The interface can be in one of two states, up or down. When an interface is down, the system does not attempt to transmit messages through that interface. A down interface will still show with the ifconfig command, however it will not have the word "up" on the first line.
Virtual interfaces

Before moving on to the configuration files, I would first like to cover virtual interfaces. A virtual interface is one or more logical interfaces assigned to an already existing interface. Solaris can have up to 255 virtual interfaces assigned to a single interface.

Once again, let's take the interface elxl0 as an example. We have already covered how to configure this device. However, let's say the device is on a VLAN (virtual LAN) with several networks sharing the same wire. We can configure the device elxl0 to answer to another IP address, say 172.20.15.4. To do so, the command is the same as used for elxl0, except the virtual interface is called elxl0:*, where * is the number you assign to the virtual interface. For example, virtual interface one would be elxl0:1. The command to configure it looks as follows.

ifconfig elxl0:1 172.20.15.4 netmask 255.255.0.0 up

Once you have configured the virtual interface, you can compare elxl0 and elxl0:1 with the ifconfig command.

elxl0: flags=843 mtu 1500
inet 192.168.1.132 netmask ffffff00 broadcast 192.168.1.255
ether 8:0:20:9c:6b:2d
elxl0:1: flags=842 mtu 1500
inet 172.20.15.4 netmask ffff0000 broadcast 172.20.255.255

Here you see the two devices, both of which are on the same physical device. Notice how the virtual interface elxl0:1 has no MAC address, as this is the same device as elxl0. We can repeat this process all the way up to elxl0:255. The operating system and most applications will treat these virtual devices as totally independent devices.

Note, Matthew A. Domurat has identified a "bug" with Solaris 2.6. When working with virtual interfaces, Solaris 2.6 will randomly select one of the interfaces as its source address for every packet sent. These are the patches to fix this:

* 105786-05: SunOS 5.6: /kernel/drv/ip patch
* 105787-04: SunOS 5.6_x86: /kernel/drv/ip patch

Configuration files

Now you know how to configure your network interface cards. Unfortunately, any modifications, additions, or deletions you make with ifconfig are only temporary, you will lose these configurations when you reboot. I will now discuss what files you have to configure to make these changes permanent.

The place to start is the file /etc/hostname.*, where * is the name of the interface. In the case of elxl0, the file name is /etc/hostname.elxl0. The virtual interface elxl0:1 would have the file name /etc/hostname.elxl0:1. This file has a single entry: the host name assigned to the interface. This name is resolved to an IP address via the /etc/hosts file.

The file /etc/hostname.* is critical; this is what causes the device to be plumbed. During the boot process, the /etc/rcS.d/S30network.sh file reads all the /etc/hostname.* files and plumbs the devices. Once plumbed, the devices are configured by reading the /etc/hosts and /etc/netmasks files. By reading these two files, the device is configured with the proper IP and netmask, and brought to an up state. Let's take the device elxl0 as an example. During the boot process, /etc/rcS.d/S30network.sh looks for any /etc/hostname.* files. It finds /etc/hostname.elxl0, which contains the following entry.

homer

/etc/rcS.d/S30network.sh looks in /etc/hosts and resolves the name homer with an IP address of 192.168.1.132. The device elxl0 is now assigned this IP address. The script then looks at /etc/netmasks to find the netmask for that IP address. With this information, the startup script brings up interface elxl0 with an IP address of 192.168.1.132 and a netmask of 255.255.255.0. It may seem redundant having the script review the netmask of a class C address. However, do not forget that, starting with 2.6, Solaris supports both classless routing and VLSM (Variable Length Subnet Masks), both of which I will discuss in my next article.

As you have seen in this example, there are three files that must be modified for every interface. The first is /etc/hostname.*, this is the file you create to designate the interface's name. The second file is /etc/hosts, here you resolve the IP to the interface name. Last is /etc/netmasks, this is where you define the netmask of the IP address.
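The three entries for the elxl0 example above can be sketched as follows; a scratch directory stands in for /etc so the sketch is safe to run:

```shell
# /etc/hostname.elxl0 names the interface; /etc/hosts maps that name to an
# IP address; /etc/netmasks maps the network to its netmask.
ETC=$(mktemp -d)                                  # stand-in for /etc
echo "homer"                       > "$ETC/hostname.elxl0"
echo "192.168.1.132   homer"       > "$ETC/hosts"
echo "192.168.1.0  255.255.255.0"  > "$ETC/netmasks"
cat "$ETC/hostname.elxl0"
```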

--------------------------------------------------------------
Install Solaris OS and partition on Dell Box

The Dell box has one IDE hard disk with about 120G capacity. Solaris identifies this disk as c0d0.

During installation, fdisk-partition it as:
c0d0s0, mounted on /
c0d0s1 ----swap
c0d0s7 ----/export/home
c0d0s3,4,5,6: all unused, but with 20G of space; they will be used as the resources of the resource pool created by zpool.


----------------------------------------------------

create harddisk resource pool

(1)#zpool create spool c0d0s3
(2)#zpool add spool c0d0s4
(3)#zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
spool 39G 196K 39.0G 0% ONLINE

-----------------------------------------------------------

set static IP for global zone:

refer to the doc mentioned in the comments above.

To set static IP for global zone:

(1)create the /etc/hosts file so that:
#less hosts
#
# Internet host table
#
192.168.1.23 benchmark1
(2)create /etc/hostname.elxl0 so that:
#less hostname.elxl0
benchmark1

After these two files are created, the global zone is network enabled.

-----------------------------------------------------------------
set up a static IP for zone benchmark2 (non-global zone)

After logging into the global zone, doing #ifconfig -a shows:
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
elxl0: flags=1000843 mtu 1500 index 2
inet 192.168.1.23 netmask ffffff00 broadcast 192.168.1.255
ether 0:1:2:12:67:53

#zonecfg -z benchmark2

=>add net
==>set address=192.168.1.22
==>set physical=elxl0
...
and commit

After booting zone benchmark2, do #ifconfig -a again:
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
lo0:1: flags=2001000849 mtu 8232 index 1
zone benchmark2
inet 127.0.0.1 netmask ff000000
elxl0: flags=1000843 mtu 1500 index 2
inet 192.168.1.23 netmask ffffff00 broadcast 192.168.1.255
ether 0:1:2:12:67:53
elxl0:1: flags=1000843 mtu 1500 index 2
zone benchmark2
inet 192.168.1.22 netmask ffffff00 broadcast 192.168.1.255

This indicates that the non-global zone is network enabled.

-----------------------------------------------

JBOSS setup

Copy jboss from zpeter4:

Including:
(1)/opt/jboss-4.0.4.GA
(2)/var/jboss
(3)/opt/bin/myip (this is used in jboss's run.sh file)
(4)if the Java security manager is included in the Java parameters, make sure the corresponding policy file is also there

---------------------------------------------------


JBOSS user

(1)user jboss must be created
(2)group jboss must be created
(3)home directory of user jboss must be /var/jboss


----------------------------------------------------------

Setup SMF for jboss:

(1)export smf script from zpeter4
#svccfg export jboss >/tmp/jboss.xml

and copy this exported file to new zone benchmark2.

(2)Copy /opt/lib/svc/method/svc-jboss from zpeter4 to benchmark2 zone in the corresponding directory

(3)import the manifest file jboss.xml in the benchmark2 zone:
/usr/sbin/svccfg -v import /tmp/jboss.xml

(4)enable jboss
#svcadm enable jboss

(5)monitoring the status of jboss service
#svcs -xv jboss

------------------------------------------------

how to remove a service or edit its manifest file

svcadm -v disable svc:/network/jboss
svccfg -v delete svc:/network/jboss
svccfg -v import /var/svc/manifest/jboss.xml

----------------------------------------------------


How to run NFS file server and mount NFS client

Problem:

For the development environment, there are four JBoss servers. To enable the web log feature of the portal, all of these JBoss servers must send their log files to a central NFS server, on which the jbossportal server is running.

Parameters:
(1)JBOSS servers:
zpeter2, zpeter3, zpeter4 and benchmark2 (dell computer)

(2)The NFS file server is running on storm, on which the zroland3 zone is running. JBossPortal is running within the zroland3 zone.

Steps:
(1)edit /etc/dfs/dfstab on storm to include following lines:
share -F nfs -o rw -d "zpeter2:/var/jboss/log" /zones/zroland3/root/nfsd/zpeter2
share -F nfs -o rw -d "zpeter3:/var/jboss/log" /zones/zroland3/root/nfsd/zpeter3
share -F nfs -o rw -d "zpeter4:/var/jboss/log" /zones/zroland3/root/nfsd/zpeter4
share -F nfs -o rw -d "benchmark2:/var/jboss/log" /zones/zroland3/root/nfsd/benchmark2
(2)make sure the mode of /zones/zroland3/root/nfsd/zpeter2, /zones/zroland3/root/nfsd/zpeter3, /zones/zroland3/root/nfsd/zpeter4, and /zones/zroland3/root/nfsd/benchmark2 is 777, so that files can be written in these directories.

(3)restart NFS server

(4)on zone zpeter2, edit /etc/vfstab to include line:
storm:/zones/zroland3/root/nfsd/zpeter2 - /log nfs - yes rw
and do the same thing for zones zpeter3, zpeter4 and benchmark2

(5)restart these zones

Java VM Tuning - GC selection

GC collector selection:

(1)default collector (serial collector).
(2)throughput collector, which tries to maximize application CPU usage (i.e., minimize the CPU spent on GC). But it can introduce long pauses, so this collector cannot be used for an RTS game server.
(3)Concurrent Low Pause Collector. On a uniprocessor, this collector performs worse than the default (serial) collector. On a two-CPU machine, it will not pause the running application, but it consumes a large percentage of the CPU. With more CPUs (3 or 4), this collector is better than the default collector.

Currently, it is assumed that 2 processors machine will be used.
So, the default GC will be used.

The two main factors for JVM optimization are :

(1)Heap Size (-Xms and -Xmx)
(2)The second most influential knob is the proportion of the heap dedicated to the young generation. By default, the young-generation size is controlled by NewRatio. For example, setting -XX:NewRatio=3 means that the ratio between the young and tenured generations is 1:3; in other words, the combined size of the eden and survivor spaces will be one fourth of the total heap size. The bigger this ratio, the shorter each pause, but the more frequently the GC runs. This can be logged with the command-line options -verbose:gc -XX:+PrintGCDetails.

If this ratio is set to 15, you will see the GC run very frequently, with a shorter pause each time.

I have no idea what the default value of this ratio is.
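The arithmetic behind NewRatio can be sketched in a few lines; the heap size here is an arbitrary example:

```shell
# young : tenured = 1 : NewRatio, so young = heap / (NewRatio + 1)
heap_mb=512     # e.g. -Xmx512m (hypothetical)
newratio=3      # -XX:NewRatio=3
young_mb=$((heap_mb / (newratio + 1)))
echo "young generation: ${young_mb} MB"   # one fourth of the heap
```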

Total Heap Size:

Since collections occur when generations fill up, throughput is
inversely proportional to the amount of memory available. Total
available memory is the most important factor affecting garbage
collection performance.

Unless you are having problems with pauses, try granting as much memory as possible to the virtual machine. The default size (64MB) is often too small.

To make sure the pause is not too big, adjust the NewRatio accordingly.

Friday, April 13, 2007

T1/E1 jumper on TE11xP, TE2xxP, and TE4xxP

These cards have a physical jumper for selecting whether the ports are T1 (open) or E1 (closed).
But there is also a software override for the jumper in the wct4xxp and wct11xp drivers. (See http://kb.digium.com/entry/1/121/)

To set all spans to E1 mode, use:

insmod wct4xxp t1e1override=0xFF

To set all spans to T1 mode, use:

insmod wct4xxp t1e1override=0x00

An even easier way is to add this to your /etc/modprobe.d/zaptel file:

options wct4xxp t1e1override=0xFF

The argument is a bitmask, which can be used to set each span separately, if that is needed for some reason. Span 1 is 0x01, span 2 is 0x02, span 3 is 0x04, and span 4 is 0x08. For example, "t1e1override=0x0B" would set spans 1, 2, and 4 to E1 mode, and leave span 3 in T1 mode.
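The bitmask can be computed mechanically, since span N contributes bit 2^(N-1). A small sketch reproducing the example above:

```shell
# Spans 1, 2 and 4 in E1 mode -> bits 0x01 | 0x02 | 0x08 = 0x0B
mask=0
for span in 1 2 4; do
  mask=$((mask | (1 << (span - 1))))
done
printf 't1e1override=0x%02X\n' "$mask"
```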

custom log4j appender for jboss

It is very interesting to customize a log4j appender through which JBoss can send log messages to Google Talk IM.

To deploy the gtalklogger (written by myself) and its dependent JAR files, you cannot simply put them into the jboss../server/default/lib directory.

Instead, you need to create a separate directory and set it as the patch directory, which must be an absolute path or URL.

./run.sh -d absolute-path-to-the-boot-batch-directory


refer to the output of:
$./run.sh --help

Thursday, April 12, 2007

TE412P and centos 4.4

I am struggling with the TE412P and CentOS 4.4.

(1)libpri-1.2.2
(2)zaptel-1.2.3
(3)zaptel-1.2.3

These packages are a bit old because I am trying to duplicate an old system.


When I run /sbin/ztcfg, I always get the following error:
-----------------------
Notice: Configuration file is /etc/zaptel.conf
line 0: Unable to open master device '/dev/zap/ctl'

1 error(s) detected
-----------------------

I know it is related to udev in Linux kernel 2.6.*, and I have also read README.udev in the zaptel-1.2.3 directory.

The /dev/zap/... files should be created automatically at boot (that is the udev system's job), but they only get created if I run the following modprobe commands manually:

$modprobe zaptel
$modprobe wct4xxp

Only after these modules are loaded manually does the OS create the /dev/zap/xxxx files.

.... BAD experience.....

Saturday, April 7, 2007

asterisk <---IAX2---> asterisk

i have two asterisk Boxes:

mobile users<--ISDN/PRI->BOX A<--IAX2-->BOX B.

(1)CALLERID(dnid) issue
When mobile users dial a DDI number assigned to Box A, CALLERID(dnid) is available. But when Box A forwards the call to Box B, CALLERID(dnid) is empty within Box B's dialplan.

To work around this problem, I made an agreement between Box A and Box B: when Box A dials Box B, it specifies IAX2/peer/extensionnumber, where the extension number is the CALLERID(dnid) from Box A. And in the dialplan of Box B, add Set(CALLERID(dnid)=${EXTEN}) to reset the CALLERID(dnid).

(2)CALLERID(num) issue:
To send the caller ID to Box B, sendani=yes needs to be configured in Box A's peer configuration.

(3)Native transfer
To control the media path, notransfer=yes needs to be configured on Box B.

Thursday, April 5, 2007

IP traffic monitoring

I want to monitor the network usage between two servers. After researching, I recommend the following two tools:

(1)ntop
(2)iptraf

Sunday, April 1, 2007

Java and OpenVZ VPS

After installing an OpenVZ VPS by following its user guide, I could not even run $java -version within the VPS.

After investigating, I found that the default VPS configuration cannot be used to run Java applications.

(1)For example, VPS 101 is running the vps.basic configuration profile. I run the command $vzcalc 101, and the result shows that my hardware can support about 80 VPSes. But if these VPSes are used to run Java applications, that is obviously not realistic.

(2)So, I need to create/calculate the profile myself. Fortunately, OpenVZ has the command:
$vzsplit -n 10 -f vps.java
This means I want to create 10 VPSes for Java applications on this hardware node.

(3)Finally, I apply this new configuration profile to VPS 101:
$vzctl set 101 --applyconfig vps.java --save

(4)Log into VPS 101 again. Java now runs happily.

Saturday, March 31, 2007

update OpenVZ VPS

After searching, I found that there is no information about how to update an OpenVZ VPS. It is worth writing down for other people's reference.

Why do I need to update the VPS? Because I need to compile code in the VPS, and the prepackaged VPS template does not include development tools such as glibc-devel and gcc.

I have CentOS 4.4 CDs in hand, so the following is what I did.

My first VPS ID is 101.

At the very beginning, I tried root$vzyum 101 update. It just failed. I think it could be due to the slow internet connection, or the yum repository not being available.

Then I read the yum man page and found that it can also update or install local packages. So I did the following (after cd-ing to the directory where the CD is mounted):


$vzyum 101 update glibc-common-2.3.4-2.25.i386.rpm
$vzyum 101 install gcc

It works.

Friday, March 30, 2007

cd ripper

Cdparanoia is a Compact Disc Digital Audio (CDDA) extraction tool, commonly known on the net as a 'ripper'. The application is built on top of the Paranoia library, which does the real work (the Paranoia source is included in the cdparanoia source distribution). Like the original cdda2wav, the cdparanoia package reads audio from the CD-ROM directly as data, with no analog step between, and writes the data to a file or pipe as WAV, AIFC, or raw 16-bit linear PCM.

It only works on Linux.

Wednesday, March 28, 2007

connect openoffice to postgresql database

OpenOffice is useful for viewing a remote database server.

First, create a database role to only view the remote database:
(1)#create role peter login;
(2)#alter role peter with password '12345';
(3)#grant select on table tablename to peter;
(4)modify the pg_hba.conf file so that it includes the line below (this line must be positioned so that it is the first to match the connection request):
host dbname peter 192.168.1.0/24 password
(5)reload the configuration
(6)test with psql -U peter -h hostip -W dbname

Second, OpenOffice configuration:
(1)Make sure Java (http://java.sun.com) is installed in the computer
(2)Download & Install OpenOffice 2.x.
(3)Download jdbc driver for PostgreSQL. (http://jdbc.postgresql.org)
(4)Run OpenOffice. Navigate to Tools-->Options-->Java. Select Java and set the CLASSPATH so that it includes the downloaded JDBC driver file. Then click OK.
(5)Navigate to File-->New-->Database. Select "Connect to an existing database" via JDBC. Use jdbc:postgresql://databasehostip:5432/databasename as the database URL and org.postgresql.Driver as the JDBC driver class. Then type your database username and select "Password required".

That is all.