[JAVA] RMI Server/Client Implementation


At work I needed to run a Java service on Windows 2003 Server: a resident OS service that drives the back-end ClearCase and ClearQuest.
Ken suggested implementing this with RMI. Below is my RMI test before using it for real.

Environment:
=============================================
OS : Windows Vista Business
DEV platform: Eclipse (3.4) + net.genady.rmi.feature_2.1.0.v20081001 (the RMI plug-in)

Implementation:
=============================================
Step 1: Create a service interface that extends java.rmi.Remote. The code is as follows:
package com.cenoq.rmi.common;

import java.rmi.Remote;
import java.rmi.RemoteException;

public interface PrintService extends Remote {

public String simpleRemoteMethod(String arg) throws RemoteException;

}

Step 2: Implement the interface from Step 1 and extend java.rmi.server.UnicastRemoteObject. The code is as follows:
package com.cenoq.rmi.server;

import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

import com.cenoq.rmi.common.PrintService;

public class PrintServiceImpl extends UnicastRemoteObject implements PrintService {

// This constructor is required.
protected PrintServiceImpl() throws RemoteException {
super();
}


public String simpleRemoteMethod(String arg) throws RemoteException {
System.out.println("function called!!");
return "OK";
}

/**
* @param args
*/
public static void main(String[] args) {
try {
Registry r = LocateRegistry.getRegistry();
r.bind("PrintServer", new PrintServiceImpl());

} catch (Exception e) {
e.printStackTrace();
}
}
}

The service implementation needs a codebase pointing at the compiled classes; it can be set as follows:
file:/C:/Users/boyce/workspace/RMIServerDemo/bin/ <-- PrintServiceImpl
file:/C:/Users/boyce/workspace/RMICommonDemo/bin/ <-- PrintService (interface)

Step 3: Create a client that calls the RMI service.
package com.cenoq.rmi.client;

import java.net.MalformedURLException;
import java.rmi.Naming;
import java.rmi.NotBoundException;
import java.rmi.RMISecurityManager;
import java.rmi.RemoteException;

import com.cenoq.rmi.common.PrintService;

public class RmiClient {

public static void main(String[] args) {
System.setSecurityManager(new RMISecurityManager());
try {

//Note: "PrintServer" must match the name bound in the RMI registry
PrintService ps = (PrintService) Naming.lookup("rmi://localhost/PrintServer");
String rtn = ps.simpleRemoteMethod("abc");
System.out.println(rtn);
} catch (MalformedURLException e) {
e.printStackTrace();
} catch (RemoteException e) {
e.printStackTrace();
} catch (NotBoundException e) {
e.printStackTrace();
}
}
}

The client also needs a permission file (java.security.policy) to authorize access; the file (security.policy) looks like this:
// This file was generated by the RMI Plugin for Eclipse.

///////////////////////////////////////////////////////////////
// This is a sample policy file that grants the application all permissions.
// A policy file is needed by the RMISecurityManager and your application might
// not work after installing the RMISecurityManager unless you provide a
// security policy file at launch.
//
// You can configure the security policy of a launched application using either
// the RMI Launcher or by manually setting the java.security.policy property.
//
// SECURITY NOTE: This security policy is good for development. For deployment
// you may need a stricter security policy.
//
// For more information see:
// http://java.sun.com/docs/books/tutorial/rmi/running.html
// http://java.sun.com/j2se/1.5.0/docs/guide/security/PolicyFiles.html
//

grant {
permission java.security.AllPermission;

// Other options:
// permission java.net.SocketPermission "127.0.0.1:1024-", "accept, connect, listen, resolve";
// permission java.net.SocketPermission "localhost:1024-", "accept, connect, listen, resolve";

// From http://java.sun.com/docs/books/tutorial/rmi/running.html
// Copyright 1995-2005 Sun Microsystems, Inc. Reprinted with permission

// permission java.net.SocketPermission "*:1024-65535", "connect,accept";
// permission java.net.SocketPermission "*:80", "connect";

// permission java.net.SocketPermission "*:1024-65535", "connect,accept";
// permission java.io.FilePermission "c:\\home\\ann\\public_html\\classes\\-", "read";
// permission java.io.FilePermission "c:\\home\\jones\\public_html\\classes\\-", "read";
};
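Outside Eclipse (where the RMI plug-in handles launching), the pieces can also be started by hand. A minimal sketch, assuming a Windows command prompt, a hypothetical RMIClientDemo project for the client, and that security.policy sits in the working directory:

cd C:\Users\boyce\workspace
set CLASSPATH=RMIServerDemo\bin;RMICommonDemo\bin
start rmiregistry
java -cp "RMIServerDemo\bin;RMICommonDemo\bin" -Djava.rmi.server.codebase="file:/C:/Users/boyce/workspace/RMIServerDemo/bin/ file:/C:/Users/boyce/workspace/RMICommonDemo/bin/" com.cenoq.rmi.server.PrintServiceImpl
java -cp "RMIClientDemo\bin;RMICommonDemo\bin" -Djava.security.policy=security.policy com.cenoq.rmi.client.RmiClient

Note that LocateRegistry.getRegistry() only looks up an already running registry, so rmiregistry must be started before the server binds; and since the exported object keeps the server JVM alive, run the client from a second console.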


[JAVA] WebService - Using IIS with Tomcat

Yesterday I talked with Eric and thought FileNet would be worth playing with. FileNet is a content management product that also provides workflow to meet enterprise needs.

So today, with yesterday's architecture (IIS + WAS + BPM) in mind, I decided to try IIS + Tomcat first.

Environment:

==================================================================

OS: Windows XP Pro

Http Server : IIS 5.1

Web Application Server : Tomcat 5.5.25


Preparation (I won't cover installation; see the reference material on each site):

==========================================

1.download JDK 5

2.download tomcat

3.download isapi_redirect-1.2.27.dll

4.install IIS


configuration:

===============================================

1. Tomcat_Home/conf/workers.properties

worker.list=ajp13
worker.loadbalancer.type=lb
worker.ajp13.port=8009
worker.ajp13.host=localhost
worker.ajp13.type=ajp13
worker.ajp13.lbfactor=1

2.Tomcat_Home/conf/uriworkermap.properties

/jsp-examples/*=ajp13

3. config windows registry (regedit)

3.1 Create the key HKEY_LOCAL_MACHINE\SOFTWARE\Apache Software Foundation\Jakarta Isapi Redirector\1.0

3.2 Add the following five string values:

String name         String value                                Description
extension_uri       /tomcat/isapi_redirect-1.2.27.dll           URI used to reach isapi_redirect.dll; a virtual directory named tomcat will be created in IIS whose path contains isapi_redirect.dll
log_file            c:\tomcat5\logs\jk_iis.log                  log file name
log_level           debug                                       log level
worker_file         c:\tomcat5\conf\workers.properties          path to workers.properties
worker_mount_file   c:\tomcat5\conf\uriworkermap.properties     path to uriworkermap.properties
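The same five values can also be created from a command prompt with reg.exe instead of regedit; a sketch using the paths from the table above (adjust them to your own install locations):

reg add "HKLM\SOFTWARE\Apache Software Foundation\Jakarta Isapi Redirector\1.0" /v extension_uri /t REG_SZ /d "/tomcat/isapi_redirect-1.2.27.dll"
reg add "HKLM\SOFTWARE\Apache Software Foundation\Jakarta Isapi Redirector\1.0" /v log_file /t REG_SZ /d "c:\tomcat5\logs\jk_iis.log"
reg add "HKLM\SOFTWARE\Apache Software Foundation\Jakarta Isapi Redirector\1.0" /v log_level /t REG_SZ /d "debug"
reg add "HKLM\SOFTWARE\Apache Software Foundation\Jakarta Isapi Redirector\1.0" /v worker_file /t REG_SZ /d "c:\tomcat5\conf\workers.properties"
reg add "HKLM\SOFTWARE\Apache Software Foundation\Jakarta Isapi Redirector\1.0" /v worker_mount_file /t REG_SZ /d "c:\tomcat5\conf\uriworkermap.properties"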

4.config IIS

a. Under the Default Web Site, add a virtual directory named tomcat; point its path to the folder containing isapi_redirect.dll and grant it "Scripts and Executables" permission.

b. Right-click the Default Web Site, choose Properties, and add a new ISAPI filter whose executable points to isapi_redirect.dll.

5. Notes:

After everything is configured, remember to reboot. Then stop IIS first, start Tomcat, and finally start IIS again, and you are done.

[JAVA] Cluster for GlassFish

You need two servers, each with GlassFish installed. If you only have one server, you can use Solaris zones to virtualize two hosts.
The following example shows how to build a GlassFish cluster.
Server1(10.0.0.1);Server2(10.0.0.2)
1. Edit /etc/hosts on both Server1 and Server2:
#vi /etc/hosts
10.0.0.1 server1-hostname
10.0.0.2 server2-hostname
2. Install GlassFish on both machines.
(1) Download GlassFish from http://glassfish.dev.java.net/
(2) Install GlassFish:
#java -Xmx256m -jar filename.jar
#cd glassfish
#ant -f setup-cluster.xml
3. Start GlassFish domain1 on Server1:
#cd /glassfish/bin
#./asadmin
Use "exit" to exit and "help" for online help.
asadmin> start-domain

(You can then open http://10.0.0.1:4848 in a browser to reach the admin console.)
asadmin> create-node-agent gf1
asadmin> start-node-agent gf1
Please enter the master password [Enter to accept the default]:>
Redirecting output to /glassfish/nodeagents/gf1/agent/logs/server.log
Redirecting application output to /glassfish/nodeagents/gf1/agent/logs/server.log
Command start-node-agent executed successfully.
4. On Server2:
#cd /glassfish/bin
#./asadmin
Use "exit" to exit and "help" for online help.
asadmin> create-node-agent --host 10.0.0.1 --port 4848 gf2
asadmin> start-node-agent gf2
5. In the left pane of the admin console you should see two nodes, gf1 and gf2, under "Node Agents".
6. In the GUI, select "Clusters" and click "New" in the right pane. Create one instance for each node, then click OK. Once the cluster has been created, check the box in front of it and restart the cluster; all the services should then start normally.
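For reference, step 6 can also be done from the command line with asadmin; a rough sketch (the cluster and instance names are just examples):

asadmin> create-cluster cluster1
asadmin> create-instance --cluster cluster1 --nodeagent gf1 instance1
asadmin> create-instance --cluster cluster1 --nodeagent gf2 instance2
asadmin> start-cluster cluster1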

GlassFish clustering uses in-memory replication to automatically copy user sessions to another machine, which provides high availability while limiting the overhead of excessive network replication, improving performance.

[Android] Sharing the phone's 3.5G network with a laptop (Ubuntu 8.10 x64)

1. Systems:

a. Ubuntu 8.10 X64

b. HTC Magic (1.5)


2. Installation:

a. Install the Android SDK (on Ubuntu).

b. Install the Java SDK (on Ubuntu).

c. Download OpenVPN (2.1) and install it on Ubuntu.

d. Download azilink from http://code.google.com/p/azilink/ ; two files are needed (azilink.apk and AziLink.ovpn).

azilink.apk can be downloaded and installed directly by opening http://lfx.org/azilink/azilink.apk in the phone's browser (note: since it is not a Market application, you must first enable "Settings" -> "Applications" -> "Unknown sources" on the phone).

Put AziLink.ovpn into the OpenVPN folder on Ubuntu (the location where it was unpacked in step 2).

e. Set up the Android USB driver; see Google's instructions at http://developer.android.com/guide/developing/device.html


3. Configuration:

1) On Ubuntu, cd to %Android_install_path%/tools/ and run ./adb forward tcp:41927 tcp:41927 ; this sets up the port forwarding.

2) On the phone, run AziLink and check "Service active".

3) On Ubuntu, set the nameserver (sudo vi /etc/resolv.conf, add the line "nameserver 192.168.56.1", save and close).

4) Finally, on Ubuntu, cd to %openvpn% and run ./openvpn --config ./AziLink.ovpn


Afterwards, whenever you need to share the connection, just repeat the configuration steps above.

[Android] How to publish your own application to a phone

1. In Eclipse, right-click the project to export, choose Android Tools, then Export Unsigned Application Package.
2. Open a command line and cd to where the APK was exported in step 1.
3. On the command line, run: keytool -genkey -v -keystore android.keystore -alias android.keystore -keyalg RSA -validity 20000
4. On the command line, run: jarsigner -verbose -keystore android.keystore -signedjar xxx_signed.apk xxx.apk android.keystore
5. Put the newly generated .apk (xxx_signed.apk) on an Apache server.
6. Open the browser on the phone, connect to that HTTP server, and download and install the APK directly.
7. If the install is rejected, "Unknown sources" is probably disabled on the phone; go to "Menu" -> "Settings" -> "Applications" and check "Unknown sources", after which the install will succeed.
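Optionally, before putting the APK on the web server you can sanity-check the signature and test-install it over USB; a quick sketch (xxx_signed.apk is the file produced in step 4):

jarsigner -verify -verbose -certs xxx_signed.apk
adb install xxx_signed.apk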

[Rational] cqload

cqload exportintegration:

[ -dbset name ] login password schema_name begin_rev end_rev [record_type_to_rename ] "schema_pathname"

  • Export only changes made in version five of the Enterprise schema.

    cqload exportintegration -dbset 2003.06.00 admin "" Enterprise 5 5 "" "c:\tmp\export.txt"

  • Export changes made in versions five through eight of the Enterprise schema.

    cqload exportintegration -dbset 2003.06.00 admin "" Enterprise 5 8 "" "c:\tmp\export.txt"

  • Export only changes made in version five of the Enterprise schema and specify that the record type ChangeRequest is to be renamed on import. You must specify a new name for the record in the new_record_type_name option of the subsequent cqload importintegration.

    cqload exportintegration -dbset 2003.06.00 admin "" Enterprise 5 5 ChangeRequest "c:\temp\scriptchanges.txt"

cqload importintegration:

[ -dbset name ] login password schema_name [new_record_type_name ] "integration name" integration_version "schema_pathname" "form_name"

  • Import the exported schema into the Testit schema, leaving all record types with their original names.

    cqload importintegration -dbset 2003.06.00 admin "" Testit "" Email_Integ 1 "c:\tmp\export.txt" ""

  • Import the exported schema into the Testit schema, renaming the record type specified in the record_type_to_rename option of the previous cqload exportintegration command to Defect.

    cqload importintegration -dbset 2003.06.00 admin "" Testit Defect Email_Integ 1 "c:\tmp\export.txt" ""

[Moodle] CentOS + MySQL + Apache + PHP + DDNS

Putting together a Moodle installation.

system requirement:

=======================================================

OS : CentOS 5.2 x86_64

HTTP Server : Apache2

Program Lang : PHP

DB : MySQL 5

Application : Moodle 1.9.3+

DNS : DDNS bind9


Installation:

=======================================================

1. Install CentOS 5.2 in VMware.

2. During installation, also install MySQL 5, httpd, PHP 5 and gcc.

3. Download Moodle 1.9.3+.

4. Install OpenLDAP; refer to the official site.


[SAP] PI 7.1 (Process Integration) overview

1. First, you need to understand the SOA concepts that PI builds on:

2. The four main uses of PI:

2.1 SAP / non-SAP integration

2.2 A2A or B2B integration

2.3 Synchronous and asynchronous message exchange

2.4 BPM (business process management between components): business data may be scattered across many systems; PI interfaces collect it all, and PI-defined composition and transformation rules assemble the information other systems need and pass it on to them.

3. In the overall SOA architecture, NetWeaver is composed of two main products: CE (Composition Environment) and PI (Process Integration); CE integrates processes, information and UI composites.

4. Some characteristics of PI:

4.1 ES Repository (Enterprise Service Repository): contains the design-time ES Repository and the UDDI Services Registry.

4.2 Performance is improved from 7.1 on, especially for high-volume messaging, meaning many messages can be packed together and sent in a single call.

4.3 For authorization, the open SAML standard is used, so a user's principal and credentials can be propagated between two systems (credential propagation) -- X.509 certificates.

4.4 XML validation is supported.

4.5 Asynchronous messaging is supported per the Web Services Reliable Messaging (WS-RM) standard.

4.6 BPM:

4.6.1 Improved Process Engine performance through message packaging, process queuing and transactional handling.

4.6.2 WS-BPEL 2.0 preview

4.6.3 Further enhancements: modeling enhancements such as step groups and BAM patterns; configurable parameters; embedded alert management (alert categories within the BPEL process definition); human interaction (generic user decision); task and workflow services for S2H scenarios (aligned with BPEL4People).
4.7 NetWeaver PI 7.1 provides many new functions, all built on a Java EE 5 base.

4.8 The process integration capabilities within SAP NetWeaver offer the most common Enterprise Service Bus (ESB) components like
4.8.1 Communication infrastructure (messaging and connectivity)
4.8.2 Request routing and version resolution
4.8.3 Transformation and mapping
4.8.4 Service orchestration
4.8.5 Process and transaction management
4.8.6 Security
4.8.7 Quality of service
4.8.8 Services registry and metadata management
4.8.9 Monitoring and management
4.8.10 Support of Standards (WS RM, WS Security, SAML, BPEL, UDDI, etc.)
4.8.11 Distributed deployment and execution
4.8.x SAP does not package the ESB components as a single product; they are offered as a collection of functionality, so users can assemble their own ESB from the capabilities they want.

4.9 WS-Policy and WS-PolicyAttachment are both supported.

5. Next, each of the terms above is introduced:

5.1 ES Repository and Registry:

The ES Repository and the Registry are the centrally governed storage for services.

5.1.1 ES Repository: stores process and service definitions plus service metadata, and provides a unified modeling and design environment.

5.1.2 Services Registry: like a yellow-pages directory, it contains the registered services and also holds deployment information and services management data.

5.1.3 Design-time Repository: a central modeling and design environment with tools and editors that take you through the process of service definition.

5.1.4 Usage scenarios:

[MySQL] Using MySQL Proxy for read/write splitting

Environment:

1. OpenSolaris (replication master): mysql (5.1) and mysql-proxy (0.6.1) -- 172.16.1.31

2. WinXP (replication slave): mysql (5.1) -- 172.16.1.35


Step 1: Set up the replication environment.

1) On the master:

a) Edit my.ini: make sure [mysqld] contains log-bin=mysql-bin and server-id=1; if you use InnoDB, also confirm the two parameters innodb_flush_log_at_trx_commit=1 and sync_binlog=1.

Also, skip-networking must not be enabled, or binlog replication will fail.

b) Create the user for replication: mysql > grant replication slave on *.* to 'repl'@'%' identified by '1234';

2) On the slave, edit my.ini:

Make sure [mysqld] contains server-id=2.

3) Check the replication information on the master:

a) First lock the tables: MySQL > flush tables with read lock;

b) Then get the master's status: MySQL > show master status; <--- note down the file and position

4) Dump the data from the master.

a) Using mysqldump: shell > mysqldump --all-databases --master-data > dbdump.db

b) For MyISAM tables you can simply copy *.frm, *.myi and *.myd from the data directory.

5) Next, run the following statements on the slave server:

a) mysql > change master to

-> master_host='172.16.1.31',

-> master_user='repl', <-- the replication account created in step 1

-> master_password='1234', <-- the replication password created in step 1

-> master_log_file='mysql-bin.000020', <-- the file noted from the master in step 3

-> master_log_pos=106; <-- the position noted from the master in step 3

** If you get ERROR 1198 (HY000): This operation cannot be performed with a running slave; run STOP SLAVE first, stop the slave and then re-run the CHANGE MASTER statement.

b) mysql > start slave;
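Before moving on, it is worth confirming that replication is actually running (and remember to release the read lock taken in step 3 on the master with UNLOCK TABLES). A quick check on the slave:

mysql > show slave status\G

Slave_IO_Running and Slave_SQL_Running should both show Yes.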


Step 2: Use MySQL Proxy.

1. Installation:

a) Requirements: libevent 1.x or higher

lua 5.1.x or higher

glib2 2.6.0 or higher

pkg-config

MySQL 5.0.x or higher

MySQL Proxy 0.6.1

Installing the libraries above on OpenSolaris is a bit of a hassle; glib2 can be downloaded from its official site and built from source, while the rest can be installed from the OpenSolaris package repository.
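The post stops before actually starting the proxy, so here is a rough sketch of how mysql-proxy 0.6.1 might be launched for read/write splitting with the bundled rw-splitting.lua script (the listen port and the script path are assumptions; adjust them to where the script was installed):

mysql-proxy \
  --proxy-address=:4040 \
  --proxy-backend-addresses=172.16.1.31:3306 \
  --proxy-read-only-backend-addresses=172.16.1.35:3306 \
  --proxy-lua-script=/usr/local/share/mysql-proxy/rw-splitting.lua

Applications then connect to port 4040 on the proxy host; writes go to the master while reads can be spread to the read-only backend.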

[MySQL] - Implementing MySQL Cluster on Zones

Previously I deployed MySQL Cluster on physical nodes; today I am deploying MySQL in Solaris zones.
1. Create the zones
Since I only want to verify functionality, I created 5 zones on a single machine:
1 zone as the management node
2 zones as MySQL API (SQL) nodes
2 zones as data nodes
First create zone1, then clone the other four zones.
mkdir /export/home/zones/zone1
mkdir /export/home/zones/zone2
mkdir /export/home/zones/zone3
mkdir /export/home/zones/zone4
mkdir /export/home/zones/zone5
chmod 700 /export/home/zones/zone1
chmod 700 /export/home/zones/zone2
chmod 700 /export/home/zones/zone3
chmod 700 /export/home/zones/zone4
chmod 700 /export/home/zones/zone5
zonecfg -z zone1
zonecfg:zone1> set zonepath=/export/home/zones/zone1
zonecfg:zone1> set autoboot=true
zonecfg:zone1> add net
zonecfg:zone1:net> set physical=e1000g0
zonecfg:zone1:net> set address=10.0.0.1/24
zonecfg:zone1>info
zonecfg:zone1>verify
zonecfg:zone1> commit
zonecfg:zone1>exit
Install zone1:
zoneadm -z zone1 install
zoneadm -z zone1 boot
zlogin -C zone1
Then go through zone1's initial system configuration.

Then install MySQL in zone1.
1) Download Web Stack 1.4:
http://www.opensolaris.org/os/project/webstack/
2) Unpack and install it, for example:
gunzip webstack-native-1.4-b06-solaris-i586.tar.gz
tar -xvf webstack-native-1.4-b06-solaris-i586.tar
pkgadd -d ./ sun-mysql50.pkg
3)vi /etc/my.cnf
[mysqld]
port=3306
socket=/tmp/mysql.sock
basedir=/opt/webstack/mysql
datadir=/disk2/zone1
4) chown -R mysql:mysql /opt/webstack/mysql
/opt/webstack/mysql/bin/mysql_install_db
chown -R mysql:mysql /disk2/zone1
/opt/webstack/mysql/bin/mysqld_safe &

Check that MySQL started correctly:
ps -ef | grep mysql

Clone zone2, zone3, zone4 and zone5.
(1) Stop the zone that will be used as the clone template (e.g. zone1):
#zoneadm -z zone1 halt
(2) Export zone1's configuration to a file:
mkdir /test
cd /test
mkfile -nv 100M file1
zonecfg -z zone1 export -f /test/file1
(3) Edit /test/file1:
#vi /test/file1
zonepath=/export/home/zones/zone2
set pool=pool2
add net
set address=10.0.0.2/24
set physical=e1000g0
end

Adjust the configuration to your own environment. (If you can give every NIC (e1000g0, e1000g1, e1000g2, e1000g3) its own IP, you can bind different zones to different physical NICs; a dedicated physical NIC gives a zone more network bandwidth and better network performance.) Just make sure the key settings (zonepath, IP, special=/disk1/cache/zone1-cache, and so on) do not collide with those of another zone.

(4) Create zone2 from the commands in /test/file1:
zonecfg -z zone2 -f /test/file1
(5) Install zone2 as a clone of zone1:
zoneadm -z zone2 clone zone1
(6) zoneadm -z zone2 boot
(7) Log in to zone2:
zlogin -C zone2

Clone the remaining zones in the same way; a scripted sketch follows below.
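Since the edit-create-clone-boot steps repeat for each remaining zone, they can be scripted. A sketch, assuming zoneN gets the IP 10.0.0.N and that /test/file1 currently describes zone2 (pool assignments, if used, need the same treatment):

for i in 3 4 5; do
  sed -e "s/zone2/zone$i/g" -e "s/10\.0\.0\.2/10.0.0.$i/g" /test/file1 > /test/file$i
  zonecfg -z zone$i -f /test/file$i
  zoneadm -z zone$i clone zone1
  zoneadm -z zone$i boot
done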

Then edit /etc/hosts in each zone and add every zone's IP address:
10.0.0.1 zone1
10.0.0.2 zone2
10.0.0.3 zone3
10.0.0.4 zone4
10.0.0.5 zone5
Then adjust the parameters in /etc/my.cnf to match each zone, e.g. datadir=/disk2/zone1.
Check that MySQL starts correctly in every zone.

2. Build the MySQL cluster
zone1 (10.0.0.1) is the management node
zone2 (10.0.0.2) and zone5 (10.0.0.5) are SQL nodes
zone3 (10.0.0.3) and zone4 (10.0.0.4) are data (storage) nodes
1) Set up the management node (zone1):
# vi /etc/config.ini
[NDBD DEFAULT]
NoOfReplicas=2
[TCP DEFAULT]
portnumber=3306
[NDB_MGMD]
hostname=zone1
datadir=/disk2/zone1
[NDBD]
hostname=zone3
datadir=/disk2/zone3
[NDBD]
hostname=zone4
datadir=/disk2/zone4
[MYSQLD]
hostname=zone2
[MYSQLD]
hostname=zone5
2) Configure the SQL nodes (zone2, zone5):
# vi /etc/my.cnf
[mysqld]
user=mysql
port=3306
socket=/tmp/mysql.sock
basedir=/opt/webstack/mysql
datadir=/disk2/zone2
ndbcluster
ndb-connectstring=zone1
[MYSQL_CLUSTER]
ndb-connectstring=zone1
ndb-mgmd-host=zone1
Note: on zone5, change datadir accordingly.
3) Configure the storage nodes (NDB nodes, zone3 and zone4):
# vi /etc/my.cnf
[mysqld]
port=3306
socket=/tmp/mysql.sock
basedir=/opt/webstack/mysql
datadir=/disk2/zone3
ndbcluster
ndb-connectstring=zone1
[MYSQL_CLUSTER]
ndb-connectstring=zone1
ndb-mgmd-host=zone1
Note: on zone4, change datadir accordingly.
4) Start the MySQL Cluster
A sensible startup order is: start the management node first, then the storage nodes, and finally the SQL nodes.

On the management node, run the following command to start the MGM node process:
# /opt/webstack/mysql/bin/ndb_mgmd -f /etc/config.ini
You must use the -f or --config-file option to tell ndb_mgmd where the configuration file is; by default it looks in the same directory as the ndb_mgmd binary.

On each storage node, if this is the first time the ndbd process is started, you must first run:
# /opt/webstack/mysql/bin/ndbd --initial
Note that --initial should only be used the first time ndbd is started, or when restarting after a backup/restore or a configuration-file change, because it makes the node delete any recovery files created by earlier ndbd instances, including the recovery log files.
If it is not the first start, just run:
# /opt/webstack/mysql/bin/ndbd

Finally, start the SQL node servers with:
#/opt/webstack/mysql/bin/mysqld_safe &

If everything went smoothly, i.e. no error messages appeared during startup, run the following on the management node:
# /opt/webstack/mysql/bin/ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @10.0.0.3 (Version: 5.0.67, Nodegroup: 0, Master)
id=3 @10.0.0.4 (Version: 5.0.67, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.0.0.1 (Version: 5.0.67)

[mysqld(API)] 2 node(s)
id=4 @10.0.0.2 (Version: 5.0.67)
id=5 @10.0.0.5 (Version: 5.0.67)
The exact output may differ slightly depending on the MySQL version you use.
5) Create database tables
Working with data inside MySQL Cluster is not very different from MySQL without Cluster. Keep two points in mind when doing so:
(1) Tables must be created with the ENGINE=NDB or ENGINE=NDBCLUSTER option, or changed with ALTER TABLE, so that the NDB Cluster storage engine replicates them within the cluster. If you import tables from an existing database using mysqldump output, open the SQL script in a text editor and add this option to every table creation statement, or replace any existing ENGINE (or TYPE) option with one of them.
(2) Also remember that every NDB table must have a primary key. If the user does not define one when creating the table, the NDB Cluster storage engine generates a hidden primary key automatically. (Note: this hidden key takes space just like any other table index, and it is not unusual to run into problems because there is not enough memory for these automatically created keys.)

Here is an example.
On zone2, create a table and insert data:
# mysql
mysql> create database testdb;
mysql> use testdb;
mysql> create table city(
mysql> id mediumint unsigned not null auto_increment primary key,
mysql> name varchar(20) not null default ''
mysql> ) engine = ndbcluster default charset utf8;
mysql> insert into city values(1, 'city1');
mysql> insert into city values(2, 'city2');

On zone5, query the data:
# mysql
mysql> create database testdb;
mysql> use testdb;
mysql> select * from city;
+----+-------+
| id | name  |
+----+-------+
|  1 | city1 |
|  2 | city2 |
+----+-------+

Check the MySQL cluster's failover.
Stop data node zone3; on the management node you can see zone4 taking over:
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from zone3)
id=3 @10.0.0.4 (Version: 5.0.67, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.0.0.1 (Version: 5.0.67)

[mysqld(API)] 2 node(s)
id=4 @10.0.0.2 (Version: 5.0.67)
id=5 @10.0.0.5 (Version: 5.0.67)

Then I stop zone5 as well, and zone2 is still serving:
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 (not connected, accepting connect from zone3)
id=3 @10.0.0.4 (Version: 5.0.67, Nodegroup: 0, Master)

[ndb_mgmd(MGM)] 1 node(s)
id=1 @10.0.0.1 (Version: 5.0.67)

[mysqld(API)] 2 node(s)
id=4 @10.0.0.2 (Version: 5.0.67)
id=5 (not connected, accepting connect from zone5)

So the configuration is correct.

6) Safe shutdown
To shut down the cluster, enter the following command on the machine where the MGM node runs:
# /opt/webstack/mysql/bin/ndb_mgm -e shutdown
On each SQL node, run the following to stop the SQL node's mysqld service:
#/opt/webstack/mysql/bin/mysqladmin -u root shutdown

[MySQL] - backup and restore

Single User Mode: connections to all other API nodes are closed gracefully and all running transactions are aborted. No new transactions are permitted to start.
ndb_mgm> ENTER SINGLE USER MODE {node ID}
ndb_mgm> EXIT SINGLE USER MODE

cluster backup:
1. A backup consists of three main parts:
a) Metadata (BACKUP-backup_id.node_id.ctl): the names and definitions of all database tables.
b) Table records (BACKUP-backup_id-0.node_id.data): the actual data of all tables.
c) Transaction log (BACKUP-backup_id.node_id.log): a sequential record telling how and when data was stored in the database.

Cluster backups are created by default in the BACKUP subdirectory of the DataDir on each data node.

2. Procedure:
a) Start ndb_mgm.
b) mgm> START BACKUP [NOWAIT | WAIT STARTED | WAIT COMPLETED]
c) Check %mysql_homedir%/data/BACKUP/
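The backup can also be triggered non-interactively, which is convenient from cron; for example:

ndb_mgm -e "START BACKUP WAIT COMPLETED"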

cluster restore:
1. ndb_restore must be run once for each set of backup files produced (the three files: .ctl, .data, .log), i.e. once per data node.
2. Before restoring, put the cluster in single user mode.
3.ndb_restore syntax:
ndb_restore [-c connectstring] -n node_id [-s] [-m] -b backup_id -r [backup_path=]/path/to/backup/files [-e]
-n : specifies the node ID of the data node the backup was taken from.
-m : restores the metadata (re-creates the database tables); this needs an empty database to work on (start the data nodes with ndbd --initial).
-s (skip-table-check) : ignores errors where the data does not match the table schema.
-a : data backed up from a column of a given type can generally be restored to a column using a "larger, similar" type.
-b : specifies the ID or sequence number of the backup (the same ID used when the backup was taken).
-e :adds (or restores) epoch information to the cluster replication status table. This is useful for starting replication on a MySQL Cluster replication slave.
4. Keep an empty [api] or [mysqld] section in config.ini; it is reserved for the cluster API node that connects while single user mode is in effect.
5. From MySQL 5.1.18 on, ndb_restore can target specific databases or tables; the syntax is as follows (a worked example is given at the end of this section):
ndb_restore other_options db_name_1 [db_name_2[, db_name_3][, ...] | tbl_name_1[, tbl_name_2][, ...]]
6.configuration for cluster backup:
a)BackupDataBufferSize: The amount of memory used to buffer data before it is written to disk.
b)BackupLogBufferSize: The amount of memory used to buffer log records before these are written to disk.
c)BackupMemory: The total memory allocated in a database node for backups. This should be the sum of the memory allocated for the backup data buffer and the backup log buffer.
d)BackupWriteSize: The default size of blocks written to disk. This applies for both the backup data buffer and the backup log buffer.
e)BackupMaxWriteSize: The maximum size of blocks written to disk. This applies for both the backup data buffer and the backup log buffer.
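These parameters go in the [ndbd default] section of the cluster's config.ini on the management node; a hedged example (the values are only illustrative and should be tuned to your data volume, with BackupMemory equal to the sum of the two buffers as noted above):

[ndbd default]
BackupDataBufferSize=2M
BackupLogBufferSize=4M
BackupMemory=6M
BackupWriteSize=32K
BackupMaxWriteSize=256K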

Notes when restoring:
There is Bug #25918: ndb_restore fails when restoring a backup of a disk-data cluster.
To work around it, on each data node first move everything under %mysqldir%/data/ndb_xx_fs/ somewhere else (mv), restart the data node with ndbd --initial, and then run ndb_restore.
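Putting the options together for a two-data-node cluster like the one in the earlier zone post, a restore run might look roughly like this (the node IDs, backup ID and backup paths are assumptions; -m is given only for the first node so the metadata is created once, and the cluster should be in single user mode as recommended above):

ndb_restore -c zone1 -n 2 -b 1 -m -r /disk2/zone3/BACKUP/BACKUP-1
ndb_restore -c zone1 -n 3 -b 1 -r /disk2/zone4/BACKUP/BACKUP-1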

[MySQL] - cluster setting....

MySQL Cluster setup is mainly about configuring three kinds of nodes (data node, SQL node, management node); the storage engine used is ndbcluster.

The machines used this time:

1. 172.16.1.136: Apache + Moodle + SQL node + MGM node (1 CPU, 768 MB RAM)

2. 172.16.1.48: data node (1 CPU, 768 MB RAM)

3. 172.16.1.98: data node (1 CPU, 768 MB RAM)

The software versions used:

OS : opensolaris 2008.05

MySQL:

Apache: 2.2

Moodle: 1.9.7

This post is mainly about installing and configuring MySQL Cluster, so please follow the respective documentation for Apache and Moodle; only MySQL is covered here.

First, the MySQL installation.



To be continued...

When I was about to continue, I found an article online...

You can follow this blog post for the implementation:

http://blogs.sun.com/hasham/entry/setting_up_mysql_cluster_using

[JBoss] - Binding Manager setting

Sometimes it is useful to run more than one instance of JBoss on the same server. Other instances can be set up for development, testing or for quality assurance, etc.

ConfiguringMultipleJBossInstancesOnOneMachine on the JBoss wiki has the basic info but does not seem to be up to date for the newer servers (4.2.x+).

Before setting up another instance, other outside resources may also have to be 'cloned' if it is possible that they may conflict, such as setting up another instance of a database with a separate connection URL and matching datasource for JBoss.

1. Copy %Jboss_home%\docs\sample\doc\binding-manager\sample-bindings.xml to %Jboss_home%\server\ and rename it to port-bindings.xml.

2. Edit %Jboss_home%\server\xxx\conf\jboss-service.xml and remove the comments to enable the "ServiceBindingManager" mbean.

Also, under "Socket transport Connector", in the "Configuration" section, serverBindPort must be changed to another value or it will conflict with the default (4446).

3. Edit %Jboss_home%\server\xxx\deploy\ejb3.deployer\META-INF\jboss-service.xml; for the remoting.transport.Connector mbean, port 3873 must be changed to another value or it will conflict with the default.


If you run multiple nodes, each node needs the changes from steps 2 and 3.

Separate instances are run with "run -c node1" etc
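For example, after cloning the default configuration and making the edits above in each copy, two instances can be started side by side on Windows; a sketch (the configuration names are examples):

cd %JBOSS_HOME%
xcopy /e /i server\default server\node1
xcopy /e /i server\default server\node2
cd bin
start run.bat -c node1
start run.bat -c node2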


[JBoss] - HermesJMS

OS: winXP

APP: JBoss 4.2.2GA

For other MQ setups see http://www.hermesjms.com/confluence/display/HJMS/Home , which explains things in much more detail.

1. Add a ClassPath Group: click "Configuration", select the "providers" tab, add a group (e.g. JBoss422), then add the corresponding jar files:

a) client jar (JBOSS_HOME/client/) : jboss-client.jar;jbossall-client.jar;jbossmq-client.jar;jmx-invoker-adaptor-client.jar;jnp-client.jar

b) server lib(JBOSS_HOME/lib) : concurrent.jar; jboss-jmx.jar

c) After adding the files you will be asked whether to scan them; since JNDI is used, choose not to scan.

2. Create a new JNDI InitialContext: give it a name (e.g. JBoss) and fill in the following fields:

loader : choose the group created in step 1 (e.g. JBoss422)

providerURL : jnp://localhost:1199 (the address and port of the JMS server)

initialContextFactory : org.jnp.interfaces.NamingContextFactory

urlPkgPrefixes : org.jnp.interfaces:org.jboss.naming

securityCredentials / securityPrincipal : the account and password required for the connection (admin/admin)

3. After step 2, the newly created context (JBoss) appears under "contexts" on the left; double-click it and a list of connection factories appears on the right. Choose UIL2XAConnectionFactory under invokers,

right-click it, choose "Create new connection", and give the session a name (e.g. JBoss).

4. You will then see the newly created session (JBoss) under sessions on the left; right-click it and choose discovery.

[JBoss] - Load balancing with Apache 2.2.x + mod_jk 1.2.x + JBoss 4.2.2GA

1. download apache 2.2

2. download mod_jk1.2.x

3. Edit APACHE_HOME\conf\httpd.conf

and add: Include conf/mod-jk.conf

4. Create mod-jk.conf under APACHE_HOME\conf\ with the following content:

# Load mod_jk module
# Specify the filename of the mod_jk lib
LoadModule jk_module modules/mod_jk.so

# Where to find workers.properties
JkWorkersFile conf/workers.properties

# Where to put jk logs
JkLogFile logs/mod_jk.log

# Set the jk log level [debug/error/info]
JkLogLevel info

# Select the log format
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

# JkOptions indicates to send SSL KEY SIZE
# Notes:
# 1) Changed from +ForwardURICompat.
# 2) For mod_rewrite compatibility, use +ForwardURIProxy (default since 1.2.24)
# See http://tomcat.apache.org/security-jk.html
JkOptions +ForwardKeySize +ForwardURICompatUnparsed -ForwardDirectories

# JkRequestLogFormat
JkRequestLogFormat "%w %V %T"

# Mount your applications
JkMount /__application__/* loadbalancer

# You can use external file for mount points.
# It will be checked for updates each 60 seconds.
# The format of the file is: /url=worker
# /examples/*=loadbalancer
JkMountFile conf/uriworkermap.properties

# Add shared memory.
# This directive is present with 1.2.10 and
# later versions of mod_jk, and is needed for
# for load balancing to work properly
# Note: Replaced JkShmFile logs/jk.shm due to SELinux issues. Refer to
# https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=225452
JkShmFile run/jk.shm

#JkMountCopy all

# Add jkstatus for managing runtime data
<Location /jkstatus/>
JkMount status
Order deny,allow
Deny from all
Allow from all
</Location>

5. Create workers.properties under APACHE_HOME\conf\ with the following content (change each node's IP):

# Define list of workers that will be used
# for mapping requests
# The configuration directives are valid
# for the mod_jk version 1.2.18 and later
#
worker.list=loadbalancer,status


# Define Node1
# modify the host as your host IP or DNS name.
worker.node1.port=8009
worker.node1.host=172.16.1.40
worker.node1.type=ajp13
worker.node1.lbfactor=1
# worker.node1.connection_pool_size=10 (1)

# Define Node2
# modify the host as your host IP or DNS name.
worker.node2.port=8009
worker.node2.host=172.16.1.46
worker.node2.type=ajp13
worker.node2.lbfactor=1
# worker.node1.connection_pool_size=10 (1)

# Load-balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2

# Status worker for managing load balancer
worker.status.type=status

6. Create uriworkermap.properties under APACHE_HOME\conf\ with the following content (/DynWebProject is my web application's context):

# Simple worker configuration file
#

# Mount the Servlet context to the ajp13 worker
/jmx-console=loadbalancer
/jmx-console/*=loadbalancer
/web-console=loadbalancer
/web-console/*=loadbalancer
/DynWebProject/=loadbalancer
/DynWebProject/*=loadbalancer

7. Restart Apache.

8. To enable sticky sessions, set jvmRoute (matching the node name) in each JBoss server's JBOSS_HOME\server\<your configuration>\deploy\jboss-web.deployer\server.xml:

<Engine name="jboss.web" defaultHost="localhost" jvmRoute="nodeX">
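For sticky sessions to take effect, the jvmRoute value on each JBoss node must match the corresponding worker name (node1, node2) in workers.properties, and the load-balancer worker should have sticky sessions enabled (this is the default in recent mod_jk versions, but being explicit does no harm); a sketch of the extra line:

# APACHE_HOME\conf\workers.properties
worker.loadbalancer.sticky_session=1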