OpenSuse Build Service/Installation

Maemo OBS Cluster Installation Notes

== Introduction ==

Maemo OBS Clusters are divided into 3 instances:
* Frontend – containing the Webclient, the OBS API and the MySQL server
* Backend/Repository Server – where OBS-Server, dispatcher, scheduler and repository server are installed
* Workers – installation of the obs-worker package

These installation notes cover OBS installation and setup for '''version 1.7'''. You can always refer to the latest documentation, which is also available online: http://gitorious.org/opensuse/build-service/blobs/raw/master/dist/README.SETUP

== Networking Overview ==

The Front End (FE) machine provides two major services:
* the webui on build.obs and
* the API on api.obs.
It is also a useful holding point for the download service on download.obs.

The Back End (BE) machine provides the backend services, including the schedulers, the source server and the repo server. The repo server (used by the workers to get required RPMs) also doubles as the download server via a reverse proxy on the FE.

<source lang="bash"> 10.1.1.1 host.obs.maemo.org 10.1.1.10 fe.obs.maemo.org build.obs.maemo.org api.obs.maemo.org download.obs.maemo.org 10.1.1.11 be.obs.maemo.org src.obs.maemo.org repo.obs.maemo.org 10.1.1.51 w1.obs.maemo.org 10.1.1.52 w2.obs.maemo.org 10.1.1.53 w3.obs.maemo.org 10.1.1.54 w4.obs.maemo.org 10.1.1.55 w5.obs.maemo.org 10.1.1.56 w6.obs.maemo.org 10.1.1.57 w7.obs.maemo.org 10.1.1.58 w8.obs.maemo.org 10.1.1.59 w9.obs.maemo.org 10.1.1.60 w10.obs.maemo.org 10.1.1.61 w11.obs.maemo.org 10.1.1.62 w12.obs.maemo.org </source>


== Creating Xen VMs ==

Based on http://en.opensuse.org/Build_Service/KIWI/Cookbook
<source lang='bash'>
zypper ar http://download.opensuse.org/repositories/Virtualization:/Appliances/openSUSE_11.2/ Virtualization:Appliances
zypper refresh
zypper in kiwi kiwi-templates kiwi-desc-xenboot squashfs
</source>

Create some Xen volumes
<source lang='bash'>
lvcreate -L 10G VG_data -n fe_root
lvcreate -L 2G VG_data -n fe_swap
mkswap /dev/VG_data/fe_swap

lvcreate -L 10G VG_data -n be_root
lvcreate -L 2G VG_data -n be_swap
mkswap /dev/VG_data/be_swap

lvcreate -L 10G VG_data -n w1_root
lvcreate -L 2G VG_data -n w1_swap
mkswap /dev/VG_data/w1_swap
</source>

Prepare an openSUSE minimal image:
<source lang='bash'>
ROOTFS=/data/11.2min/image-root
mkdir /data/11.2min
kiwi --prepare suse-11.2-JeOS --root $ROOTFS --add-profile xenFlavour --add-package less --add-package iputils
</source>
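Before customizing the image it can be worth a quick sanity check that kiwi produced a usable tree (an illustrative check; the paths follow the kiwi call above):
<source lang='bash'>
ROOTFS=/data/11.2min/image-root
cat $ROOTFS/etc/SuSE-release        # should report openSUSE 11.2
chroot $ROOTFS rpm -qa | wc -l      # the package database should not be empty
</source>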

Update the config & modules:
<source lang='bash'>
ROOTFS=/data/11.2min/image-root
cp -a /lib/modules/2.6.31.12-0.2-xen $ROOTFS/lib/modules/

echo default 10.1.1.1 > $ROOTFS/etc/sysconfig/network/routes
echo NETCONFIG_DNS_POLICY=\"\" >> $ROOTFS/etc/sysconfig/network/config
echo nameserver 8.8.8.8 > $ROOTFS/etc/resolv.conf
cat << EOF >$ROOTFS/etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
BROADCAST=''
STARTMODE='onboot'
EOF
echo /dev/xvda1 swap swap defaults 0 0 >> $ROOTFS/etc/fstab
</source>


Copy to each of the VM root disks
<source lang='bash'>
mkfs -text3 /dev/VG_data/fe_root
mount /dev/VG_data/fe_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
echo fe.obs.maemo.org > /mnt/lvm/etc/HOSTNAME
echo "IPADDR='10.1.1.10/24'" >> /mnt/lvm/etc/sysconfig/network/ifcfg-eth0
umount /mnt/lvm

mkfs -text3 /dev/VG_data/be_root
mount /dev/VG_data/be_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
echo be.obs.maemo.org > /mnt/lvm/etc/HOSTNAME
echo "IPADDR='10.1.1.11/24'" >> /mnt/lvm/etc/sysconfig/network/ifcfg-eth0
umount /mnt/lvm

mkfs -text3 /dev/VG_data/w1_root
mount /dev/VG_data/w1_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
echo w1.obs.maemo.org > /mnt/lvm/etc/HOSTNAME
echo "IPADDR='10.1.1.51/24'" >> /mnt/lvm/etc/sysconfig/network/ifcfg-eth0
umount /mnt/lvm
</source>

Configure some Xen VM configs
<source lang='bash'>
name='fe'
disk=['phy:/dev/VG_data/fe_root,xvda2,w', 'phy:/dev/VG_data/fe_swap,xvda1,w']
vif=['mac=0D:16:3E:40:B5:FE']
memory='1024'

root='/dev/xvda2 ro'
kernel='/boot/vmlinuz-2.6.31.12-0.2-xen'
ramdisk='/boot/initrd-2.6.31.12-0.2-xen'
extra='clocksource=jiffies console=hvc0 xencons=tty'

on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'
</source>

And start them:
<source lang='bash'>
xm create /etc/xen/fe.cfg
xm create /etc/xen/be.cfg
xm create /etc/xen/w1.cfg
</source>
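To confirm the guests actually came up (same xm toolstack as above):
<source lang='bash'>
xm list          # fe, be and w1 should be listed as running
xm console fe    # attach to a guest console for debugging; Ctrl-] detaches
</source>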

== Interim config ==

<source lang='bash'>
zypper in wget less iputils terminfo emacs
</source>

== Installing the Backend ==

On this host we also need to set up the openSUSE Tools repository:
<source lang='bash'>
cd /etc/zypp/repos.d/;
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref
# Accept the trust key
</source>

Install:
<source lang='bash'>
zypper in obs-server obs-signer obs-utils createrepo dpkg nfs-client
# obs-server pulls in further packages as dependencies; this is just so you can see which packages the Backend installation needs
# createrepo & dpkg are only recommends
# the NFS client is needed because we use an NFS share for host/BE interchange
</source>
Configure the scheduler architectures
<source lang='bash'>
vi /etc/sysconfig/obs-server
OBS_SCHEDULER_ARCHITECTURES="i586 armv5el armv7el"
</source>

/usr/lib/obs/server/BSConfig.pm needs to point to the correct server names: the source server, from which the workers download sources, and the repository server, which shares the RPM repositories with users.
<source lang='bash'>
vi /usr/lib/obs/server/BSConfig.pm
# add
$hostname="be.obs.maemo.org";

our $srcserver = "http://src.obs.maemo.org:5352";
our $reposerver = "http://repo.obs.maemo.org:5252";
our $repodownload = "http://$hostname/repositories";
</source>

Configure the services as daemons
<source lang='bash'>
chkconfig --add obsrepserver obssrcserver obsscheduler obsdispatcher obspublisher obswarden obssigner
# Check them
chkconfig -l obsrepserver obssrcserver obsscheduler obsdispatcher obspublisher
</source>
Start Services
<source lang='bash'>
rcobsrepserver start
rcobssrcserver start
rcobsscheduler start
rcobsdispatcher start
rcobspublisher start
</source>
Version 1.7 introduces new services. You can start them as well:

* '''obswarden'''
  It checks whether build hosts are dying and cleans up hanging builds.
* '''obssigner'''
  It is used to sign packages via the obs-sign daemon. You need to configure
  it in BSConfig.pm before you can use it.
* '''obsservice'''
  This is the source service daemon. OBS 1.7 ships only a download
  service so far. This feature is still considered experimental,
  but it can already be extended with your own services.
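A quick, illustrative way to confirm the backend daemons are up and listening on the ports configured in BSConfig.pm above (5352 for the source server, 5252 for the repo server):
<source lang='bash'>
rcobssrcserver status
rcobsrepserver status
netstat -tlnp | egrep '5352|5252'
</source>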

'''Install Lighttpd'''

lighttpd also needs to be available on the backend server. It is required to provide directory listings for the repositories available on this server when an http/https request for the maemo repo is made through the webui.

<source lang='bash'>
zypper in lighttpd
</source>

Create a new file under /etc/lighttpd/vhosts.d/, for example obs.conf, and add:
<source lang='bash'>
vi /etc/lighttpd/vhosts.d/obs.conf

$HTTP["host"] =~ "repo.obs.maemo.org" {
  server.name = "repo.obs.maemo.org"
  server.document-root = "/srv/obs/repos/"
  dir-listing.activate = "enable"
}
</source>

To enable vhosts, remember to '''uncomment''' the following in the 'custom includes' section:

<source lang='bash'>
vi /etc/lighttpd/lighttpd.conf
  ##
  ## custom includes like vhosts.
  ##
  #include "conf.d/config.conf"
  # following line uncommented as per
  # /usr/share/doc/packages/obs-api/README.SETUP
  include_shell "cat vhosts.d/*.conf"
</source>

Start lighttpd
<source lang='bash'>
# first add it as a daemon
chkconfig --add lighttpd
rclighttpd start
</source>
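Once lighttpd is running, the repository vhost can be sanity-checked from any machine that resolves the name; even an empty directory listing confirms the vhost answers (illustrative):
<source lang='bash'>
curl -s http://repo.obs.maemo.org/ | head
</source>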

== Installing the FrontEnd (WebUI and API) ==

Start with a minimal SUSE install and then add the Tools repository where OBS 1.7 is available.
<source lang='bash'>
cd /etc/zypp/repos.d/;
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref
# Accept the trust key
</source>

Install obs-api (it will pull in the lighttpd webserver as a dependency).
<source lang='bash'>
zypper in obs-api memcached
</source>

'''Setup MySQL'''

The MySQL server needs to be installed and configured to start as a daemon
<source lang='bash'>
chkconfig --add mysql
rcmysql start
</source>
Set up a secure installation if this is the first time MySQL is started
<source lang='bash'>
/usr/bin/mysql_secure_installation
</source>
The frontend instance holds two applications, the API and the webui. Each one needs a database created
<source lang='bash'>
mysql -u root -p
mysql> create database api_production;
mysql> create database webui_production;
</source>
Add an obs user to handle these databases
<source lang='bash'>
GRANT all privileges
     ON api_production.*
     TO 'obs'@'%', 'obs'@'localhost' IDENTIFIED BY '************';
GRANT all privileges
     ON webui_production.*
     TO 'obs'@'%', 'obs'@'localhost' IDENTIFIED BY '************';
FLUSH PRIVILEGES;
</source>
Configure your MySQL user and password in the "production:" section of the API config:
<source lang='bash'>
vi /srv/www/obs/api/config/database.yml
# change the production section
production:
  adapter: mysql
  database: api_production
  username: obs
  password: ************
</source>
Do the same for the webui. It is configured by default to use SQLite, but since we're setting up the cluster for a production environment, let's bind it to MySQL:

<source lang='bash'>
vi /srv/www/obs/webui/config/database.yml
# change the production section
production:
  adapter: mysql
  database: webui_production
  username: obs
  password: ************
</source>
Populate the databases
<source lang='bash'>
cd /srv/www/obs/api/
RAILS_ENV="production" rake db:migrate

cd /srv/www/obs/webui/
RAILS_ENV="production" rake db:migrate
</source>
You can check that the migration was successful by verifying the “migrated” message at the end of each statement.
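If you want a check beyond the console output, the standard Rails rake task can report the schema version each application ended up at (illustrative; any non-zero version means the migrations ran):
<source lang='bash'>
cd /srv/www/obs/api/   && RAILS_ENV="production" rake db:version
cd /srv/www/obs/webui/ && RAILS_ENV="production" rake db:version
</source>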

'''Setup and configure lighttpd for the API and webui'''

You need to set up the correct hostnames that the webui, the API and the repo server are going to point to
<source lang='bash'>
vi /etc/lighttpd/vhosts.d/obs.conf

$HTTP["host"] =~ "build" {
   rails_app   = "webui"
   rails_root  = "/srv/www/obs/webui"
   rails_procs = 3
   # production/development are typical values here
   rails_mode  = "production"
   log_root    = "/srv/www/obs/webui/log"
   include "vhosts.d/rails.inc"
}

$HTTP["host"] =~ "api" {
   rails_app   = "api"
   rails_root  = "/srv/www/obs/api"
   rails_procs = 3
   # production/development are typical values here
   rails_mode  = "production"
   log_root    = "/srv/www/obs/api/log"
   include "vhosts.d/rails.inc"
}

$HTTP["host"] =~ "download" {
   # This should point to an rsync populated download repo
   # server.name = "download.obs.maemo.org"
   # server.document-root = "/srv/obs/repos/"
   proxy.server = ( "" => ( (
        "host" => "10.1.1.11",
        "port" => 80
      ))
   )
}
</source>

To enable these vhosts, make sure to uncomment the following in the 'custom includes' section at the bottom of /etc/lighttpd/lighttpd.conf:
<source lang='bash'>
vi /etc/lighttpd/lighttpd.conf
  ##
  ## custom includes like vhosts.
  ##
  #include "conf.d/config.conf"
  # following line uncommented as per
  # /usr/share/doc/packages/obs-api/README.SETUP
  include_shell "cat vhosts.d/*.conf"
</source>
Also, the modules "mod_magnet", "mod_rewrite" and FastCGI need to be enabled by uncommenting the corresponding lines in /etc/lighttpd/modules.conf:
<source lang='bash'>
vi /etc/lighttpd/modules.conf

   server.modules = (
     "mod_access",
   #  "mod_alias",
   #  "mod_auth",
   #  "mod_evasive",
   #  "mod_redirect",
     "mod_rewrite",
   #  "mod_setenv",
   #  "mod_usertrack",
   )
   ##
   ## mod_magnet
   ##
   include "conf.d/magnet.conf"
   ##
   ## FastCGI (mod_fastcgi)
   ##
   include "conf.d/fastcgi.conf"

</source>
You also need to configure /srv/www/obs/webui/config/environments/production.rb to point to the correct server names:
<source lang='bash'>
vi /srv/www/obs/webui/config/environments/production.rb
FRONTEND_HOST = "api.obs.maemo.org"
FRONTEND_PORT = 80
EXTERNAL_FRONTEND_HOST = "api.obs.maemo.org"
BUGZILLA_HOST = "http://bugs.maemo.org/"
DOWNLOAD_URL = "http://downloads.obs.maemo.org"
</source>
Do the same for /srv/www/obs/api/config/environments/production.rb. If your backend is not on the same machine as the API (frontend), change the following:
<source lang='bash'>
vi /srv/www/obs/api/config/environments/production.rb
SOURCE_HOST = "src.obs.maemo.org"
SOURCE_PORT = 5352
</source>
Make sure TCP port 5352 is open on the firewall.

Ensure lighttpd and the OBS UI helpers start:
<source lang='bash'>
chkconfig --add memcached
chkconfig --add lighttpd
chkconfig --add obsapidelayed
chkconfig --add obswebuidelayed

rcmemcached start
rclighttpd start
rcobsapidelayed start
rcobswebuidelayed start
</source>

The lighttpd user and group need to be the owner of the api and webui dirs (as well as log and tmp):
<source lang='bash'>
chown -R lighttpd.lighttpd /srv/www/obs/{api,webui}
</source>
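At this point the webui and API should answer on their vhosts. A quick illustrative check from any machine that resolves the names (the API typically answers 401 until you authenticate):
<source lang='bash'>
curl -I http://build.obs.maemo.org/
curl -I http://api.obs.maemo.org/
</source>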

== Installing the Workers ==

The other 14 hosts in the cluster are reserved for use as workers, where the package builds take place.

The same openSUSE Tools repository addition must be done for each worker.
<source lang='bash'>
cd /etc/zypp/repos.d/;
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref
# Accept the trust key
</source>

Install the worker:
<source lang='bash'>
zypper in obs-worker qemu-svn mount-static bash-static
</source>
(mount-static and bash-static are needed on the worker for the RPM cross-compile to work)

Edit the file /etc/sysconfig/obs-worker so that it points to the correct source and repository servers.
<source lang='bash'>
vi /etc/sysconfig/obs-worker
OBS_SRC_SERVER="src.obs.maemo.org:5352"
OBS_REPO_SERVERS="repo.obs.maemo.org:5252"
OBS_VM_TYPE="none"
</source>

Each worker has 16 CPUs, so 16 workers need to be started.
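The number of worker instances per host is controlled from the same sysconfig file. The exact variable name should be checked against the comments in your /etc/sysconfig/obs-worker; on OBS installations of this generation it is typically OBS_WORKER_INSTANCES (a sketch, not from the original notes):
<source lang='bash'>
vi /etc/sysconfig/obs-worker
# assumption: OBS_WORKER_INSTANCES controls the instance count in this OBS version
OBS_WORKER_INSTANCES="16"
</source>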

Start the worker service:
<source lang='bash'>
chkconfig --add obsworker
rcobsworker start
</source>

== Tuning ==

The lighttpd config was reduced from 5 to 3 api/webui child processes (the rails_procs setting above).

== Revise VG setup ==

<source lang='bash'>
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part1
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part2
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part3
pvcreate /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part4
</source>

Now create the OBS VG for the host Xen worker:

<source lang='bash'>
vgcreate OBS /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part1
vgextend OBS /dev/disk/by-id/dm-name-360060e8005478600000047860000398c_part2
</source>


<source lang='bash'>
lvcreate -L 20G -n cache OBS
mkfs -text4 /dev/OBS/cache

for i in 1 2 3 4 5 6 7 8 9 10
do
  lvcreate -L 4G -n worker_root$i OBS
  mkfs -text4 /dev/OBS/worker_root$i
  lvcreate -L 1G -n worker_swap$i OBS
  mkswap -f /dev/OBS/worker_swap$i
done
</source>

Note that in 1.7.3 obsstoragesetup creates /OBS.worker/root1/root but obsworker looks for /OBS.worker/root_1/root (root1 vs root_1).
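One possible workaround (a sketch, not part of the original notes; verify it against how your obsworker init script sets up the directories) is to symlink the names obsworker expects onto the directories obsstoragesetup created:
<source lang='bash'>
# map the names obsworker expects onto the directories obsstoragesetup made
for i in 1 2 3 4 5 6 7 8 9 10; do
  ln -s /OBS.worker/root$i /OBS.worker/root_$i
done
</source>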


Also ensure the host has:
* obs-worker (1.7.3)
* qemu-svn

In order to run ARM binaries in the Xen chroot we need to ensure the initrd inserts binfmt_misc. In /etc/sysconfig/kernel, set
<source lang='bash'>
DOMU_INITRD_MODULES="xennet xenblk binfmt_misc"
</source>
then build a new initrd:
<source lang='bash'>
mkinitrd -k vmlinuz-2.6.31.12-0.2-xen -i initrd-2.6.31.12-0.2-xen-binfmt -B
</source>

On the backend machine, edit /usr/lib/build/xen.conf to use the correct initrd.
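What that edit looks like depends on the file you already have; a sketch assuming /usr/lib/build/xen.conf uses the usual Xen domU kernel/ramdisk keys (verify against the existing contents before changing anything):
<source lang='bash'>
vi /usr/lib/build/xen.conf
# assumption: the file follows Xen domU config syntax; point the ramdisk at the binfmt-enabled initrd built above
kernel = "/boot/vmlinuz-2.6.31.12-0.2-xen"
ramdisk = "/boot/initrd-2.6.31.12-0.2-xen-binfmt"
</source>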

== Remove Power/PPC from monitor page ==

We don't target PPC, so best remove it from the build monitor page:

On the FE, in webui/app/views/monitor/_plots.rhtml:

Use HTML comments to disable the Power line:

<pre><!--<%= render :partial => 'graphs_arch', :locals => { :title => "Power", :prefix => 'ppc_' } %>--></pre>

On the FE, in webui/config/environment.rb:

Change the MONITOR_IMAGEMAP section to:

<pre>
MONITOR_IMAGEMAP = {
      'pc_waiting' => [
        ["i586", 'waiting_i586'],
        ["x86_64", 'waiting_x86_64'] ],
      'pc_blocked' => [
        ["i586", 'blocked_i586' ],
        ["x86_64", 'blocked_x86_64'] ],
      'pc_workers' => [
        ["idle", 'idle_x86_64' ],
        ['building', 'building_x86_64' ] ],
#      'ppc_waiting' => [
#        ["ppc", 'waiting_ppc'],
#        ["ppc64", 'waiting_ppc64'] ],
#      'ppc_blocked' => [
#        ["ppc", 'blocked_ppc' ],
#        ["ppc64", 'blocked_ppc64'] ],
#      'ppc_workers' => [
#        ["idle", 'idle_ppc64' ],
#        ['building', 'building_ppc64' ] ],
      'arm_waiting' => [
        ["armv5", 'waiting_armv5el'],
        ["armv7", 'waiting_armv7el'] ],
      'arm_blocked' => [
        ["armv5", 'blocked_armv5el' ],
        ["armv7", 'blocked_armv7el'] ],
      'arm_workers' => [
        ["idle", 'idle_armv7el' ],
        ['building', 'building_armv7el' ] ],
    }
</pre>

[[Category:OpenSuse build service]]