openSUSE Build Service/Installation
Maemo OBS Cluster Installation Notes
Introduction
The Maemo OBS cluster is divided into three instances:
- Frontend – contains the webui, the OBS API and the MySQL server
- Backend/Repository Server – where the OBS server, dispatcher, scheduler and repository server are installed
- Workers – where the obs-worker package is installed
These installation notes cover OBS installation and setup for version 1.7. You can always refer to the latest documentation, also available online: http://gitorious.org/opensuse/build-service/blobs/raw/master/dist/README.SETUP
Creating Xen VMs
Based on http://en.opensuse.org/Build_Service/KIWI/Cookbook
<source lang='bash'>
zypper ar http://download.opensuse.org/repositories/Virtualization:/Appliances/openSUSE_11.2/ Virtualization:Appliances
zypper refresh
zypper in kiwi kiwi-templates kiwi-desc-xenboot squashfs
</source>
Create some Xen volumes:
<source lang='bash'>
lvcreate -L 10G VG_data -n fe_root
mkfs -t ext3 /dev/VG_data/fe_root
lvcreate -L 2G VG_data -n fe_swap
mkswap /dev/VG_data/fe_swap

lvcreate -L 10G VG_data -n be_root
mkfs -t ext3 /dev/VG_data/be_root
lvcreate -L 2G VG_data -n be_swap
mkswap /dev/VG_data/be_swap

lvcreate -L 10G VG_data -n w1_root
mkfs -t ext3 /dev/VG_data/w1_root
lvcreate -L 2G VG_data -n w1_swap
mkswap /dev/VG_data/w1_swap
</source>
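The three lvcreate/mkfs/mkswap groups above follow the same pattern, so a small loop can generate them. This sketch only prints the commands (remove the `echo` to actually execute them, which requires root and an existing VG_data volume group as above):

```shell
# Print the volume-creation commands for each VM prefix.
# Remove "echo" to run them for real (requires LVM and root).
for vm in fe be w1; do
  echo "lvcreate -L 10G VG_data -n ${vm}_root"
  echo "mkfs -t ext3 /dev/VG_data/${vm}_root"
  echo "lvcreate -L 2G VG_data -n ${vm}_swap"
  echo "mkswap /dev/VG_data/${vm}_swap"
done
```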
Prepare an openSUSE minimal image:
<source lang='bash'>
mkdir /data/11.2min
kiwi --prepare suse-11.2-JeOS --root /data/11.2min/image-root --add-profile xenFlavour
</source>
Copy the image to each of the VM root disks:
<source lang='bash'>
mount /dev/VG_data/fe_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
umount /mnt/lvm
mount /dev/VG_data/be_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
umount /mnt/lvm

mount /dev/VG_data/w1_root /mnt/lvm
rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/
umount /mnt/lvm
</source>
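The same mount/rsync/umount sequence repeats for every VM, so it can also be generated with a loop. This sketch only prints the commands (remove the `echo` to execute; /mnt/lvm and the image path are the ones used above):

```shell
# Print the copy sequence for each VM root volume.
# Remove "echo" to execute (requires root and the paths above).
for vm in fe be w1; do
  echo "mount /dev/VG_data/${vm}_root /mnt/lvm"
  echo "rsync -HAXa /data/11.2min/image-root/ /mnt/lvm/"
  echo "umount /mnt/lvm"
done
```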
Create a Xen configuration for each VM, e.g. for the backend:
<source lang='bash'>
name='be'
disk=['phy:/dev/VG_data/be_root,sda2,w', 'phy:/dev/VG_data/be_swap,sda1,w']
vif=['ip=10.0.0.76,mac=00:16:3E:E3:D4:9C']
root='/dev/sda2 ro'
memory='1024'
kernel='/boot/vmlinuz-2.6.31.12-0.2-xen'
ramdisk='/boot/initrd-2.6.31.12-0.2-xen'
on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'
extra='clocksource=jiffies'
</source>
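The frontend and worker VMs use the same template with their own name, disks and network settings. A frontend config might look like this (the IP and MAC address below are placeholders, adjust them to your network):
<source lang='bash'>
name='fe'
disk=['phy:/dev/VG_data/fe_root,sda2,w', 'phy:/dev/VG_data/fe_swap,sda1,w']
# placeholder IP/MAC -- adjust for your network
vif=['ip=10.0.0.75,mac=00:16:3E:E3:D4:9D']
root='/dev/sda2 ro'
memory='1024'
kernel='/boot/vmlinuz-2.6.31.12-0.2-xen'
ramdisk='/boot/initrd-2.6.31.12-0.2-xen'
on_poweroff='destroy'
on_reboot='restart'
on_crash='restart'
extra='clocksource=jiffies'
</source>
Assuming the file is saved as /etc/xen/vm/fe, the VM can then be started with xm create /etc/xen/vm/fe.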
Installing the Frontend (WebUI and API)
Start with a minimal SUSE install, then add the Tools repository where OBS 1.7 is available:
<source lang='bash'>
cd /etc/zypp/repos.d/
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref
# accept the trust key when prompted
</source>
Install obs-api (it pulls in the lighttpd web server as a dependency):
<source lang='bash'>
zypper in obs-api
</source>
Set up MySQL
The MySQL server needs to be installed and configured to start as a daemon:
<source lang='bash'>
chkconfig --add mysql
rcmysql start
</source>
If this is the first time MySQL is started, run the secure-installation script:
<source lang='bash'>
/usr/bin/mysql_secure_installation
</source>
The frontend instance hosts two applications, the api and the webui. Each one needs its own database:
<source lang='bash'>
mysql -u root -p
mysql> create database api_production;
mysql> create database webui_production;
</source>
Add an obs user to handle these databases:
<source lang='bash'>
GRANT all privileges
    ON api_production.*
    TO 'obs'@'%', 'obs'@'localhost' IDENTIFIED BY '************';
GRANT all privileges
    ON webui_production.*
    TO 'obs'@'%', 'obs'@'localhost' IDENTIFIED BY '************';
FLUSH PRIVILEGES;
</source>
Configure your MySQL user and password in the "production:" section of the api config:
<source lang='bash'>
vi /srv/www/obs/api/config/database.yml
# change the production section
production:
  adapter: mysql
  database: api_production
  username: obs
  password: ************
</source>
Do the same for the webui. By default it is configured to use sqlite, but since we are setting up the cluster for a production environment, bind it to MySQL as well:
<source lang='bash'>
vi /srv/www/obs/webui/config/database.yml
# change the production section
production:
  adapter: mysql
  database: webui_production
  username: obs
  password: ************
</source>
Populate the databases:
<source lang='bash'>
cd /srv/www/obs/api/
RAILS_ENV="production" rake db:migrate

cd /srv/www/obs/webui/
RAILS_ENV="production" rake db:migrate
</source>
You can check that each migration was successful by verifying the "migrated" message at the end of each statement.
Setup and configure lighttpd for the api and webui
You need to set the correct hostnames that the webui, api and repo server are going to answer to:
<source lang='bash'>
vi /etc/lighttpd/vhosts.d/obs.conf
# webui (first section)
$HTTP["host"] =~ "build.maemo.org" { ...
# api (second section)
$HTTP["host"] =~ "api.maemo.org" { ...
# comment out the whole last (3rd) section; the repository
# listing is configured on the backend instead
</source>
To enable these vhosts, make sure to uncomment the following in the 'custom includes' section at the bottom of /etc/lighttpd/lighttpd.conf:
<source lang='bash'>
vi /etc/lighttpd/lighttpd.conf
## custom includes like vhosts.
##
#include "conf.d/config.conf"
# following line uncommented as per
# /usr/share/doc/packages/obs-api/README.SETUP
include_shell "cat vhosts.d/*.conf"
</source>
Also, the modules "mod_magnet", "mod_rewrite" and fastcgi need to be enabled by uncommenting the corresponding lines in /etc/lighttpd/modules.conf:
<source lang='bash'>
vi /etc/lighttpd/modules.conf
server.modules = (
"mod_access",
# "mod_alias",
# "mod_auth",
# "mod_evasive",
# "mod_redirect",
"mod_rewrite",
# "mod_setenv",
# "mod_usertrack",
)
##
## mod_magnet
##
include "conf.d/magnet.conf"

##
## FastCGI (mod_fastcgi)
##
include "conf.d/fastcgi.conf"
</source>
You also need to configure /srv/www/obs/webui/config/environments/production.rb to point to the correct server names:
<source lang='bash'>
vi /srv/www/obs/webui/config/environments/production.rb
FRONTEND_HOST = "api.maemo.org"
</source>
Do the same for /srv/www/obs/api/config/environments/production.rb. Since your backend is not on the same machine as the api (frontend), change the following:
<source lang='bash'>
vi /srv/www/obs/api/config/environments/production.rb
SOURCE_HOST = "obs-be.maemo.org"
SOURCE_PORT = 5352
</source>
Make sure TCP port 5352 is open on the firewall. Restart lighttpd:
<source lang='bash'>
# first add it as a daemon
chkconfig --add lighttpd
rclighttpd start
</source>
The lighttpd user and group need to own the api and webui dirs (as well as log and tmp):
<source lang='bash'>
chown -R lighttpd.lighttpd /srv/www/obs/{api,webui}/{log,tmp}
</source>
Installing the Backend
On this host we also need to add the openSUSE Tools repository:
<source lang='bash'>
cd /etc/zypp/repos.d/
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref
# accept the trust key when prompted
</source>
Install:
<source lang='bash'>
zypper in obs-server obs-signer obs-utils
# obs-server pulls in further packages as dependencies; the list
# above is just to show which packages the backend needs
</source>
Configure the scheduler architectures:
<source lang='bash'>
vi /etc/sysconfig/obs-server
OBS_SCHEDULER_ARCHITECTURES="i586 armv7el"
</source>
/usr/lib/obs/server/BSConfig.pm needs to point to the correct server names: the source server, from which the workers download sources, and the repository server, where RPM repositories are shared with users.
<source lang='bash'>
vi /usr/lib/obs/server/BSConfig.pm
# change
my $hostname = "obs-be.maemo.org";
our $srcserver = "http://obs-srcsrv.maemo.org:5352";
our $reposerver = "http://obs-repo.maemo.org:5252";
</source>
Configure the services as daemons:
<source lang='bash'>
chkconfig --add obsrepserver obssrcserver obsscheduler obsdispatcher obspublisher
# check them
chkconfig -l obsrepserver obssrcserver obsscheduler obsdispatcher obspublisher
</source>
Start the services:
<source lang='bash'>
rcobsrepserver start
rcobssrcserver start
rcobsscheduler start
rcobsdispatcher start
rcobspublisher start
</source>
Version 1.7 introduces some new services. You can start them as well:
- obswarden
It checks whether build hosts are dying and cleans up hanging builds
- obssigner
It is used to sign packages via the obs-sign daemon. You need to configure it in BSConfig.pm before you can use it.
- obsservice
This is the source service daemon. So far OBS 1.7 only ships a download service. The feature is still considered experimental, but can already be extended with your own services.
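The enable/start commands for all of these services follow the same pattern, so they can be generated in one loop. This sketch only prints the commands for the full list, including the 1.7 daemons (remove the `echo` to execute; skip obsservice if you did not install the source service):

```shell
# Print the enable/start commands for every backend service,
# including the new 1.7 daemons. Remove "echo" to execute.
for svc in obsrepserver obssrcserver obsscheduler obsdispatcher \
           obspublisher obswarden obssigner obsservice; do
  echo "chkconfig --add $svc"
  echo "rc$svc start"
done
```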
Install Lighttpd
lighttpd also needs to be available on the backend server. It is required to provide directory listings for the repositories available on this server when an http(s) request to maemo-repo is made through the webui.
<source lang='bash'> zypper in lighttpd </source>
Create a new file under /etc/lighttpd/vhosts.d/ (it can be called obs.conf here as well) and add:
<source lang='bash'>
vi /etc/lighttpd/vhosts.d/obs.conf
$HTTP["host"] =~ "obs-repo.maemo.org" {
server.name = "obs-repo.maemo.org"
server.document-root = "/srv/obs/repos/"
dir-listing.activate = "enable"
} </source>
To enable vhosts, remember to uncomment the following in the 'custom includes':
<source lang='bash'>
vi /etc/lighttpd/lighttpd.conf
## custom includes like vhosts.
##
#include "conf.d/config.conf"
# following line uncommented as per
# /usr/share/doc/packages/obs-api/README.SETUP
include_shell "cat vhosts.d/*.conf"
</source>
Start lighttpd:
<source lang='bash'>
# first add it as a daemon
chkconfig --add lighttpd
rclighttpd start
</source>
Installing the Workers
The other 14 hosts in the cluster are reserved to be used as workers, where the package builds take place.
The same openSUSE Tools repository must be added on each worker:
<source lang='bash'>
cd /etc/zypp/repos.d/
wget http://download.opensuse.org/repositories/openSUSE:/Tools/openSUSE_11.2/openSUSE:Tools.repo
zypper ref
# accept the trust key when prompted
</source>
Install the worker package:
<source lang='bash'>
zypper in obs-worker
</source>
Edit the file /etc/sysconfig/obs-worker so it points to the correct source and repository servers:
<source lang='bash'>
vi /etc/sysconfig/obs-worker
OBS_SRC_SERVER="srcsrv.maemo.org:5352"
OBS_REPO_SERVERS="obs-repo.maemo.org:5252"
</source>
Each worker host has 16 CPUs, so 16 worker instances need to be started.
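Assuming your obs-worker version supports the OBS_WORKER_INSTANCES sysconfig variable (check your /etc/sysconfig/obs-worker for it), the number of instances can be set there instead of starting each one by hand:
<source lang='bash'>
vi /etc/sysconfig/obs-worker
# hypothetical setting -- verify the variable exists in your version
OBS_WORKER_INSTANCES="16"
</source>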
<source lang='bash'>rcobsworker</source> uses curl to download the worker code from the backend server. For curl to work properly with the proxy setup on the workers, you must edit .curlrc (under the home dir of the user running the service) and set --noproxy "europe.nokia.com" to bypass the proxy settings.
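In curl's config-file syntax, long options are written without the leading dashes, so the ~/.curlrc entry would look like this (assuming a curl version that supports --noproxy, i.e. 7.19.4 or later):
<source lang='bash'>
# ~/.curlrc of the user running the worker service
noproxy = "europe.nokia.com"
</source>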
Start the worker service:
<source lang='bash'>
chkconfig --add obsworker
rcobsworker start
</source>