Improving the Daily Scrum

A few important points:

  1. Start on time: 10.00 AM (not 10.02 AM); the Scrum meeting waits for no one.
  2. Three Key Questions:
    1. “What did you do since the last meeting?”
    2. “What are you going to do until the next meeting?”
    3. “What impedes you from being more productive?”
  3. Concentrate on the second and third questions, not on the first one.
  4. Standup means stand up: no sitting, really (medical conditions excepted).
  5. Stand in front of the visual progress artifact.
  6. Everybody should be present (again, medical conditions excepted).
  7. No typing.
  8. The daily standup is for synchronizing between the team members, not between the Scrum Master and the team. If team members start behaving as if they are reporting to the Scrum Master, the Scrum Master can literally look at another person or even walk away a little. Such small tricks often confirm that the daily Scrum is for the team members, not for the Scrum Master.


Load Balancer

Load balancers are used to increase capacity (concurrent users) and reliability of applications. They improve the overall performance of applications by decreasing the burden on servers associated with managing and maintaining application and network sessions, as well as by performing application-specific tasks.

Basically, a load balancer is a device that acts as a reverse proxy and distributes network or application traffic across a number of servers.
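As a concrete illustration, here is a minimal reverse-proxy setup for nginx, one common open source load balancer (a sketch only; the pool name and addresses are hypothetical):

# minimal nginx load balancing sketch; addresses are examples
events {}                              # required top-level context
http {
    upstream app_servers {             # the pool of backend servers
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app_servers;   # reverse proxy to the pool
        }
    }
}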

Load balancers are generally grouped into two categories:

1. Layer 4 and

2. Layer 7.

Layer 4 load balancers act upon data found in network and transport layer protocols (IP, TCP, UDP).

Layer 7 load balancers distribute requests based upon data found in application layer protocols such as HTTP.

Both types of load balancers receive requests and distribute them to a particular server based on a configured algorithm. Some industry-standard algorithms are:

  • Round robin
  • Weighted round robin
  • Least connections
  • Least response time
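
In nginx terms, for example, round robin is the default; other algorithms are selected per pool. A sketch reusing the hypothetical pool from above:

upstream app_servers {
    least_conn;                       # least connections instead of round robin
    server 10.0.0.11:8080 weight=3;   # weighted: gets roughly 3x the requests
    server 10.0.0.12:8080;
}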

Layer 7 load balancers can further distribute requests based on application specific data such as HTTP headers, cookies, or data within the application message itself, such as the value of a specific parameter.
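
For example, an nginx rule can send API traffic to a dedicated pool based on the URL path; routing on a header or cookie works the same way via variables such as $http_name or $cookie_name (pool names and addresses here are hypothetical):

upstream api_servers { server 10.0.0.21:8080; }
upstream web_servers { server 10.0.0.11:8080; }

server {
    listen 80;
    location /api/ { proxy_pass http://api_servers; }   # application-data routing
    location /     { proxy_pass http://web_servers; }
}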

Load balancers ensure reliability and availability by monitoring the “health” of applications and only sending requests to servers and applications that can respond in a timely manner.
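
In nginx, for instance, passive health checks mark a backend as failed after repeated errors and stop sending it traffic for a while (the parameter values are illustrative):

upstream app_servers {
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;   # skip for 30s after 3 failures
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
}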

 

Open Source Tools

Ultra Monkey

Ultra Monkey is a project to create load balanced and highly available network services; for example, a cluster of web servers that appears as a single web server to end users. The service may be for end users across the world connected via the Internet, or for enterprise users connected via an intranet.

More Details: http://www.ultramonkey.org/

Red Hat Cluster Suite

It is a high-availability cluster software implementation from Linux leader Red Hat. It provides two types of services:

  1. Application / Service Failover – Create n-node server clusters for failover of key applications and services
  2. IP Load Balancing – Load balance incoming IP network requests across a farm of servers

More Details: http://www.redhat.com/cluster_suite/

Linux Virtual Server (LVS)

LVS is the ultimate open source load sharing and balancing software for Linux. You can easily build a high-performance and highly available Linux server using it.

More Details: http://www.linux-vs.org/

The High Availability Linux Project

Linux-HA provides sophisticated high-availability (failover) capabilities on a wide range of platforms, supporting several tens of thousands of mission critical sites.

More Details: http://www.linux-ha.org/.

High Availability MySQL

Basic Guideline for High Availability MySQL

  1. Replication
  2. DRBD HA

We are discussing DRBD HA here, so the first thing to address is preparation of the OS.

DRBD HA

The following steps are required:

  • Preparation of the OS
  • DRBD setup

Preparation of the OS:

We need to reserve a large raw partition which will later be used as the DRBD volume.
Don't specify any file system type.

fdisk /dev/sda

It should print something like:

The number of cylinders for this disk is set to 9729.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:

  1.   Software that runs at boot time (e.g., old versions of LILO)
  2.   Booting and partitioning software from other OSs (e.g., DOS FDISK, OS/2 FDISK)

Command (m for help): p

Disk /dev/sda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot      Start       End    Blocks   Id  System
/dev/sda1   *        1      2611  20972826   83  Linux
/dev/sda2         2612      2872   2096482+  82  Linux swap
/dev/sda3         2873      3003   1052257+  8e  Linux LVM
/dev/sda4         3004      9729  54026595    5  Extended
/dev/sda5         3004      9729  54026563+  8e  Linux LVM

We are going to use /dev/sda5 as a DRBD device.

DRBD Setup

On Server1 and Server2

yum -y install drbd
yum -y install kernel-module-drbd-2.6.9-42.ELsmp
modprobe drbd
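
To verify that the module actually loaded (DRBD exposes its state in /proc/drbd once it is in):

lsmod | grep drbd
cat /proc/drbd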

DRBD Configuration:
On both servers:

vi /etc/drbd.conf

#
# please have a look at the example configuration file in
# /usr/share/doc/drbd/drbd.conf
#
# Our MySQL share
resource db
{
protocol C;
incon-degr-cmd "echo '!DRBD! pri on incon-degr' | wall ; sleep 60 ; halt -f";
startup { wfc-timeout 0; degr-wfc-timeout 120; }
disk { on-io-error detach; } # or panic, …
syncer {
group 1;
rate 6M;
}

on server1 {
device /dev/drbd1;
disk /dev/sda5;
address 10.10.150.1:7789;
meta-disk internal;
}

on server2 {
device /dev/drbd1;
disk /dev/sda5;
address 10.10.150.2:7789;
meta-disk internal;
}
}
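
Note that this configuration uses the old DRBD 0.7 syntax (incon-degr-cmd and the syncer group option). On DRBD 8.x and later those options are gone, the hostnames in the on sections must match the output of uname -n, and you have to initialize the metadata on both servers before first use. A sketch for the newer versions, reusing the resource name db:

drbdadm create-md db    # initialize internal metadata (DRBD 8+ only)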

Start DRBD

On both servers:
drbdadm adjust db
On server1:
drbdsetup /dev/drbd1 primary --do-what-I-say
service drbd start
On server2:
service drbd start

DRBD Status

On both servers (see status):

service drbd status

On server1:
mkfs -j /dev/drbd1
tune2fs -c -1 -i 0 /dev/drbd1
mkdir /db
mount -o rw /dev/drbd1 /db

On server2:
mkdir /db

Test failover

For manual switchover (this won't be needed later, as HA will do it for you):

On primary server

umount /db
drbdadm secondary db

On secondary server
drbdadm primary db
service drbd status
mount -o rw /dev/drbd1 /db
df

This finishes the DRBD part. You have created a DRBD mount which will be used as the data directory for your MySQL.

MySQL

  1. We can do an RPM-based, a BINARY, or a SOURCE compilation.
  2. IMPORTANT (crucial for failover): Heartbeat uses either LSB Resource Agents, OCF Resource Agents, or Heartbeat Resource Agents to start and stop heartbeat resources. Here, MySQL, DRBD, and IP are our heartbeat resources.
  3. Refer to this page on Resource Agents.
  4. As you are aware, many *nix services are started using LSB Resource Agents. They are found in /etc/init.d.
  5. A service is started/stopped using: /etc/init.d/servicename start/stop/status
  6. We should see to it that we have a similar LSB Resource Agent for MySQL.
  7. In a source-based installation it will be created in the $PREFIX/share directory as mysql.server, where $PREFIX is the one you give during source compilation.
  8. Fix that script and copy it to /etc/init.d/.
  9. In case of an RPM-based installation, you will get the LSB Resource Agent in place.
  10. The end objective is that MySQL should be up and running.
  11. Now comes the hurdle.
  12. Stop MySQL.
  13. Move your data directory to a directory on the DRBD share.
  14. Then create a softlink; see the Unix command ln.
  15. This is how I would do it, assuming my initial data directory was /home/mysql/data:

server1:

mkdir /db/mysql
NOTE: /db should be mounted to do this
mv /home/mysql/data /db/mysql/data
chown -R mysql /db/mysql/data
chgrp -R mysql /db/mysql/data
ln -s /db/mysql/data /home/mysql/data

server2:

mv /home/mysql/data /tmp
ln -s /db/mysql/data /home/mysql/data

Now, start MySQL on server1. Create a sample database and table. Stop MySQL. Do a manual switchover of DRBD. Start MySQL on server2 and query that table; it should work (a sketch of this test follows below). But this is of no use if you have to switch over manually every time, so now we are heading to HA.
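
A minimal sketch of that manual test, assuming the mysqld LSB agent described above (the database name failtest is hypothetical):

On server1 (current primary):
/etc/init.d/mysqld start
mysql -e "CREATE DATABASE failtest; CREATE TABLE failtest.t (id INT);"
/etc/init.d/mysqld stop
umount /db
drbdadm secondary db

On server2:
drbdadm primary db
mount -o rw /dev/drbd1 /db
/etc/init.d/mysqld start
mysql -e "SHOW TABLES IN failtest;"    # should list table t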

HA

Installation:

yum -y install gnutls*
yum -y install ipvsadm*
yum -y install heartbeat*

Configuration:

vi /etc/sysctl.conf and set:

# Controls IP packet forwarding
net.ipv4.ip_forward = 1

/sbin/chkconfig --level 2345 heartbeat on
/sbin/chkconfig --del ldirectord
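
To apply the sysctl change without a reboot:

/sbin/sysctl -p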

We need to set up the following conf files on both machines:
a. vi /etc/ha.d/ha.cf

#/etc/ha.d/ha.cf content
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694 # (if you have multiple HA setups in the same network, use different ports)
bcast eth0 # Linux
auto_failback on # (this will fail back to server1 after it comes back)
ping 10.10.150.100 # (your gateway IP)
apiauth ipfail gid=haclient uid=hacluster
node server1
node server2
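
NOTE: the names in the node directives must match the output of uname -n on each machine, or heartbeat will not recognize the nodes.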

On both machines:

b. vi /etc/ha.d/haresources
NOTE: Assuming 10.10.150.3 is the virtual IP for your MySQL resource and mysqld is the LSB resource agent.

#/etc/ha.d/haresources content
server1 LVSSyncDaemonSwap::master IPaddr2::10.10.150.3/24/eth0 drbddisk::db Filesystem::/dev/drbd1::/db::ext3 mysqld
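
Heartbeat reads this line left to right on the active node: swap the LVS sync daemon to the master role, bring up the virtual IP 10.10.150.3 on eth0, promote the DRBD resource db to primary, mount /dev/drbd1 on /db as ext3, and finally start mysqld. On failover the standby runs the same chain, and resources are stopped in the reverse order.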

c. vi /etc/ha.d/authkeys
#/etc/ha.d/authkeys content
auth 2
2 sha1 YourSecretString
Now, make your authkeys secure:
chmod 600 /etc/ha.d/authkeys

Start:
On both machines (first on server1):
Stop MySQL.
Make sure MySQL does not start on system init. For that:
/sbin/chkconfig --level 2345 mysqld off
/etc/init.d/heartbeat start
These commands will give you status information about this LVS setup:
/etc/ha.d/resource.d/LVSSyncDaemonSwap master status
ip addr sh
/etc/init.d/heartbeat status
df
/etc/init.d/mysqld status

Access your HA-MySQL server like:
mysql -h10.10.150.3

Shut down server1 to see MySQL come up on server2.
Start server1 to see MySQL back on server1.