Friday, March 26, 2010

LFS LiveCD and Jhalfs Howto


Part 1 - Expedient instructions

Instant linux from scratch with LiveCD and Jhalfs automated build.
Don't use the "toram" option.
boot the LiveCD (lfslivecd-x86-6.3-r2160.iso)
Optionally, you can type startxfce4 and start a terminal.
mkdir /mnt/build_dir
chmod 777 /mnt/build_dir
mount the target partition on /mnt/build_dir
swapon /dev/your_swap_partition (if you have one)
useradd -s /bin/bash -m -k /dev/null jhalfs
echo "jhalfs ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
su - jhalfs
cd jhalfs-2.3.1
make
Select EXIT
Type yes
Wow! It works! Makes LFS-6.3!

If you did the optional startxfce4 then while it is building you can:
  • click the blue globe (seamonkey) and read along in the book
  • click on the black rectangle (terminal) and look under /mnt/build_dir/jhalfs/logs and view the logs as they are created
  • look under /mnt/build_dir/jhalfs/lfs-commands/chapter{05,06,07,08} and view the commands for each step

If you didn't do the optional startxfce4, you can still:
  • press CTRL-ALT-F2
  • startxfce4
  • do the above
  • press CTRL-ALT-F1 to return to the first screen



-----------------------------------------------------------------


Non-instant linux from scratch with LiveCD and manual build.
boot the LiveCD
startxfce4
click on the blue globe (seamonkey)
follow the book instructions


------------------------------------------------------------------


Part 2 - Enduring instructions

You may choose to build a version of LFS later than 6.3. The LiveCD is still excellent for this, but more advanced instructions than the instant ones are needed.

These are the basic ingredients:
LFS LiveCD or LiveCD.iso
Empty partition with at least 3 GB (8-12 GB recommended)
Linux system

First Requirement - Choose an LFS Version to build
--------------------------------------------------
${LFV} will designate 'LFS Version' in the remainder of this document.
Wherever you see ${LFV} written, substitute the desired LFS version.
For example: If you are building LFS 6.5, then wherever you see ${LFV}, substitute 6.5.
Think of ${LFV} as a symbol for 6.5.

Example from below:
You are building LFS 6.5
You see written: mkdir /mnt/build_dir/lfs-${LFV}-xml
Replace it with: mkdir /mnt/build_dir/lfs-6.5-xml


A special case is the LFS 'SVN' or 'trunk' version.
In that case, one possibility is to substitute the date in YYYYMMDD format, which can be produced by the command:
date +%Y%m%d

For example, if 'date +%Y%m%d' prints 20100121, then wherever ${LFV} is written, replace it with 20100121.

Example from below:
You are building trunk and the date is 20100121
You see written: mkdir /mnt/build_dir/lfs-${LFV}-xml
Replace it with: mkdir /mnt/build_dir/lfs-20100121-xml
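If you are working at a shell, you can set a real variable once instead of substituting by hand -- a small sketch, assuming a Bourne-style shell:

```shell
# Set LFV once, then the ${LFV} commands from this document work verbatim.
LFV=6.5                       # or, for trunk: LFV=$(date +%Y%m%d)
echo "mkdir /mnt/build_dir/lfs-${LFV}-xml"
# → mkdir /mnt/build_dir/lfs-6.5-xml
```

With LFV exported in your session, every ${LFV} command below expands exactly as the substitution examples show.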


The LiveCD provides a complete development environment. Jhalfs enables you to build a base system in a few steps. The most recent LiveCD has the outdated but stable LFS-6.3. This is not a problem. It is still a good LiveCD even if you want to build a newer system. The stable LFS-6.3 has a matching BLFS-6.3 and it builds in half the time of LFS-6.5. You can also choose to use a host linux system instead of the LiveCD. You don't have to use the LiveCD to use jhalfs or vice versa. You can also build manually which you should try at least once. It is your system and your choice.

You could, apparently, build a newer LiveCD from the subversion files but there is no need to do so unless you want to.
You could also, apparently, remaster a LiveCD if you want to (instructions viewable once the LiveCD boots).

To get the fastest startup time, you can follow these instructions while booting the LiveCD.iso from disk instead of using a CD drive. If you are going to boot from the LiveCD in a CD drive, "boot: linux toram" will give much better performance if you have enough memory.

To load the CD contents to RAM with the "toram" option, the minimum required amount of RAM is 512 MB. If you have less than 768 MB of RAM, add swap when the CD boot finishes -- a good idea anyway.
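The RAM thresholds above can be checked ahead of time with a quick sketch like this (reads /proc/meminfo, so it works on any Linux host):

```shell
# Decide whether "toram" is safe, per the 512 MB / 768 MB guidance above.
mem_mb=$(awk '/^MemTotal:/ {print int($2 / 1024)}' /proc/meminfo)
if [ "$mem_mb" -lt 512 ]; then
    echo "toram not recommended: only ${mem_mb} MB RAM"
elif [ "$mem_mb" -lt 768 ]; then
    echo "toram OK with ${mem_mb} MB RAM, but add swap after boot"
else
    echo "toram OK with ${mem_mb} MB RAM"
fi
```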

The LiveCD is available at ftp://anduin.linuxfromscratch.org/LFS-LiveCD/

lfslivecd-x86-6.3-r2160.iso includes the LFS-6.3 source files.
The lfslivecd-x86-6.3-r2160 CD might also be purchased from http://www.osdisc.com or http://www.linuxcd.org.

lfslivecd-x86-6.3-r2160-nosrc.iso is the same except it doesn't include the LFS-6.3 sources so it is smaller.

lfslivecd-x86-6.3-r2160-min.iso is the smallest and it would work too. The "min" one does not have any GUI and everything must be done at a text console. This one is only a good choice if you want to "use the force" and work in the darkness. That is how things will be on a newly built lfs system before the gui is built.

If you wish to automate the build with jhalfs:
Use jhalfs-2.3.1 to build LFS-6.3 and/or BLFS-6.3.
Use jhalfs-2.3.2 or later to build LFS/BLFS versions later than 6.3.
This is because some changes made after LFS/BLFS 6.3 are not fully compatible with jhalfs-2.3.1.
Mixing the "not compatibles" may cause funny things to happen.
I'll just mention that the (trunk) "SVN" version of LFS is later than 6.3 and the (trunk) "SVN" version of jhalfs is later than 2.3.2.

Jhalfs has a BREAKPOINT feature which potentially allows some of the work to be done automatically and some of the work to be done manually.


Step 1 - Preparations to boot the lfs LiveCD from disk.
------------------------------------------------------

In these instructions, a directory named ISOS will be created and the LiveCD.iso downloaded into it. If you run this disk_livecd script, it performs steps L1 - L5:

L1 - Remember the starting directory
-------------------------------
start_dir=$(pwd)

L2 - Make the directory ISOS
-----------------------
mkdir /ISOS

L3 - Get the LiveCD.iso
----------------------
cd /ISOS
wget -c ftp://anduin.linuxfromscratch.org/LFS-LiveCD/lfslivecd-x86-6.3-r2160.iso

L4 - Extract LiveCD kernel and initramfs
----------------------------------------
iso_dir="/ISOS"
live_cd="lfslivecd-x86-6.3-r2160.iso"
lc_kernel="${iso_dir}/lfs-live-linux"
lc_initrd="${iso_dir}/lfs-live-initramfs.gz"
mkdir -v -p /tmp/livecd &&
mount -v -t iso9660 -o loop \
${iso_dir}/${live_cd} \
/tmp/livecd &&
cd /tmp/livecd/boot/isolinux &&
install -v -m 644 linux ${lc_kernel} &&
install -v -m 644 initramfs_data.cpio.gz ${lc_initrd} &&
cd $start_dir
umount -v /tmp/livecd

L5 - Add LFS liveCD to grub configfile
------------------------------------
title LFS liveCD
root (${grub_drive},${grub_part})
kernel ${lc_kernel} rw root=iso:/dev/${disk}${partition}:${live_cd_path} rootfstype=${fstype} ${toram}
initrd ${lc_initrd}
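As a concrete illustration (hypothetical values: first disk, first partition, ext3 root file system, ISO stored under /ISOS), the stanza might expand to:

```
title LFS liveCD
root (hd0,0)
kernel /ISOS/lfs-live-linux rw root=iso:/dev/sda1:/ISOS/lfslivecd-x86-6.3-r2160.iso rootfstype=ext3 toram
initrd /ISOS/lfs-live-initramfs.gz
```

${toram} may also be left empty if you prefer not to load the CD contents into RAM.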


Step 2 - Make target partition and mount it
-------------------------------------------
TP = Target Partition = [hs]d[a-f][1-15] = examples: sda1, sdb2, hdd10
If the target partition is /dev/TP
mke2fs -j -I 128 /dev/TP

mkdir /mnt/build_dir
mount /dev/TP /mnt/build_dir

NOTE: The optional switch, -I 128, restricts inode size to 128. This enables an older grub to boot the partition without complaining "Error 2: Bad file or directory type". The mke2fs in recent e2fsprogs defaults to 256. If the grub understands 256-byte inodes then omit the switch, -I 128. Also omit the switch if using ext4.
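You can confirm the inode-size behavior without touching a real partition by formatting a small image file -- a sketch, assuming e2fsprogs is installed and /tmp/inode-test.img is a scratch path:

```shell
# Build a throwaway ext3 image with 128-byte inodes and verify the setting.
dd if=/dev/zero of=/tmp/inode-test.img bs=1M count=64 2>/dev/null
mke2fs -q -F -j -I 128 /tmp/inode-test.img
tune2fs -l /tmp/inode-test.img | grep 'Inode size'   # expect 128 to be reported
rm /tmp/inode-test.img
```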

Step 3 - Make dirs for book and sources and jhalfs-2.3.2
-------------------------------------------------------
mkdir /mnt/build_dir/lfs-sources
mkdir /mnt/build_dir/lfs-${LFV}-xml
mkdir /mnt/build_dir/jhalfs-2.3.2

Note: This is done because you may wish to preload these if you have previously downloaded a copy.

Step 4 - Copy or download the appropriate jhalfs version
---------------------------------------------------------
cd /mnt/build_dir/jhalfs-2.3.2

if you already have jhalfs-2.3.2
  • cp -aT path_to_jhalfs-2.3.2 .
Otherwise
  • svn co svn://linuxfromscratch.org/ALFS/jhalfs/tags/2.3.2 .

Step 5 - Copy xml files to /mnt/build_dir
-----------------------------------------
cd /mnt/build_dir/lfs-${LFV}-xml

if you already have them
  • cp -aT path_to_xml_files .
Otherwise
  • svn co svn://linuxfromscratch.org/LFS/tags/${LFV}/BOOK .
Unless you are building trunk in which case
  • svn co svn://linuxfromscratch.org/LFS/trunk/BOOK .

Step 6 - Get sources ahead of time
----------------------------------
# Copy or wget sources

# Set your path to existing sources or "" if none:
path_to_sources="/sources"

# Get sources
cd /mnt/build_dir/lfs-${LFV}-xml
if [ $? -eq 0 ]; then
    sudo make wget-list BASEDIR=.
    cp wget-list /mnt/build_dir/lfs-sources
    cd /mnt/build_dir/lfs-sources
    while read line; do
        sf=${line##*/}
        if [ ! -f "${sf}" ]; then
            if [ -n "$path_to_sources" ] && [ -f "${path_to_sources}/${sf}" ]; then
                cp -iv "${path_to_sources}/${sf}" .
            fi
        fi
        if [ ! -f "${sf}" ]; then
            wget -c "$line"
        fi
    done < wget-list
fi
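The ${line##*/} expansion in the loop above strips everything through the last slash, leaving just the file name -- a quick illustration with a made-up URL:

```shell
# ##*/ deletes the longest prefix matching "*/", i.e. the directory part.
line="http://example.com/pub/gcc-4.4.1.tar.bz2"
sf=${line##*/}
echo "$sf"        # → gcc-4.4.1.tar.bz2
```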


Step 7 - Change permissions of /mnt/build_dir to 777
----------------------------------------------------
chmod 777 /mnt/build_dir

Note: This is done because the build user will need access.


Step 8 - Boot the LiveCD
------------------------
If using cdrom
  • Load cdrom in the drive
  • At the prompt 'boot:' type 'linux toram'
Otherwise if booting from disk
  • From the Grub Menu, Select 'LFS liveCD'

It will ask you to choose a timezone from a scroll menu.
I choose America/New_York which represents U.S.A. Eastern Time.
It will ask you to choose a locale from a scroll menu.
I choose English, USA, ISO-8859-1.
Press ENTER to start a virtual console.

swapon /dev/your_swap_partition (if you have one)
useradd -s /bin/bash -m -k /dev/null jhalfs
echo "jhalfs ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers

Step 9 - Prepare the graphical environment (optional)
----------------------------------------------------
You can start the LiveCD graphical environment with:
startxfce4

Getting the graphical environment to start, or getting an acceptable resolution, can often be frustrating. Commonly, the default settings give a resolution that is too low, meaning that everything is too huge to work with comfortably.

Changing the device driver to the generic "vesa" will work in many cases. This can be done manually or with the following sed command, which edits /etc/X11/xorg.conf and, under [Section "Device"], comments out the current Driver line and inserts [Driver "vesa"].

sed -i /etc/X11/xorg.conf -e '
/Section "Device"/,/EndSection/ {
/Driver/s/^/#/
/EndSection/i\ \tDriver\t"vesa"
}'
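Before touching the real /etc/X11/xorg.conf, you can try the same edit on a scratch copy -- a sketch using a minimal, made-up Device section (GNU sed assumed):

```shell
# Create a minimal sample Device section and apply the same edit to it.
cat > /tmp/xorg-sample.conf << 'EOF'
Section "Device"
	Identifier "Card0"
	Driver "nv"
EndSection
EOF
sed -i -e '
/Section "Device"/,/EndSection/ {
/Driver/s/^/#/
/EndSection/i\ \tDriver\t"vesa"
}' /tmp/xorg-sample.conf
cat /tmp/xorg-sample.conf
rm /tmp/xorg-sample.conf
```

The old Driver line comes out commented and a vesa Driver line appears just before EndSection.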

An alternative approach is to edit /etc/X11/xorg.conf under [Section "Monitor"] and insert a proper [HorizSync h-H] line and/or a [VertRefresh v-V] line. The data for HorizSync and VertRefresh for your monitor might only be available on the original box. The HorizSync line is the more critical one.
WARNING: Using the wrong values might cause damage. Guessing is discouraged.

The value printed on my box (which I saved) is Fh 31-80.
Sed could add that with:

sed -i -e '
/Section "Monitor"/,/EndSection/ {
/HorizSync/s/^/#/
/EndSection/i\ \tHorizSync\t31-80
}
' /etc/X11/xorg.conf

It happens that I can get higher resolution with Fh 31-64.
Sed could add that with:

sed -i /etc/X11/xorg.conf -e '
/Section "Monitor"/,/EndSection/ {
/HorizSync/s/^/#/
/EndSection/i\ \tHorizSync\t31-64
}'

NOTE: I have never seen sed documentation stating that the filename can precede the script, but I tried it and it worked for me. That makes it easier to press UP-ARROW, edit, and repeat the command with a different range.

You don't need to get it perfect. You only need to get it good enough to work with. Even if you can't get the graphical environment to work, you can still build a system.

Step 10 - Exit the graphical environment test as root (optional)
----------------------------------------------------------------
If you entered the graphical environment as root, then exit it now.

Step 11 - Mount the build_dir
-----------------------------
TP = Target Partition = examples: sda1, sdb2, hdd10

mkdir /mnt/build_dir
mount /dev/TP /mnt/build_dir
chown -R jhalfs:jhalfs /mnt/build_dir/lfs-sources
chown -R jhalfs:jhalfs /mnt/build_dir/jhalfs-2.3.2

Step 12 - Change user to jhalfs
-------------------------------
You should be at the text console with the # (pound sign) prompt:
su - jhalfs

Step 13 - Start graphical environment as jhalfs (optional)
---------------------------------------------------------
You should be at the text console with the $ (dollar sign) prompt:
echo Xft.dpi: 96 >> .Xresources
startxfce4
start a terminal

Step 14 - Change directory to jhalfs-2.3.2 location
---------------------------------------------------
cd /mnt/build_dir/jhalfs-2.3.2

Step 15 - Run jhalfs 'make' to configure jhalfs
-----------------------------------------------
make

Step 16 - Change directory to the build location
------------------------------------------------
cd /mnt/build_dir/jhalfs

Step 17 - Run jhalfs 'make' to build the system
-----------------------------------------------
make

Ubuntu 32-bit, 32-bit PAE, 64-bit Kernel Benchmarks


Published on December 30, 2009
Written by Michael Larabel

Coming up in our forums was a testing request to compare the performance of Linux between using 32-bit, 32-bit PAE, and 64-bit kernels. This comes after Linus Torvalds has spoken of 25% performance differences between kernels using CONFIG_HIGHMEM4G and those without this option, which allows 32-bit builds to address up to 4GB of physical RAM on a system. We decided to compare the performance of the 32-bit, 32-bit PAE, and 64-bit kernels on a modern desktop system, and here are the results.

For this comparison we used Ubuntu 9.10 on a Lenovo ThinkPad T61 notebook with an Intel Core 2 Duo T9300 processor, 4GB of system memory, a 100GB Hitachi HTS7220 SATA HDD, and a NVIDIA Quadro NVS 140M. We were using the Ubuntu-supplied kernels, which are based on the Linux 2.6.31 kernel in Ubuntu Karmic. Other packages included GNOME 2.28.1, X Server 1.6.4, the NVIDIA 195.22 display driver, and GCC 4.4.1, and we were using the default EXT4 file-system with all other defaults. With Ubuntu, to properly address 4GB or more of system memory you need to use a PAE kernel, as Physical Address Extension support through the kernel's high-mem configuration options is not enabled in the default 32-bit kernels. CONFIG_HIGHMEM4G is enabled in the default Ubuntu kernel, but the Ubuntu PAE kernel uses CONFIG_HIGHMEM64G (and other build options) for handling up to 64GB of system memory. Of course, with 64-bit addressing there is no such greater-than-4GB RAM limitation. Even with a 32-bit non-PAE kernel, though, the system will only report 3GB of system memory by default, due to 1GB being reserved for kernel virtual addresses while 3GB is available for user-space addresses.

The only differences in the kernel configuration between Ubuntu's PAE and non-PAE 32-bit kernels are enabling the CONFIG_X86_CMPXCHG64, CONFIG_HIGHMEM64G instead of CONFIG_HIGHMEM4G, CONFIG_X86_PAE, CONFIG_ARCH_PHYS_ADDR_T_64BIT, CONFIG_PHYS_ADDR_T_64BIT, CONFIG_I2O_EXT_ADAPTEC_DMA64, and disabling CONFIG_ASYNC_TX_DMA. The rest of the kernel configuration is the same. The Linux kernel also requires that the CPU itself supports PAE, but these days that is practically all Intel and AMD processors.
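Whether a given CPU supports PAE can be checked from the flags line in /proc/cpuinfo -- a sketch that works on any x86 Linux system:

```shell
# The "pae" flag indicates the CPU can run a PAE-enabled kernel.
if grep -qw pae /proc/cpuinfo; then
    echo "CPU supports PAE"
else
    echo "CPU does not support PAE"
fi
```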

Among the tests we ran on the three Linux 2.6.31 kernels with the Phoronix Test Suite were OpenArena, Apache, PostgreSQL, Bullet, C-Ray, Gcrypt, GnuPG, GraphicsMagick, timed MAFFT alignment, John The Ripper, OpenSSL, x264, and PostMark.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/1.png]

With the ioquake3-powered OpenArena game there were virtually no performance differences between the 32-bit, 32-bit PAE, and 64-bit kernels. We ran other OpenGL-powered tests through the Phoronix Test Suite and found no significant differences, so we are just sharing one set of numbers in this article to avoid repetition.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/2.png]

While the different kernels did not affect the gaming performance on our Core 2 Duo laptop with 4GB of system memory, the Apache performance was significantly affected. The stock Ubuntu 32-bit kernel managed 473 requests per second, and the PAE kernel dropped just slightly to a 467 request average, but the 64-bit kernel was many times faster with its 7,989 requests per second.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/3.png]

With the PostgreSQL benchmark, the 32-bit PAE kernel ended up actually doing slightly better than the non-PAE kernel, but the 64-bit Ubuntu kernel came out in front with over a 10% lead.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/4.png]

One of the new test profiles in the Phoronix Test Suite is for the Bullet Physics Engine, which we fired up with this round of kernel benchmarking. The two 32-bit kernels led to roughly the same performance with the "3000 Fall" test through Bullet while the 64-bit kernel was nearly 20% faster for this heavy physics processing.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/5.png]

The 64-bit kernel remained the best option with Bullet when looking at the 136 Ragdolls test.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/6.png]

We compared the ray-tracing performance with the three kernels using the multi-threaded C-Ray. Again, the PAE kernel did not lead to any major differences, but switching from 32-bit to 64-bit is where the real speed improvements can be found. C-Ray finished in less than half the time with the 64-bit kernel compared to the two 32-bit kernels.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/7.png]

Using x86_64 Linux also led to a terrific speed-up with the Gcrypt library when looking at the CAMELLIA256-ECB cipher.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/8.png]

With GnuPG, there was also an improvement with the 64-bit kernel but not much of a difference between the non-PAE and PAE Ubuntu kernels.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/9.png]

The OpenMP-powered GraphicsMagick test sped up by nearly 40% with the 64-bit kernel, but the PAE kernel caused no performance change in either direction.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/10.png]

For looking at the computational biology performance, we looked at MAFFT and found it to also perform best under the 64-bit kernel while the 32-bit PAE kernel did not end up having any impact in this CPU-focused test.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/11.png]

With the Blowfish performance as measured by John The Ripper, the 64-bit kernel had a 54% speed advantage.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/12.png]

There was no change in the OpenSSL performance between the two 32-bit kernels, but the 64-bit kernel on the Intel Core 2 Duo "Penryn" approached two and a half times the speed of the 32-bit kernels.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/13.png]

With video processing using x264, the PAE kernel performance just dropped ever so slightly while the 64-bit kernel was again the fastest option.

[Image: http://www.phoronix.com/data/img/results/ubuntu_32_pae/14.png]

Only a very small drop in performance can be found with the PAE kernel in the PostMark disk test, but the 64-bit kernel was immensely faster.

In the fourteen tests for this article we did not find Ubuntu's 32-bit PAE kernel to have a dramatic performance impact, whether positive or negative. Granted, we were using just 4GB of system memory, which is common on many desktops, but with 8GB, 16GB, or even greater memory capacity the performance penalties are perhaps higher. By far the best performance came from the Ubuntu 64-bit kernel, which often ended up leaps and bounds ahead of the 32-bit kernel. Unless you have technical or business reasons for not migrating to 64-bit Linux on compatible hardware, there is no reason to stick with a 32-bit kernel and worry about physical address extension. If you want to run your own kernel benchmarks, give the Phoronix Test Suite a try; it offers more than 120 test profiles and 60 test suites.


Benchmarks With Early Fedora 13 Numbers


Published on January 15, 2010
Written by Michael Larabel

With Ubuntu 10.04 Alpha 2 having made it out yesterday, we couldn't resist running some new benchmarks of the Lucid Lynx after our original tests last month found Ubuntu 10.04 off to a poor performance start. In some areas the performance of Ubuntu 10.04 LTS Alpha 2 remains lower than in Ubuntu 9.10 -- largely due to performance regressions upstream in the Linux kernel -- but we have also included some very early performance numbers from Fedora 13.

While Ubuntu's Lucid Lynx has already had two development releases, Red Hat has not yet put out any development releases for Fedora 13. The first and only alpha release of Fedora 13 is planned for the middle of February while a beta release will come at the start of April and then the final release will enter the world towards the middle of May, assuming there are no delays. With that said, Fedora 13 is still heavily in development and will certainly change a lot between now and then (especially with how closely they follow some packages and their upstream involvement), but we have included benchmark numbers from the 2010-01-13 nightly compose desktop image of Fedora Rawhide. Beyond being an early snapshot of Fedora 13, Red Hat enables numerous debugging options within their Rawhide kernel and other packages that are then disabled prior to the official release. These debugging options can impair the system's performance, but as with all Fedora and Ubuntu releases, we will be back with many more benchmarks throughout the development cycle. These Fedora 13 numbers should just be looked at for reference purposes.

[Image: http://www.phoronix.net/image.php?id=ubuntu_lucid_alpha2&image=fedora_13_early_med]

We used the 64-bit versions of Ubuntu 10.04 Alpha 2 and Fedora 13 (2010-01-13), which were compared to the stable version of Ubuntu 9.10 (x86_64). Ubuntu 9.10 uses the Linux 2.6.31 kernel, GNOME 2.28.1, X Server 1.6.4, and GCC 4.4.1. Ubuntu 10.04 Alpha 2 ups the package versions to the Linux 2.6.32 kernel, GNOME 2.29.4, X Server 1.7.4 RC2, and GCC 4.4.3. Our January 13 Rawhide snapshot contained the Linux 2.6.32 kernel, GNOME 2.29.4, X Server 1.7.3, and GCC 4.4.2. Both Ubuntu and Fedora use the EXT4 file-system by default and all three distributions were tested with their default settings and options. The NVIDIA 190.53 display driver was installed on Ubuntu and Fedora to provide 3D acceleration support for the NVIDIA Quadro graphics hardware that was used during testing.

The hardware used for testing was a Lenovo ThinkPad T61 notebook with an Intel Core 2 Duo T9300 processor, 4GB of system memory, a 100GB Hitachi HTS72201 hard drive, and a NVIDIA Quadro NVS 140M 512MB graphics processor. The Phoronix Test Suite software was used for carrying out all of these tests autonomously and in a fully repeatable manner. The test profiles included Lightsmark, Nexuiz, World of Padman, 1080p H.264 video playback, Apache, PostgreSQL, C-Ray, 7-Zip, x264, IOzone, PostMark, Threaded I/O Tester, John The Ripper, Gcrypt, GnuPG, and our custom battery-power-usage test.

On the following pages are our benchmarks comparing Ubuntu 9.10, Ubuntu 10.04 Alpha 2, and the very early look at the Fedora 13 performance.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/1.png]

NVIDIA's binary driver did not work properly with the Fedora Rawhide packages, so the OpenGL / video tests were only carried out under Ubuntu 9.10 and 10.04 Alpha 2. The NVIDIA 190.53 driver was used in all cases to eliminate any binary driver differences and to show only the impact that other changes within the Linux stack had on the gaming / graphics performance. With Lightsmark, Ubuntu 10.04 Alpha 2 wound up being about 18% faster than the 9.10 release.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/2.png]

The popular Nexuiz game ran at effectively the same speed between 9.10 and 10.04 Alpha 2.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/3.png]

There were no performance differences either with the ioquake3-powered World of Padman.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/4.png]

When looking at CPU usage with our video-cpu-usage test profile using the X-Video decode interface, the CPU usage deviated between the Karmic Koala and the Lucid Lynx, but in the end the averages were close to each other.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/5.png]

The Apache test had trouble with Fedora 13, but here it shows the Ubuntu 10.04 Alpha 2 performance to be significantly worse than Ubuntu 9.10. This may partially be due to the upstream EXT4 performance regressions in the Linux kernel that we have talked about extensively, which would also cause the Apache performance in Fedora 13 to suffer in comparison to Fedora 12.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/6.png]

PostgreSQL's performance continues to suffer dramatically under Ubuntu 10.04 LTS and it is not expected that it will change at all for this next Ubuntu release. This major drop in the number of transactions being carried out per second is due to an EXT4 file-system change designed to provide better data safety but with a significant performance penalty. This matter is talked about in Autonomously Finding Performance Regressions In The Linux Kernel. The PostgreSQL performance also suffers in Fedora 13 and any other Linux distributions using the Linux 2.6.32 kernel. Even with Fedora 13 being much earlier into its development cycle and carrying some debugging options by default (along with using SELinux), Fedora 13 Rawhide performed quite closely to Ubuntu 10.04 Alpha 2 in this test.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/7.png]

The C-Ray ray-tracing performance was close between all three distributions with no winner.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/8.png]

Ubuntu 10.04 Alpha 2 and our Fedora 13 snapshot both performed better than Ubuntu 9.10 at running the 7-Zip compression speed test. At this point in the Lucid Lynx development cycle it is producing approximately 15% more MIPS than under the Karmic Koala with the same hardware.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/9.png]

Ubuntu 10.04 Alpha 2 and Fedora 13 are also outperforming Ubuntu 9.10 when it comes to the x264 video encoding performance. The advantage here for Ubuntu 10.04 was +18%. Fedora 13 2010-01-13 wasn't quite as fast as Ubuntu 10.04 Alpha 2, but it's nothing to fret about considering the debugging options and that Fedora 13 hasn't even reached an alpha state yet.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/10.png]

The IOzone 8GB write performance was the same between Ubuntu 9.10 and Ubuntu 10.04 Alpha 2, but Fedora 13 (2010-01-13) was noticeably lower: 66 vs. 52 MB/s.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/11.png]

Ubuntu 10.04 Alpha 2 regressed when it came to the 8GB read performance in IOzone, but Fedora 13 was reading at an even slower rate.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/12.png]

The PostMark disk performance between Ubuntu 9.10 and 10.04 Alpha 2 was close while Fedora 13 was behind, but again, given the debugging options used during the development cycle and its pre-alpha state, we aren't worried too much.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/13.png]

Ubuntu 10.04 Alpha 2 was slower than Ubuntu 9.10 when it came to random writes via the Threaded I/O Tester.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/14.png]

There was not anything too interesting with the Blowfish numbers produced by John The Ripper.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/15.png]

The numbers were also close for the CAMELLIA256-ECB Cipher with the Gcrypt library.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/16.png]

Again, no major regressions.

[Image: http://www.phoronix.com/data/img/results/ubuntu_lucid_alpha2/17.png]

Looking at the battery power usage when we switched this notebook to run off its 6-cell battery, Ubuntu 10.04 Alpha 2 was more energy efficient than its predecessor. On average Ubuntu 10.04 Alpha 2 consumed one Watt less, as less energy was used while the Lenovo ThinkPad T61 was idling before the test signaled the display to turn off via DPMS; it looks like Ubuntu 10.04 is simply more aggressive about dimming the display when idling on battery power.

Overall, there are both performance improvements and regressions for Ubuntu 10.04 LTS Alpha 2 relative to Ubuntu 9.10. Most of the regressions are attributed to the EXT4 file-system losing some of its performance charm. As we are using a pre-alpha snapshot of Fedora 13, with the benchmark results provided for reference purposes only, we will hold off on looking at this next Red Hat Linux update in greater detail until it matures. You can run your own tests, though, if you wish, using our open-source Phoronix Test Suite benchmarking platform.
