<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.yoctoproject.org/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gzhai</id>
	<title>Yocto Project - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.yoctoproject.org/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Gzhai"/>
	<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/Special:Contributions/Gzhai"/>
	<updated>2026-04-07T15:07:25Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.5</generator>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Build_Appliance_Design&amp;diff=4492</id>
		<title>Build Appliance Design</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Build_Appliance_Design&amp;diff=4492"/>
		<updated>2012-01-10T06:01:18Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Plan */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page holds the design for the Yocto Project build appliance.&lt;br /&gt;
&lt;br /&gt;
=== Usage Model ===&lt;br /&gt;
&lt;br /&gt;
This feature is designed to make the Yocto Project much more appealing to the developer who wants to check Yocto out but may not have a recent (and supported) Linux distro installed with all of the proxies set up correctly.&lt;br /&gt;
&lt;br /&gt;
The developer will download a virtual image and boot it. This image is a Linux OS that allows the user to do a build and boot the resulting Linux in an emulator. This gives a quick experience with the system without fear of missing dependencies. (This is needed because of the general difficulty in having something as complex as the Yocto Project be totally compatible with every conceivable Linux system.)&lt;br /&gt;
&lt;br /&gt;
It is a non-goal that a developer would continue to use this appliance for all day-to-day development tasks.&lt;br /&gt;
&lt;br /&gt;
=== Goals ===&lt;br /&gt;
&lt;br /&gt;
# Required: Total size of the image must not exceed 100 Mbytes, to make it feasible to download the image.&lt;br /&gt;
# Preferred: Have a second, larger image which includes all source and sstate-cache preinstalled, but may be much larger.&lt;br /&gt;
# Preferred: VMware ESX image. VMware is known to build correctly, whereas recent versions of VirtualBox and others are not.&lt;br /&gt;
# Required: Must have Linux plus all prerequisite packages installed to make a build work.&lt;br /&gt;
# Preferred: Generate the OS in the appliance with Yocto. (Thus, make it a self-hosted build appliance)&lt;br /&gt;
# Preferred: When the image boots, it boots up Hob and also has a terminal for launching QEMU or deploying the image&lt;br /&gt;
# Preferred: In addition to Hob, there is GUI support for deploying the image on a board and / or boot it into QEMU&lt;br /&gt;
&lt;br /&gt;
=== Design Notes ===&lt;br /&gt;
&lt;br /&gt;
First step is to build a non-graphical image that can provide a user with the needed tools to correctly build an image&lt;br /&gt;
&lt;br /&gt;
Provide a simple X-Desktop with the HOB (pyGTK based) and a terminal (X-Term)&lt;br /&gt;
&lt;br /&gt;
Create an app or extend Hob to support deployment of images:&lt;br /&gt;
 - this could be to a USB device&lt;br /&gt;
   - HDD&lt;br /&gt;
   - USB memory stick&lt;br /&gt;
   - SD card or similar&lt;br /&gt;
 - burn a CD/DVD if a drive is available&lt;br /&gt;
 - other deployment options&lt;br /&gt;
   - network to real hardware (talk to Darren/Tom)?&lt;br /&gt;
&lt;br /&gt;
=== Plan ===&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1.&lt;br /&gt;
&amp;lt;del&amp;gt;Build a qemu image in which &amp;quot;bitbake core-image-minimal&amp;quot; works&amp;lt;/del&amp;gt;&lt;br /&gt;
* [P1][D2] check Saul&#039;s branch to see how close it meets the goal - Dexuan/Edwin&lt;br /&gt;
* [P1][D3, depends on checking result] identify required host recipes and port them to Yocto, including&lt;br /&gt;
bitbake, wget... - Dexuan/Edwin&lt;br /&gt;
Use an n450 to speed up the debug process, and NFS for source/sstate at this stage&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2.&lt;br /&gt;
&amp;lt;del&amp;gt;Improve self-hosted performance&amp;lt;/del&amp;gt; - &#039;&#039;&#039;done on KVM except pass-through, lower priority as vmware performance is acceptable&#039;&#039;&#039;&lt;br /&gt;
* [P2][D2] pre-install another disk image for source/sstate - Edwin&lt;br /&gt;
* [P2][D5] build performance improvement/test: &amp;lt;del&amp;gt;enabling KVM, SMP, virtio&amp;lt;/del&amp;gt; or device pass-through for network/disk - Dexuan&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3.&lt;br /&gt;
&amp;lt;del&amp;gt;Integrate hob&amp;lt;/del&amp;gt; - &#039;&#039;&#039;patch already in poky master&#039;&#039;&#039;&lt;br /&gt;
* [P1][D5] minimal X system and required LIB to start terminal - Edwin&lt;br /&gt;
* [P1][D5] integrate hob - Edwin&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;4.&lt;br /&gt;
Transfer to vmware image - &#039;&#039;&#039;almost done&#039;&#039;&#039;&lt;br /&gt;
* [P2][D2] &amp;lt;del&amp;gt;try vmware workstation&amp;lt;/del&amp;gt; - Dexuan&lt;br /&gt;
* [P2][D3] disk image format translating from qemu to vmware - Dexuan &#039;&#039;&#039;WIP&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;5.&lt;br /&gt;
Reduce image size - &#039;&#039;&#039;image size is low priority, performance is the key&#039;&#039;&#039;&lt;br /&gt;
* [P2][D5] &amp;lt;del&amp;gt;identify and remove unnecessary recipes&amp;lt;/del&amp;gt; - Edwin&lt;br /&gt;
* [P2][D5] tune features for big recipes, like kernel/glibc to reduce image size &amp;amp; improve performance - Edwin&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;6.&lt;br /&gt;
Deploying image - &#039;&#039;&#039;Dexuan will update this&#039;&#039;&#039;&lt;br /&gt;
* [P1][D10] creating live hdd image/or ISO - Dexuan&lt;br /&gt;
* [P2][D10] extend HOB to deploy image - Dexuan&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;7&lt;br /&gt;
Documentation &lt;br /&gt;
* [P2][D2] Readme also include instructions to setup sstate image - Edwin&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;8&lt;br /&gt;
Others&lt;br /&gt;
* [P2][D2] one recipe to install source in the self-hosted image (built from another new disk image) - Edwin&lt;br /&gt;
* [P2] Fix &amp;quot;Multiple X provider&amp;quot; - Edwin&lt;br /&gt;
* [P2] Fix the PATH issue to find all utilities - Edwin&lt;br /&gt;
* [P2] hicolor issue - Dexuan&lt;br /&gt;
* [P2] check disk pass-through in VMware; ask for an ESX license to check performance - Edwin&lt;br /&gt;
&lt;br /&gt;
=== KVM Performance ===&lt;br /&gt;
&lt;br /&gt;
See the following Page for details: [[BKM:_improve_qemu_performance]]&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4228</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4228"/>
		<updated>2011-12-06T13:43:14Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Enable SMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in Yocto is an emulator, and it is slow when running a huge task inside it, e.g. a Yocto build. This article illustrates possible performance improvements to accelerate it.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization rather than emulation is used. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
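A quick host-side check is possible before touching any Poky configuration. This is a hedged sketch: the vmx (Intel VT-x) and svm (AMD-V) flags in /proc/cpuinfo indicate the hardware virtualization support that KVM requires.&lt;br /&gt;

```shell
# Hedged sketch: report whether the host CPU advertises hardware
# virtualization (vmx = Intel VT-x, svm = AMD-V), a prerequisite for KVM.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "KVM-capable"
else
  echo "no hardware virtualization"
fi
```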
&lt;br /&gt;
== Add enough memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a lot of memory; failing to provide it leads to a performance drop, so give qemu enough memory via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the Yocto kernel&lt;br /&gt;
* enable the SMP option for qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vCPUs to the guest.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;qemu built from Yocto may have a bug where only 1 vCPU is actually used by the guest (run top -d1 inside the guest and you will see only 100% CPU usage). As a workaround, switch to the qemu shipped with your Linux distribution, e.g. Ubuntu.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive, it may require a lot of disk space. You can set up an NFS server or add an extra disk image; the disk image is preferred, since an NFS server introduces extra network load.&lt;br /&gt;
The following steps create a new 10 GB disk image with an ext3 file system:&lt;br /&gt;
 $ dd if=/dev/zero of=b.img bs=1G count=10&lt;br /&gt;
 $ sudo losetup /dev/loop0 b.img&lt;br /&gt;
 $ sudo mkfs.ext3 /dev/loop0&lt;br /&gt;
  (mkfs.ext3 formats the whole image, so partitioning with fdisk is unnecessary here)&lt;br /&gt;
 $ sudo losetup -d /dev/loop0&lt;br /&gt;
&lt;br /&gt;
Append this disk image to qemu via &#039;&#039;&#039;-hdb&#039;&#039;&#039; (further changes are needed if you want a virtio block device).&lt;br /&gt;
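As a side note (a sketch under the assumption that a sparse file is acceptable for your setup), GNU dd with seek creates the same 10 GB image almost instantly, without actually writing 10 GB of zeros to the host disk:&lt;br /&gt;

```shell
# Hedged sketch: create a sparse 10 GiB image; count=0 seek=10240 sets
# the file size in 1 MiB blocks without writing the zeros themselves.
dd if=/dev/zero of=/tmp/b.img bs=1M count=0 seek=10240 2>/dev/null
stat -c %s /tmp/b.img    # prints 10737418240 (10 GiB)
rm /tmp/b.img
```

Blocks are then allocated lazily as the guest fills the file system, which also keeps the image small until it is actually used.&lt;br /&gt;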
&lt;br /&gt;
== Create network environment ==&lt;br /&gt;
By default, qemu in Yocto talks to the host via a simple 192.168.7.x network. You can use a bridge to make the guest act as a real machine on the network, e.g. to download source code from the outside. See http://www.linux-kvm.org/page/Networking#public_bridge for details.&lt;br /&gt;
&lt;br /&gt;
== Enable virtio block and network device ==&lt;br /&gt;
A virtio (block/network) device is a para-virtualized device for a KVM guest. It differs from a normally emulated device in that it is simply faster.&lt;br /&gt;
* enable yocto kernel configuration&lt;br /&gt;
 +CONFIG_VIRTIO=y&lt;br /&gt;
 +CONFIG_VIRTIO_PCI=y&lt;br /&gt;
 +CONFIG_VIRTIO_BALLOON=y&lt;br /&gt;
 +CONFIG_VIRTIO_RING=y&lt;br /&gt;
 +CONFIG_VIRTIO_NET=y&lt;br /&gt;
 +CONFIG_VIRTIO_BLK=y&lt;br /&gt;
* change the qemu block device parameter from &amp;quot;-hda &amp;lt;your_disk_image&amp;gt;&amp;quot; to &#039;&#039;&#039;-drive file=&amp;lt;your_disk_image&amp;gt;,if=virtio&#039;&#039;&#039;&lt;br /&gt;
* modify the qemu NIC parameter: change the -net nic option to include &#039;&#039;&#039;model=virtio&#039;&#039;&#039;&lt;br /&gt;
The following is an example:&lt;br /&gt;
 -net nic,model=virtio,vlan=0 -net tap,vlan=0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown -drive file=/home/source.img,if=virtio -drive file=/home/build.img,if=virtio&lt;br /&gt;
&lt;br /&gt;
== Enable VT-d for NIC/block devices ==&lt;br /&gt;
Enabling VT-d with KVM allows PCI devices (like a NIC or USB disk) to be assigned directly to the guest, achieving almost native performance. Make sure your chipset and BIOS support VT-d.&lt;br /&gt;
Details are coming soon.&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4227</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4227"/>
		<updated>2011-12-06T13:41:35Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Enable SMP */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;br /&gt;
----&lt;br /&gt;
The qemu built from yocto may have a bug where only 1 vcpu is actually used by the guest (run top -d1 inside the guest to confirm: you will see only 100% CPU usage). As a workaround, please switch to the qemu shipped with your Linux distribution, e.g. ubuntu.&lt;br /&gt;
&lt;br /&gt;
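The options above can be combined in a single qemu command line. The following is only an illustrative sketch (the kernel and image file names are placeholders; substitute the ones produced by your build):&lt;br /&gt;
 $ qemu-system-x86_64 -enable-kvm -m 2048 -smp 4 -kernel bzImage -hda core-image.ext3&lt;br /&gt;
&lt;br /&gt;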
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive, it may require a large amount of disk space. You can either set up an NFS server or add an extra disk image; the latter is preferred because an NFS server introduces extra network load.&lt;br /&gt;
The following steps create a new 10 GB disk image with an ext3 file system:&lt;br /&gt;
 $ sudo dd if=/dev/zero of=b.img bs=1G count=10&lt;br /&gt;
 $ sudo losetup /dev/loop0 b.img&lt;br /&gt;
 $ sudo fdisk /dev/loop0&lt;br /&gt;
  then create one new single partition (note: the mkfs.ext3 step below formats the whole device, so this partition step can be skipped)&lt;br /&gt;
 $ sudo mkfs.ext3 /dev/loop0&lt;br /&gt;
  to create the ext3 file system&lt;br /&gt;
 $ sudo losetup -d /dev/loop0&lt;br /&gt;
&lt;br /&gt;
Attach this disk image to qemu via &#039;&#039;&#039;-hdb&#039;&#039;&#039; (further changes are needed if you want a virtio block device).&lt;br /&gt;
&lt;br /&gt;
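After the guest boots, the extra disk shows up as a second drive and can be mounted inside the guest. A sketch (the device name is an assumption: it is typically /dev/hdb or /dev/sdb in the IDE case, /dev/vdb with virtio):&lt;br /&gt;
 # mkdir -p /mnt/extra&lt;br /&gt;
 # mount /dev/hdb /mnt/extra&lt;br /&gt;
&lt;br /&gt;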
== Create network environment ==&lt;br /&gt;
By default, qemu in yocto talks to the host via a simple 192.168.7.X network. You can use a bridge to make the guest act as a real machine on the network, e.g. to download source code from the outside. Please see http://www.linux-kvm.org/page/Networking#public_bridge for details.&lt;br /&gt;
&lt;br /&gt;
== Enable virtio block and network device ==&lt;br /&gt;
A virtio (block/network) device is a para-virtualized device for a kvm guest. It differs from a normally emulated device in that it is simply faster.&lt;br /&gt;
* enable the following yocto kernel configuration options&lt;br /&gt;
 +CONFIG_VIRTIO=y&lt;br /&gt;
 +CONFIG_VIRTIO_PCI=y&lt;br /&gt;
 +CONFIG_VIRTIO_BALLOON=y&lt;br /&gt;
 +CONFIG_VIRTIO_RING=y&lt;br /&gt;
 +CONFIG_VIRTIO_NET=y&lt;br /&gt;
 +CONFIG_VIRTIO_BLK=y&lt;br /&gt;
* change the qemu block device parameter from &amp;quot;-hda &amp;lt;your_disk_image&amp;gt;&amp;quot; to &#039;&#039;&#039;-drive file=&amp;lt;your_disk_image&amp;gt;,if=virtio&#039;&#039;&#039;&lt;br /&gt;
* modify the qemu NIC parameter: change the -net nic option to include &#039;&#039;&#039;model=virtio&#039;&#039;&#039;&lt;br /&gt;
The following is one example:&lt;br /&gt;
 -net nic,model=virtio,vlan=0 -net tap,vlan=0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown -drive file=/home/source.img,if=virtio -drive file=/home/build.img,if=virtio&lt;br /&gt;
&lt;br /&gt;
== Enable VT-d for NIC/block devices ==&lt;br /&gt;
Enabling VT-d with KVM allows a PCI device (such as a NIC or USB disk) to be assigned directly to the guest, achieving almost native performance. Please make sure your chipset and BIOS support VT-d.&lt;br /&gt;
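As an illustration only (the exact syntax depends on your kvm/qemu version, and the PCI address 01:00.0 is a placeholder for the device you want to pass through), assignment looks something like:&lt;br /&gt;
 $ qemu-system-x86_64 -enable-kvm -m 2048 -device pci-assign,host=01:00.0&lt;br /&gt;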
Details are coming soon :)&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4226</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4226"/>
		<updated>2011-12-06T13:38:02Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;br /&gt;
&lt;br /&gt;
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive, it may require a large amount of disk space. You can either set up an NFS server or add an extra disk image; the latter is preferred because an NFS server introduces extra network load.&lt;br /&gt;
The following steps create a new 10 GB disk image with an ext3 file system:&lt;br /&gt;
 $ sudo dd if=/dev/zero of=b.img bs=1G count=10&lt;br /&gt;
 $ sudo losetup /dev/loop0 b.img&lt;br /&gt;
 $ sudo fdisk /dev/loop0&lt;br /&gt;
  then create one new single partition (note: the mkfs.ext3 step below formats the whole device, so this partition step can be skipped)&lt;br /&gt;
 $ sudo mkfs.ext3 /dev/loop0&lt;br /&gt;
  to create the ext3 file system&lt;br /&gt;
 $ sudo losetup -d /dev/loop0&lt;br /&gt;
&lt;br /&gt;
Attach this disk image to qemu via &#039;&#039;&#039;-hdb&#039;&#039;&#039; (further changes are needed if you want a virtio block device).&lt;br /&gt;
&lt;br /&gt;
== Create network environment ==&lt;br /&gt;
By default, qemu in yocto talks to the host via a simple 192.168.7.X network. You can use a bridge to make the guest act as a real machine on the network, e.g. to download source code from the outside. Please see http://www.linux-kvm.org/page/Networking#public_bridge for details.&lt;br /&gt;
&lt;br /&gt;
== Enable virtio block and network device ==&lt;br /&gt;
A virtio (block/network) device is a para-virtualized device for a kvm guest. It differs from a normally emulated device in that it is simply faster.&lt;br /&gt;
* enable the following yocto kernel configuration options&lt;br /&gt;
 +CONFIG_VIRTIO=y&lt;br /&gt;
 +CONFIG_VIRTIO_PCI=y&lt;br /&gt;
 +CONFIG_VIRTIO_BALLOON=y&lt;br /&gt;
 +CONFIG_VIRTIO_RING=y&lt;br /&gt;
 +CONFIG_VIRTIO_NET=y&lt;br /&gt;
 +CONFIG_VIRTIO_BLK=y&lt;br /&gt;
* change the qemu block device parameter from &amp;quot;-hda &amp;lt;your_disk_image&amp;gt;&amp;quot; to &#039;&#039;&#039;-drive file=&amp;lt;your_disk_image&amp;gt;,if=virtio&#039;&#039;&#039;&lt;br /&gt;
* modify the qemu NIC parameter: change the -net nic option to include &#039;&#039;&#039;model=virtio&#039;&#039;&#039;&lt;br /&gt;
The following is one example:&lt;br /&gt;
 -net nic,model=virtio,vlan=0 -net tap,vlan=0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown -drive file=/home/source.img,if=virtio -drive file=/home/build.img,if=virtio&lt;br /&gt;
&lt;br /&gt;
== Enable VT-d for NIC/block devices ==&lt;br /&gt;
Enabling VT-d with KVM allows a PCI device (such as a NIC or USB disk) to be assigned directly to the guest, achieving almost native performance. Please make sure your chipset and BIOS support VT-d.&lt;br /&gt;
Details are coming soon :)&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4225</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4225"/>
		<updated>2011-12-06T13:34:30Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Enable virtio block and network device */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;br /&gt;
&lt;br /&gt;
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive, it may require a large amount of disk space. You can either set up an NFS server or add an extra disk image; the latter is preferred because an NFS server introduces extra network load.&lt;br /&gt;
The following steps create a new 10 GB disk image with an ext3 file system:&lt;br /&gt;
 $ sudo dd if=/dev/zero of=b.img bs=1G count=10&lt;br /&gt;
 $ sudo losetup /dev/loop0 b.img&lt;br /&gt;
 $ sudo fdisk /dev/loop0&lt;br /&gt;
  then create one new single partition (note: the mkfs.ext3 step below formats the whole device, so this partition step can be skipped)&lt;br /&gt;
 $ sudo mkfs.ext3 /dev/loop0&lt;br /&gt;
  to create the ext3 file system&lt;br /&gt;
 $ sudo losetup -d /dev/loop0&lt;br /&gt;
&lt;br /&gt;
Attach this disk image to qemu via &#039;&#039;&#039;-hdb&#039;&#039;&#039; (further changes are needed if you want a virtio block device).&lt;br /&gt;
&lt;br /&gt;
== Create network environment ==&lt;br /&gt;
By default, qemu in yocto talks to the host via a simple 192.168.7.X network. You can use a bridge to make the guest act as a real machine on the network, e.g. to download source code from the outside. Please see http://www.linux-kvm.org/page/Networking#public_bridge for details.&lt;br /&gt;
&lt;br /&gt;
== Enable virtio block and network device ==&lt;br /&gt;
A virtio (block/network) device is a para-virtualized device for a kvm guest. It differs from a normally emulated device in that it is simply faster.&lt;br /&gt;
* enable the following yocto kernel configuration options&lt;br /&gt;
 +CONFIG_VIRTIO=y&lt;br /&gt;
 +CONFIG_VIRTIO_PCI=y&lt;br /&gt;
 +CONFIG_VIRTIO_BALLOON=y&lt;br /&gt;
 +CONFIG_VIRTIO_RING=y&lt;br /&gt;
 +CONFIG_VIRTIO_NET=y&lt;br /&gt;
 +CONFIG_VIRTIO_BLK=y&lt;br /&gt;
* change the qemu block device parameter from &amp;quot;-hda &amp;lt;your_disk_image&amp;gt;&amp;quot; to &#039;&#039;&#039;-drive file=&amp;lt;your_disk_image&amp;gt;,if=virtio&#039;&#039;&#039;&lt;br /&gt;
* modify the qemu NIC parameter: change the -net nic option to include &#039;&#039;&#039;model=virtio&#039;&#039;&#039;&lt;br /&gt;
The following is one example:&lt;br /&gt;
 -net nic,model=virtio,vlan=0 -net tap,vlan=0,ifname=tap0,script=/etc/qemu-ifup,downscript=/etc/qemu-ifdown -drive file=/home/source.img,if=virtio -drive file=/home/build.img,if=virtio&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4224</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4224"/>
		<updated>2011-12-06T13:26:54Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;br /&gt;
&lt;br /&gt;
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive, it may require a large amount of disk space. You can either set up an NFS server or add an extra disk image; the latter is preferred because an NFS server introduces extra network load.&lt;br /&gt;
The following steps create a new 10 GB disk image with an ext3 file system:&lt;br /&gt;
 $ sudo dd if=/dev/zero of=b.img bs=1G count=10&lt;br /&gt;
 $ sudo losetup /dev/loop0 b.img&lt;br /&gt;
 $ sudo fdisk /dev/loop0&lt;br /&gt;
  then create one new single partition (note: the mkfs.ext3 step below formats the whole device, so this partition step can be skipped)&lt;br /&gt;
 $ sudo mkfs.ext3 /dev/loop0&lt;br /&gt;
  to create the ext3 file system&lt;br /&gt;
 $ sudo losetup -d /dev/loop0&lt;br /&gt;
&lt;br /&gt;
Attach this disk image to qemu via &#039;&#039;&#039;-hdb&#039;&#039;&#039; (further changes are needed if you want a virtio block device).&lt;br /&gt;
&lt;br /&gt;
== Create network environment ==&lt;br /&gt;
By default, qemu in yocto talks to the host via a simple 192.168.7.X network. You can use a bridge to make the guest act as a real machine on the network, e.g. to download source code from the outside. Please see http://www.linux-kvm.org/page/Networking#public_bridge for details.&lt;br /&gt;
&lt;br /&gt;
== Enable virtio block and network device ==&lt;br /&gt;
A virtio (block/network) device is a para-virtualized device for a kvm guest. It differs from a normally emulated device in that it is simply faster.&lt;br /&gt;
* enable the following yocto kernel configuration options&lt;br /&gt;
 +CONFIG_VIRTIO=y&lt;br /&gt;
 +CONFIG_VIRTIO_PCI=y&lt;br /&gt;
 +CONFIG_VIRTIO_BALLOON=y&lt;br /&gt;
 +CONFIG_VIRTIO_RING=y&lt;br /&gt;
 +CONFIG_VIRTIO_NET=y&lt;br /&gt;
 +CONFIG_VIRTIO_BLK=y&lt;br /&gt;
* change the qemu block device parameter from &amp;quot;-hda &amp;lt;your_disk_image&amp;gt;&amp;quot; to &#039;&#039;&#039;-drive file=&amp;lt;your_disk_image&amp;gt;,if=virtio&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4223</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4223"/>
		<updated>2011-12-06T13:24:44Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;br /&gt;
&lt;br /&gt;
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive, it may require a large amount of disk space. You can either set up an NFS server or add an extra disk image; the latter is preferred because an NFS server introduces extra network load.&lt;br /&gt;
The following steps create a new 10 GB disk image with an ext3 file system:&lt;br /&gt;
 $ sudo dd if=/dev/zero of=b.img bs=1G count=10&lt;br /&gt;
 $ sudo losetup /dev/loop0 b.img&lt;br /&gt;
 $ sudo fdisk /dev/loop0&lt;br /&gt;
  then create one new single partition (note: the mkfs.ext3 step below formats the whole device, so this partition step can be skipped)&lt;br /&gt;
 $ sudo mkfs.ext3 /dev/loop0&lt;br /&gt;
  to create the ext3 file system&lt;br /&gt;
 $ sudo losetup -d /dev/loop0&lt;br /&gt;
&lt;br /&gt;
Attach this disk image to qemu via &#039;&#039;&#039;-hdb&#039;&#039;&#039; (further changes are needed if you want a virtio block device).&lt;br /&gt;
&lt;br /&gt;
== Create network environment ==&lt;br /&gt;
By default, qemu in yocto talks to the host via a simple 192.168.7.X network. You can use a bridge to make the guest act as a real machine on the network, e.g. to download source code from the outside. Please see http://www.linux-kvm.org/page/Networking#public_bridge for details.&lt;br /&gt;
&lt;br /&gt;
== Enable virtio block and network device ==&lt;br /&gt;
A virtio (block/network) device is a para-virtualized device for a kvm guest. It differs from a normally emulated device in that it is simply faster.&lt;br /&gt;
# enable the following yocto kernel configuration options&lt;br /&gt;
 +CONFIG_VIRTIO=y&lt;br /&gt;
 +CONFIG_VIRTIO_PCI=y&lt;br /&gt;
 +CONFIG_VIRTIO_BALLOON=y&lt;br /&gt;
 +CONFIG_VIRTIO_RING=y&lt;br /&gt;
 +CONFIG_VIRTIO_NET=y&lt;br /&gt;
 +CONFIG_VIRTIO_BLK=y&lt;br /&gt;
# change the qemu block device parameter from &amp;quot;-hda &amp;lt;your_disk_image&amp;gt;&amp;quot; to &#039;&#039;&#039;-drive file=&amp;lt;your_disk_image&amp;gt;,if=virtio&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4222</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4222"/>
		<updated>2011-12-06T13:14:25Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;br /&gt;
&lt;br /&gt;
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive, it may require a large amount of disk space. You can either set up an NFS server or add an extra disk image; the latter is preferred because an NFS server introduces extra network load.&lt;br /&gt;
The following steps create a new 10 GB disk image with an ext3 file system:&lt;br /&gt;
 $ sudo dd if=/dev/zero of=b.img bs=1G count=10&lt;br /&gt;
 $ sudo losetup /dev/loop0 b.img&lt;br /&gt;
 $ sudo fdisk /dev/loop0&lt;br /&gt;
  then create one new single partition (note: the mkfs.ext3 step below formats the whole device, so this partition step can be skipped)&lt;br /&gt;
 $ sudo mkfs.ext3 /dev/loop0&lt;br /&gt;
  to create the ext3 file system&lt;br /&gt;
 $ sudo losetup -d /dev/loop0&lt;br /&gt;
&lt;br /&gt;
Attach this disk image to qemu via &#039;&#039;&#039;-hdb&#039;&#039;&#039; (further changes are needed if you want a virtio block device).&lt;br /&gt;
&lt;br /&gt;
== Create network environment ==&lt;br /&gt;
By default, qemu in yocto talks to the host via a simple 192.168.7.X network. You can use a bridge to make the guest act as a real machine on the network, e.g. to download source code from the outside. Please see http://www.linux-kvm.org/page/Networking#public_bridge for details.&lt;br /&gt;
&lt;br /&gt;
== Enable virtio block and network device ==&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4221</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4221"/>
		<updated>2011-12-06T08:31:45Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;br /&gt;
&lt;br /&gt;
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive, it may require a large amount of disk space. You can either set up an NFS server or add an extra disk image; the latter is preferred because an NFS server introduces extra network load.&lt;br /&gt;
The following steps create a new 10 GB disk image with an ext3 file system:&lt;br /&gt;
 $ sudo dd if=/dev/zero of=b.img bs=1G count=10&lt;br /&gt;
 $ sudo losetup /dev/loop0 b.img&lt;br /&gt;
 $ sudo fdisk /dev/loop0&lt;br /&gt;
  then create one new single partition (note: the mkfs.ext3 step below formats the whole device, so this partition step can be skipped)&lt;br /&gt;
 $ sudo mkfs.ext3 /dev/loop0&lt;br /&gt;
  to create the ext3 file system&lt;br /&gt;
 $ sudo losetup -d /dev/loop0&lt;br /&gt;
&lt;br /&gt;
Attach this disk image to qemu via &#039;&#039;&#039;-hdb&#039;&#039;&#039; (further changes are needed if you want a virtio block device).&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4220</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4220"/>
		<updated>2011-12-06T06:44:01Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;br /&gt;
&lt;br /&gt;
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive, it may require a large amount of disk space. You can either set up an NFS server or add an extra disk image; the latter is preferred because an NFS server introduces extra network load.&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4218</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4218"/>
		<updated>2011-12-06T06:01:36Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;br /&gt;
&lt;br /&gt;
== Add extra disk space ==&lt;br /&gt;
If the workload inside qemu is disk-intensive.&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4217</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4217"/>
		<updated>2011-12-06T05:58:40Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;br /&gt;
If the workload inside qemu is CPU-intensive, you can enable SMP:&lt;br /&gt;
* enable the SMP configuration in the yocto kernel&lt;br /&gt;
* pass the SMP option to qemu, e.g. &#039;&#039;&#039;-smp 4&#039;&#039;&#039; to give 4 vcpus to the guest.&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4216</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4216"/>
		<updated>2011-12-06T05:53:09Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &#039;&#039;&#039;-m 2048&#039;&#039;&#039; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4215</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4215"/>
		<updated>2011-12-06T05:52:15Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
If you have a processor with VT-x support, you can enable KVM so that hardware virtualization is used instead of emulation. Please refer to the following:&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
== Add big memory ==&lt;br /&gt;
Sometimes the workload inside qemu requires a large amount of memory, and failing to provide it leads to a performance drop. Please give qemu enough memory, e.g. via &amp;quot;-m 2048&amp;quot; (2 GB).&lt;br /&gt;
&lt;br /&gt;
== Enable SMP ==&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4214</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4214"/>
		<updated>2011-12-06T05:43:44Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= Improve qemu performance =&lt;br /&gt;
Qemu in yocto is an emulator and can be slow when running heavy tasks inside it, e.g. running a yocto build inside qemu. This article describes possible ways to improve its performance.&lt;br /&gt;
&lt;br /&gt;
== Enable KVM ==&lt;br /&gt;
&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4213</id>
		<title>BKM: improve qemu performance</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BKM:_improve_qemu_performance&amp;diff=4213"/>
		<updated>2011-12-06T05:38:37Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: Created page with &amp;quot;* How to enable KVM for Poky qemu  test&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
&lt;br /&gt;
test&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BSPs&amp;diff=4212</id>
		<title>BSPs</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BSPs&amp;diff=4212"/>
		<updated>2011-12-06T05:38:01Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[Yocto BSP One-Stop Shop (Documentation Overview, Getting Started, FAQs, and more)]]&lt;br /&gt;
* [[Poky Contributions]]&lt;br /&gt;
* [[Poky NFS Root]]&lt;br /&gt;
* [[Wind River Kernel]]&lt;br /&gt;
* [[Merging Packages from OpenEmbedded]]&lt;br /&gt;
* [[How to turn on Poky Audio on Netbook]]&lt;br /&gt;
* [[How to Build Target Application in the Host Machine]]&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;br /&gt;
* [[Transcript: from git checkout to qemu desktop]]&lt;br /&gt;
* [[Transcript: from git checkout to meta-intel BSP]]&lt;br /&gt;
* [[BKM: starting a new BSP]]&lt;br /&gt;
* [[Transcript: creating one generic Atom BSP from another]]&lt;br /&gt;
* [[BKM: improve qemu performance]]&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=777</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=777"/>
		<updated>2011-02-17T14:53:53Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* know issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Verify the sstate is working ==&lt;br /&gt;
If sstate is working, you will see output like this:&lt;br /&gt;
&lt;br /&gt;
 NOTE: Preparing runqueue&lt;br /&gt;
 NOTE: Executing SetScene Tasks&lt;br /&gt;
 NOTE: Running setscene task 118 of 155 (virtual:native:/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot_setscene)&lt;br /&gt;
 NOTE: Running setscene task 119 of 155 (/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/quilt/quilt-native_0.48.bb:do_populate_sysroot_setscene)&lt;br /&gt;
&lt;br /&gt;
If no setscene tasks run, the sstate cache could not be used, either because the sstate mechanism is broken or because some change in the environment has invalidated the cache. You can use the following tool to check for differences between a cached sstate file and a newly produced one:&lt;br /&gt;
&lt;br /&gt;
  #bitbake-diffsigs /mnt/sstate/sstate-packagename-checksum1-tgz.siginfo builddir/sstate-cache/sstate-packagename-checksum2-tgz.siginfo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
* Different build host distributions may lead to checksum mismatches&lt;br /&gt;
* A new changeset may occasionally invalidate the whole sstate cache&lt;br /&gt;
* A native package&#039;s sstate cache cannot be reused between 32-bit and 64-bit hosts&lt;br /&gt;
* A missing sstate file on an http/ftp server causes wget to hang for a long time due to retries and timeouts&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=776</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=776"/>
		<updated>2011-02-17T14:51:55Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* know issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Verify the sstate is working ==&lt;br /&gt;
If sstate is working, you will see output like this:&lt;br /&gt;
&lt;br /&gt;
 NOTE: Preparing runqueue&lt;br /&gt;
 NOTE: Executing SetScene Tasks&lt;br /&gt;
 NOTE: Running setscene task 118 of 155 (virtual:native:/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot_setscene)&lt;br /&gt;
 NOTE: Running setscene task 119 of 155 (/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/quilt/quilt-native_0.48.bb:do_populate_sysroot_setscene)&lt;br /&gt;
&lt;br /&gt;
If no setscene tasks run, the sstate cache could not be used, either because the sstate mechanism is broken or because some change in the environment has invalidated the cache. You can use the following tool to check for differences between a cached sstate file and a newly produced one:&lt;br /&gt;
&lt;br /&gt;
  #bitbake-diffsigs /mnt/sstate/sstate-packagename-checksum1-tgz.siginfo builddir/sstate-cache/sstate-packagename-checksum2-tgz.siginfo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
* Different build host distributions may lead to checksum mismatches&lt;br /&gt;
* A new changeset may occasionally invalidate the whole sstate cache&lt;br /&gt;
** Bbclass&lt;br /&gt;
** Base packages (libc, …)&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=775</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=775"/>
		<updated>2011-02-17T14:51:18Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* know issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Verify the sstate is working ==&lt;br /&gt;
If sstate is working, you will see output like this:&lt;br /&gt;
&lt;br /&gt;
 NOTE: Preparing runqueue&lt;br /&gt;
 NOTE: Executing SetScene Tasks&lt;br /&gt;
 NOTE: Running setscene task 118 of 155 (virtual:native:/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot_setscene)&lt;br /&gt;
 NOTE: Running setscene task 119 of 155 (/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/quilt/quilt-native_0.48.bb:do_populate_sysroot_setscene)&lt;br /&gt;
&lt;br /&gt;
If no setscene tasks run, the sstate cache could not be used, either because the sstate mechanism is broken or because some change in the environment has invalidated the cache. You can use the following tool to check for differences between a cached sstate file and a newly produced one:&lt;br /&gt;
&lt;br /&gt;
  #bitbake-diffsigs /mnt/sstate/sstate-packagename-checksum1-tgz.siginfo builddir/sstate-cache/sstate-packagename-checksum2-tgz.siginfo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
* Different build host distributions may lead to checksum mismatches&lt;br /&gt;
* A new changeset may occasionally invalidate the whole sstate cache&lt;br /&gt;
** Bbclass&lt;br /&gt;
** Base packages (libc, …)&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=774</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=774"/>
		<updated>2011-02-17T14:50:59Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* know issues */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Verify the sstate is working ==&lt;br /&gt;
If sstate is working, you will see output like this:&lt;br /&gt;
&lt;br /&gt;
 NOTE: Preparing runqueue&lt;br /&gt;
 NOTE: Executing SetScene Tasks&lt;br /&gt;
 NOTE: Running setscene task 118 of 155 (virtual:native:/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot_setscene)&lt;br /&gt;
 NOTE: Running setscene task 119 of 155 (/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/quilt/quilt-native_0.48.bb:do_populate_sysroot_setscene)&lt;br /&gt;
&lt;br /&gt;
If no setscene tasks run, the sstate cache could not be used, either because the sstate mechanism is broken or because some change in the environment has invalidated the cache. You can use the following tool to check for differences between a cached sstate file and a newly produced one:&lt;br /&gt;
&lt;br /&gt;
  #bitbake-diffsigs /mnt/sstate/sstate-packagename-checksum1-tgz.siginfo builddir/sstate-cache/sstate-packagename-checksum2-tgz.siginfo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;br /&gt;
* Different build host distributions may lead to checksum mismatches&lt;br /&gt;
* A new changeset may occasionally invalidate the whole sstate cache&lt;br /&gt;
** Bbclass&lt;br /&gt;
** Base packages (libc, …)&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=773</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=773"/>
		<updated>2011-02-17T14:48:47Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Verify the sstate is working */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Verify the sstate is working ==&lt;br /&gt;
If sstate is working, you will see output like this:&lt;br /&gt;
&lt;br /&gt;
 NOTE: Preparing runqueue&lt;br /&gt;
 NOTE: Executing SetScene Tasks&lt;br /&gt;
 NOTE: Running setscene task 118 of 155 (virtual:native:/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot_setscene)&lt;br /&gt;
 NOTE: Running setscene task 119 of 155 (/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/quilt/quilt-native_0.48.bb:do_populate_sysroot_setscene)&lt;br /&gt;
&lt;br /&gt;
If no setscene tasks run, the sstate cache could not be used, either because the sstate mechanism is broken or because some change in the environment has invalidated the cache. You can use the following tool to check for differences between a cached sstate file and a newly produced one:&lt;br /&gt;
&lt;br /&gt;
  #bitbake-diffsigs /mnt/sstate/sstate-packagename-checksum1-tgz.siginfo builddir/sstate-cache/sstate-packagename-checksum2-tgz.siginfo&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Known issues ==&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=772</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=772"/>
		<updated>2011-02-17T14:47:37Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Verify the sstate is working */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Verify the sstate is working ==&lt;br /&gt;
If sstate is working, you will see output like this:&lt;br /&gt;
&lt;br /&gt;
 NOTE: Preparing runqueue&lt;br /&gt;
 NOTE: Executing SetScene Tasks&lt;br /&gt;
 NOTE: Running setscene task 118 of 155 (virtual:native:/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot_setscene)&lt;br /&gt;
 NOTE: Running setscene task 119 of 155 (/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/quilt/quilt-native_0.48.bb:do_populate_sysroot_setscene)&lt;br /&gt;
&lt;br /&gt;
If no setscene tasks run, the sstate cache could not be used, either because the sstate mechanism is broken or because some change in the environment has invalidated the cache. You can use the following tool to check for differences between a cached sstate file and a newly produced one:&lt;br /&gt;
&lt;br /&gt;
  #bitbake-diffsigs /mnt/sstate/sstate-packagename-checksum1-tgz.siginfo builddir/sstate-cache/sstate-packagename-checksum2-tgz.siginfo&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=771</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=771"/>
		<updated>2011-02-17T14:37:32Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Verify the sstate is working */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Verify the sstate is working ==&lt;br /&gt;
If sstate is working, you will see output like this:&lt;br /&gt;
&lt;br /&gt;
 NOTE: Preparing runqueue&lt;br /&gt;
 NOTE: Executing SetScene Tasks&lt;br /&gt;
 NOTE: Running setscene task 118 of 155 (virtual:native:/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot_setscene)&lt;br /&gt;
 NOTE: Running setscene task 119 of 155 (/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/quilt/quilt-native_0.48.bb:do_populate_sysroot_setscene)&lt;br /&gt;
&lt;br /&gt;
If no setscene tasks run, the sstate cache could not be used, either because the sstate mechanism is broken or because some change in the environment has invalidated the cache. You can use the following tools to check what changed.&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=770</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=770"/>
		<updated>2011-02-17T14:32:25Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Verify the sstate is working */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Verify the sstate is working ==&lt;br /&gt;
If sstate is working, you will see output like this:&lt;br /&gt;
&lt;br /&gt;
 NOTE: Preparing runqueue&lt;br /&gt;
 NOTE: Executing SetScene Tasks&lt;br /&gt;
 NOTE: Running setscene task 118 of 155 (virtual:native:/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/pseudo/pseudo_git.bb:do_populate_sysroot_setscene)&lt;br /&gt;
 NOTE: Running setscene task 119 of 155 (/home/lulianhao/poky-build/edwin/poky/meta/recipes-devtools/quilt/quilt-native_0.48.bb:do_populate_sysroot_setscene)&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=769</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=769"/>
		<updated>2011-02-17T14:26:13Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Use local sstate cache */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Verify the sstate is working ==&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=768</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=768"/>
		<updated>2011-02-17T14:16:31Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Use local sstate cache */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=767</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=767"/>
		<updated>2011-02-17T14:15:01Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Use local sstate cache */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and the consumer of the sstate have the same environment. In the best case, this can cut build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use an sstate cache server ==&lt;br /&gt;
A separate build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory, and it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;/share/sstate-cache&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Each build will then consume a matching sstate file when one exists, and produce a new one when it does not.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Please do not start multiple builds at the same time; concurrent access to the shared cache can cause race conditions.&#039;&#039;&#039;&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=766</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=766"/>
		<updated>2011-02-17T14:12:40Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Use local sstate cache */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and consumer of the sstate share the same environment. In the best case, this can reduce build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use a sstate cache server ==&lt;br /&gt;
A second build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory; it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;br /&gt;
&lt;br /&gt;
You can point all the builds at the same SSTATE_DIR to share sstate files between them, like this:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_DIR ?= &amp;quot;$/share/sstate-cache&amp;quot;&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=765</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=765"/>
		<updated>2011-02-17T14:08:36Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and consumer of the sstate share the same environment. In the best case, this can reduce build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Use a sstate cache server ==&lt;br /&gt;
A second build machine is needed to produce the sstate files periodically. The default sstate directory is &#039;&#039;&#039;sstate-cache&#039;&#039;&#039; under the build directory; it needs to be exported via an NFS, HTTP, or FTP server.&lt;br /&gt;
&lt;br /&gt;
On your local machine, change conf/local.conf as follows:&lt;br /&gt;
&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* http://someserver.tld/share/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
For an NFS server:&lt;br /&gt;
 SSTATE_MIRRORS ?= &amp;quot;\&lt;br /&gt;
 file://.* file:///local/mounted/dir/sstate/&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Use local sstate cache ==&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=764</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=764"/>
		<updated>2011-02-17T13:31:02Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* sstate background */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and consumer of the sstate share the same environment. In the best case, this can reduce build time by more than 80%.&lt;br /&gt;
&lt;br /&gt;
== Set up a sstate cache server ==&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=763</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=763"/>
		<updated>2011-02-17T13:28:19Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /*  */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and consumer of the sstate share the same environment.&lt;br /&gt;
&lt;br /&gt;
== Set up a sstate cache server ==&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=762</id>
		<title>Enable sstate cache</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Enable_sstate_cache&amp;diff=762"/>
		<updated>2011-02-17T13:26:35Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: Created page with &amp;#039;== sstate background == To speed up the build process, sstate provides a cache mechanism, where sstate files from server can be reused to avoid build from scratch if the producer…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== sstate background ==&lt;br /&gt;
To speed up the build process, sstate provides a cache mechanism: sstate files from a server can be reused to avoid building from scratch, provided the producer and consumer of the sstate share the same environment.&lt;br /&gt;
&lt;br /&gt;
==  ==&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=Processes_and_Activities&amp;diff=761</id>
		<title>Processes and Activities</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=Processes_and_Activities&amp;diff=761"/>
		<updated>2011-02-17T12:51:36Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Welcome to the Processes and Activities Page!&lt;br /&gt;
&lt;br /&gt;
* [[Best Known Methods (BKMs) for Package Updating]]&lt;br /&gt;
* [[Working Behind a Network Proxy]]&lt;br /&gt;
* [[SDK Generator]]&lt;br /&gt;
* [[QA]]&lt;br /&gt;
* [[Kernel]]&lt;br /&gt;
* [[Core]]&lt;br /&gt;
* [[Bugzilla Configuration and Bug Tracking]]&lt;br /&gt;
* [[Yocto Release Engineering]]&lt;br /&gt;
* [[Performance]]&lt;br /&gt;
* [[Program Management Plan]]&lt;br /&gt;
* [[Enable sstate cache]]&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=How_to_enable_KVM_for_Poky_qemu&amp;diff=394</id>
		<title>How to enable KVM for Poky qemu</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=How_to_enable_KVM_for_Poky_qemu&amp;diff=394"/>
		<updated>2010-12-22T06:04:01Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: /* Change the kvm dev ownership for non-root user */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= enable KVM for poky qemu =&lt;br /&gt;
== KVM introduction ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;KVM&#039;&#039; (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, &#039;&#039;&#039;kvm.ko&#039;&#039;&#039;, that provides the core virtualization infrastructure, and a processor-specific module, &#039;&#039;&#039;kvm-intel.ko&#039;&#039;&#039; or &#039;&#039;&#039;kvm-amd.ko&#039;&#039;&#039;. KVM also requires a modified QEMU, although work is underway to get the required changes upstream.&lt;br /&gt;
&lt;br /&gt;
Compared with native qemu, which is a pure emulator, KVM offers better performance, since most guest instructions can be executed directly on the host processor.&lt;br /&gt;
&lt;br /&gt;
== Detect VT support ==&lt;br /&gt;
You need to make sure your x86 processor supports VT before using KVM.&lt;br /&gt;
&lt;br /&gt;
With a recent enough Linux kernel, run the command:&lt;br /&gt;
&lt;br /&gt;
 $ egrep &#039;^flags.*(vmx|svm)&#039; /proc/cpuinfo&lt;br /&gt;
&lt;br /&gt;
If anything shows up, you have VT. You can also check the processor model name (from `/proc/cpuinfo`) on the vendor&#039;s web site.&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
&lt;br /&gt;
* You&#039;ll never see (vmx|svm) in /proc/cpuinfo if you&#039;re currently running in a Xen dom0 or domU. The Xen hypervisor suppresses these flags in order to prevent hijacking.&lt;br /&gt;
&lt;br /&gt;
* Some manufacturers disable VT in the machine&#039;s BIOS, in such a way that it cannot be re-enabled.&lt;br /&gt;
&lt;br /&gt;
* `/proc/cpuinfo` only shows virtualization capabilities starting with Linux 2.6.15 (Intel) and Linux 2.6.16 (AMD). Use the `uname -r` command to query your kernel version.&lt;br /&gt;
&lt;br /&gt;
In case of doubt, contact your hardware vendor.&lt;br /&gt;
&lt;br /&gt;
== Get the KVM modules ==&lt;br /&gt;
&lt;br /&gt;
The quickest and easiest way is to use the modules shipped with your distribution. All major community and enterprise distributions contain KVM kernel modules; they are either installed by default or provided by a kvm package. If you are looking for stability, these are the best choice: no effort is needed to build or install the modules, support is provided by the distribution, and the distribution/module combination is well tested.&lt;br /&gt;
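&lt;br /&gt;
For example, on Debian/Ubuntu-style distributions the modules and a KVM-enabled qemu typically come from a single package (the package name varies by distribution and release, so check yours):&lt;br /&gt;
&lt;br /&gt;
 $ sudo apt-get install qemu-kvm&lt;br /&gt;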
&lt;br /&gt;
Otherwise, refer to [http://www.linux-kvm.org/page/Getting_the_kvm_kernel_modules getting kvm modules].&lt;br /&gt;
&lt;br /&gt;
Once you have the modules, you can load them into the kernel with (use kvm-amd on AMD processors):&lt;br /&gt;
&lt;br /&gt;
 $ sudo modprobe kvm-intel&lt;br /&gt;
&lt;br /&gt;
== Make QEMU KVM-aware ==&lt;br /&gt;
Upstream QEMU already supports KVM, and I have checked in a patch to enable it, so you get a KVM-capable qemu after a Poky build.&lt;br /&gt;
&lt;br /&gt;
== Change the kvm dev ownership for non-root user ==&lt;br /&gt;
qemu is started as a non-root user in Poky, but on some distributions &#039;&#039;/dev/kvm&#039;&#039; remains owned by root:root, which allows only root to use KVM. To work around this, create a new group named &#039;&#039;&#039;kvm&#039;&#039;&#039; and make both &#039;&#039;/dev/kvm&#039;&#039; and the non-root user belong to it:&lt;br /&gt;
&lt;br /&gt;
 $ sudo addgroup --system kvm&lt;br /&gt;
 $ sudo adduser $USER kvm&lt;br /&gt;
 $ sudo chown root:kvm /dev/kvm&lt;br /&gt;
 $ sudo chmod 0660 /dev/kvm&lt;br /&gt;
&lt;br /&gt;
The output of &amp;quot;ls -l /dev/kvm&amp;quot; should look like this:&lt;br /&gt;
 crw-rw----+ 1 root kvm 10, 232 2010-07-02 09:27 /dev/kvm&lt;br /&gt;
&lt;br /&gt;
On a system that runs udev, you will probably need to add the following line to your udev configuration so that the newly created device is automatically given the right group (e.g. on Ubuntu, add a line to /etc/udev/rules.d/40-permissions.rules):&lt;br /&gt;
&lt;br /&gt;
 KERNEL==&amp;quot;kvm&amp;quot;, GROUP=&amp;quot;kvm&amp;quot;&lt;br /&gt;
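&lt;br /&gt;
After adding the rule, reload the udev rules so the change takes effect without a reboot (on older systems, restarting the udev service also works):&lt;br /&gt;
&lt;br /&gt;
 $ sudo udevadm control --reload-rules&lt;br /&gt;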
&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
* You need to log out and log back in as the non-root user for the group change to take effect&lt;br /&gt;
* Some distributions already do this, in which case you can skip this step&lt;br /&gt;
&lt;br /&gt;
== Running qemu with KVM enabled ==&lt;br /&gt;
Just append the &amp;quot;kvm&amp;quot; parameter to &#039;&#039;poky-qemu&#039;&#039;, like this:&lt;br /&gt;
&lt;br /&gt;
 $ poky-qemu qemux86 kvm&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=How_to_enable_KVM_for_Poky_qemu&amp;diff=393</id>
		<title>How to enable KVM for Poky qemu</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=How_to_enable_KVM_for_Poky_qemu&amp;diff=393"/>
		<updated>2010-12-22T01:20:56Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: Created page with &amp;#039;= enable KVM for poky qemu = == KVM introduction ==  &amp;#039;&amp;#039;KVM&amp;#039;&amp;#039; (for Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware containing virtualizat…&amp;#039;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;= enable KVM for poky qemu =&lt;br /&gt;
== KVM introduction ==&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;KVM&#039;&#039; (Kernel-based Virtual Machine) is a full virtualization solution for Linux on x86 hardware with virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, &#039;&#039;&#039;kvm.ko&#039;&#039;&#039;, that provides the core virtualization infrastructure, and a processor-specific module, &#039;&#039;&#039;kvm-intel.ko&#039;&#039;&#039; or &#039;&#039;&#039;kvm-amd.ko&#039;&#039;&#039;. KVM also requires a modified QEMU, although work is underway to get the required changes upstream.&lt;br /&gt;
&lt;br /&gt;
Compared with native qemu, which is a pure emulator, KVM offers better performance, since most guest instructions can be executed directly on the host processor.&lt;br /&gt;
&lt;br /&gt;
== Detect VT support ==&lt;br /&gt;
You need to make sure your x86 processor supports VT before using KVM.&lt;br /&gt;
&lt;br /&gt;
With a recent enough Linux kernel, run the command:&lt;br /&gt;
&lt;br /&gt;
 $ egrep &#039;^flags.*(vmx|svm)&#039; /proc/cpuinfo&lt;br /&gt;
&lt;br /&gt;
If anything shows up, you have VT. You can also check the processor model name (from `/proc/cpuinfo`) on the vendor&#039;s web site.&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
&lt;br /&gt;
* You&#039;ll never see (vmx|svm) in /proc/cpuinfo if you&#039;re currently running in a Xen dom0 or domU. The Xen hypervisor suppresses these flags in order to prevent hijacking.&lt;br /&gt;
&lt;br /&gt;
* Some manufacturers disable VT in the machine&#039;s BIOS, in such a way that it cannot be re-enabled.&lt;br /&gt;
&lt;br /&gt;
* `/proc/cpuinfo` only shows virtualization capabilities starting with Linux 2.6.15 (Intel) and Linux 2.6.16 (AMD). Use the `uname -r` command to query your kernel version.&lt;br /&gt;
&lt;br /&gt;
In case of doubt, contact your hardware vendor.&lt;br /&gt;
&lt;br /&gt;
== Get the KVM modules ==&lt;br /&gt;
&lt;br /&gt;
The quickest and easiest way is to use the modules shipped with your distribution. All major community and enterprise distributions contain KVM kernel modules; they are either installed by default or provided by a kvm package. If you are looking for stability, these are the best choice: no effort is needed to build or install the modules, support is provided by the distribution, and the distribution/module combination is well tested.&lt;br /&gt;
&lt;br /&gt;
Otherwise, refer to [http://www.linux-kvm.org/page/Getting_the_kvm_kernel_modules getting kvm modules].&lt;br /&gt;
&lt;br /&gt;
Once you have the modules, you can load them into the kernel with (use kvm-amd on AMD processors):&lt;br /&gt;
&lt;br /&gt;
 $ sudo modprobe kvm-intel&lt;br /&gt;
&lt;br /&gt;
== Make QEMU KVM-aware ==&lt;br /&gt;
Upstream QEMU already supports KVM, and I have checked in a patch to enable it, so you get a KVM-capable qemu after a Poky build.&lt;br /&gt;
&lt;br /&gt;
== Change the kvm dev ownership for non-root user ==&lt;br /&gt;
qemu is started as a non-root user in Poky, but on some distributions &#039;&#039;/dev/kvm&#039;&#039; remains owned by root:root, which allows only root to use KVM. To work around this, create a new group named &#039;&#039;&#039;kvm&#039;&#039;&#039; and make both &#039;&#039;/dev/kvm&#039;&#039; and the non-root user belong to it:&lt;br /&gt;
&lt;br /&gt;
 $ sudo addgroup --system kvm&lt;br /&gt;
 $ sudo adduser $USER kvm&lt;br /&gt;
 $ sudo chown root:kvm /dev/kvm&lt;br /&gt;
 $ sudo chmod 0660 /dev/kvm&lt;br /&gt;
&lt;br /&gt;
The output of &amp;quot;ls -l /dev/kvm&amp;quot; should look like this:&lt;br /&gt;
 crw-rw----+ 1 root kvm 10, 232 2010-07-02 09:27 /dev/kvm&lt;br /&gt;
&lt;br /&gt;
Note:&lt;br /&gt;
* You need to log out and log back in as the non-root user for the group change to take effect&lt;br /&gt;
* Some distributions already do this, in which case you can skip this step&lt;br /&gt;
&lt;br /&gt;
== Running qemu with KVM enabled ==&lt;br /&gt;
Just append the &amp;quot;kvm&amp;quot; parameter to &#039;&#039;poky-qemu&#039;&#039;, like this:&lt;br /&gt;
&lt;br /&gt;
 $ poky-qemu qemux86 kvm&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
	<entry>
		<id>https://wiki.yoctoproject.org/wiki/index.php?title=BSPs&amp;diff=392</id>
		<title>BSPs</title>
		<link rel="alternate" type="text/html" href="https://wiki.yoctoproject.org/wiki/index.php?title=BSPs&amp;diff=392"/>
		<updated>2010-12-22T01:17:25Z</updated>

		<summary type="html">&lt;p&gt;Gzhai: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;* [[Poky Contributions]]&lt;br /&gt;
* [[Poky NFS Root]]&lt;br /&gt;
* [[Wind River Kernel]]&lt;br /&gt;
* [[Merging Packages from OpenEmbedded]]&lt;br /&gt;
* [[How to turn on Poky Audio on Netbook]]&lt;br /&gt;
* [[How to Build Target Application in the Host Machine]]&lt;br /&gt;
* [[How to enable KVM for Poky qemu]]&lt;/div&gt;</summary>
		<author><name>Gzhai</name></author>
	</entry>
</feed>