<?xml version='1.0' encoding='UTF-8'?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/" xmlns:blogger="http://schemas.google.com/blogger/2008" xmlns:georss="http://www.georss.org/georss" xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr="http://purl.org/syndication/thread/1.0" version="2.0"><channel><atom:id>tag:blogger.com,1999:blog-4267025812143527462</atom:id><lastBuildDate>Fri, 01 Nov 2024 10:40:33 +0000</lastBuildDate><category>biometric recognition</category><category>Methodology biometric recognition</category><category>Biometric News</category><category>Biometric Software</category><category>literature review</category><category>Feature Extraction</category><category>Neural Network</category><category>Report Outline</category><category>Backpropagation Neural Network</category><category>Barska Biometric Safe</category><category>Biometric Security on Your Laptop</category><category>Biometric Time Clocks</category><category>Eigenvalue</category><category>Eigenvector</category><category>Face Recognition Result and Discussion</category><category>Facial Recognition Gone Wrong</category><category>Finger Biometric</category><category>Food Service Solutions</category><category>Gunvault</category><category>Gunvault GVB1000 Mini Vault</category><category>Gunvault GVB1000 Mini Vault Overview</category><category>Introduction Biometric Recognition</category><category>Neural Network Implementation</category><category>Normalization Technique</category><category>Principal Component Analysis</category><category>biometric identification</category><category>biometric school lunch program</category><category>face recognition</category><category>fingerprint biometric</category><category>laptop biometrics</category><category>school lunch biometric fingerprint solutions</category><category>school lunch biometric systems</category><title>Biometric Recognition</title><description>Biometric News,Article, System and Algorithm 
Biometric Face Recognition using Principal Component Analysis (PCA) and Backpropagation Neural Network.</description><link>http://biometric-recognition.blogspot.com/</link><managingEditor>noreply@blogger.com (Firdaus)</managingEditor><generator>Blogger</generator><openSearch:totalResults>15</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-8611743727563812471</guid><pubDate>Sun, 14 Aug 2011 07:49:00 +0000</pubDate><atom:updated>2011-08-14T00:51:45.517-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Biometric News</category><category domain="http://www.blogger.com/atom/ns#">Biometric Time Clocks</category><title>Biometric Time Clocks Gaining Popularity Amongst HR Professionals</title><description>&lt;span class=&quot;Apple-style-span&quot; style=&quot;color: #333333; font-family: Arial; font-size: 12px;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
&lt;div id=&quot;article-content&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;With the latest recession having a huge impact on businesses, many employers have been forced to cut costs by any means possible.&amp;nbsp; Usually the first thing to happen is a reduction in staff.&amp;nbsp; However, what many employers fail to realize is that instead of reducing staff, they can maximize the use of their existing HR budgets by keeping better records of their hourly employees.&amp;nbsp; One of the latest trends sweeping the globe that can help curb unnecessary costs is the use of a biometric time clock.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;iframe align=&quot;left&quot; frameborder=&quot;0&quot; marginheight=&quot;0&quot; marginwidth=&quot;0&quot; scrolling=&quot;no&quot; src=&quot;http://rcm.amazon.com/e/cm?t=httpoverthesh-20&amp;amp;o=1&amp;amp;p=8&amp;amp;l=bpl&amp;amp;asins=B0040BBRI4&amp;amp;fc1=000000&amp;amp;IS2=1&amp;amp;lt1=_blank&amp;amp;m=amazon&amp;amp;lc1=0000FF&amp;amp;bc1=000000&amp;amp;bg1=FFFFFF&amp;amp;f=ifr&quot; style=&quot;align: left; height: 245px; padding-right: 10px; padding-top: 5px; width: 131px;&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;While electronic time clocks have also been popular in recent years, they definitely have their shortcomings.&amp;nbsp; For example, employees who use PINs or swipe access cards to punch in/out are easily able to manipulate the system by having their friends enter their PIN or swipe their badge for them.&amp;nbsp; This is often referred to as buddy punching and costs companies worldwide millions of dollars every year.&lt;/div&gt;&lt;div 
style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;The next step in the evolution of the time clock is to eliminate this possibility, which is where biometric time clocks come in. These units act very similarly to traditional time cards or time punching machines.&amp;nbsp; Instead of punching a time card to begin or end a shift, employees simply pass their hand under a biometric scanner or swipe a finger across a fingerprint reader.&lt;br /&gt;
&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;The technology behind these machines has advanced significantly in recent years and is much more affordable for employers. Employees can no longer cheat the system, and access can even be restricted by area of the building according to where each employee is or is not allowed to enter.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;The most popular types of&amp;nbsp;&lt;a href=&quot;http://www.avidbiometrics.com/Biometric-Time-Clocks-c5/&quot; rel=&quot;nofollow&quot; style=&quot;color: #1900ff;&quot; target=&quot;_new&quot;&gt;biometric time clocks&lt;/a&gt;&amp;nbsp;use a fingerprint, palm, or retinal scan to verify the identity of the employee and ensure that they should be granted access to punch in/out, open a confidential computer file, or enter a secure area of the building.&amp;nbsp; As fingerprints are extremely difficult to forge, this makes the employer&#39;s job of managing HR costs and securing confidential information much easier.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Most biometric time clocks are easy to program and integrate with software to help minimize the learning curve in using the machine.&amp;nbsp; Simple biometric time &amp;amp; attendance systems can be purchased for around $200.&amp;nbsp; Depending on your HR needs, the price can quickly grow to several thousand dollars; however, these systems generally pay for themselves within several months of use.&lt;/div&gt;&lt;/div&gt;&lt;div id=&quot;article-resource&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;a 
href=&quot;http://www.avidbiometrics.com/&quot; style=&quot;color: #1900ff;&quot; target=&quot;_new&quot;&gt;AvidBiometrics.com&lt;/a&gt;&amp;nbsp;is a leading resource for buying biometric time clocks. For more information on biometric time &amp;amp; attendance systems visit AvidBiometrics.com&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Article Source:&amp;nbsp;&lt;a href=&quot;http://ezinearticles.com/?expert=John_Stetson&quot; style=&quot;color: #1900ff;&quot;&gt;http://EzineArticles.com/?expert=John_Stetson&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;&lt;object height=&quot;349&quot; width=&quot;560&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://www.youtube.com/v/j3C_t77cJ5g?version=3&amp;amp;hl=en_US&quot;&gt;&lt;/param&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot;&gt;&lt;/param&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;/param&gt;&lt;embed src=&quot;http://www.youtube.com/v/j3C_t77cJ5g?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot; width=&quot;560&quot; height=&quot;349&quot; allowscriptaccess=&quot;always&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2011/08/biometric-time-clocks-gaining.html</link><author>noreply@blogger.com (Firdaus)</author><thr:total>2</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-6926670515724252898</guid><pubDate>Sun, 14 Aug 2011 07:43:00 +0000</pubDate><atom:updated>2011-08-14T00:43:20.552-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Biometric News</category><category domain="http://www.blogger.com/atom/ns#">Biometric Security on Your Laptop</category><category domain="http://www.blogger.com/atom/ns#">fingerprint biometric</category><category domain="http://www.blogger.com/atom/ns#">laptop biometrics</category><title>Biometric Security on Your Laptop</title><description>&lt;span class=&quot;Apple-style-span&quot; style=&quot;color: #333333; font-family: Arial; font-size: 12px;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
&lt;div id=&quot;article-content&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;If you use a password to secure your laptop, it might be guessed or cracked by someone else. A biometric system, however, is far more difficult to defeat.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;This system is capable of recognizing fingerprints or retinal patterns, so that only the owner can open the lock. With a biometric system, other people will not be able to unlock your laptop without your fingerprint or retinal pattern.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;object height=&quot;349&quot; width=&quot;425&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://www.youtube.com/v/W8iCNu4Fy9U?version=3&amp;amp;hl=en_US&quot;&gt;&lt;/param&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot;&gt;&lt;/param&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;/param&gt;&lt;embed src=&quot;http://www.youtube.com/v/W8iCNu4Fy9U?version=3&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot; width=&quot;425&quot; height=&quot;349&quot; allowscriptaccess=&quot;always&quot; allowfullscreen=&quot;true&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/div&gt;That is why some computer manufacturers are now offering laptops that are built with biometric fingerprint identification systems. 
This system verifies the user&#39;s identity by matching a scanned fingerprint against the prints stored when the software is first run.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;However, biometric devices have several disadvantages. Firstly, these devices can be difficult to use. They do reduce security risks, but they can be frustrating in practice, for example when the login process fails repeatedly.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Secondly, if you want your laptop to be equipped with a biometric security device, you must purchase additional equipment and connect it via a PC card or USB port. It is generally inexpensive, but it must be compatible with the operating system you have. You should also make sure that it is easy to connect.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;A biometric device is considered more secure than a password alone for laptop security. Installing fingerprint sensors on laptops can be more beneficial than using only a password or the encryption of data. 
In addition, it is very useful for securing data when it is combined with encryption software.&lt;/div&gt;&lt;/div&gt;&lt;div id=&quot;article-resource&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;If you have decided to use fingerprints as your&amp;nbsp;&lt;a href=&quot;http://biometricsecuritysystem.org/&quot; style=&quot;color: #1900ff;&quot; target=&quot;_new&quot;&gt;biometric security system&lt;/a&gt;, you will need a&amp;nbsp;&lt;a href=&quot;http://biometricsecuritysystem.org/usb-fingerprint-reader/&quot; style=&quot;color: #1900ff;&quot; target=&quot;_new&quot;&gt;USB fingerprint reader&lt;/a&gt;. This is an advanced biometric device with fingerprint application software for personal data security. It provides fingerprint functions in Windows Login, Screen Saver Lock, Web Account, FileFolder Encryption and PC Lock.&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Article Source:&amp;nbsp;&lt;a href=&quot;http://ezinearticles.com/?expert=Zane_L_Marquez&quot; style=&quot;color: #1900ff;&quot;&gt;http://EzineArticles.com/?expert=Zane_L_Marquez&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2011/08/biometric-security-on-your-laptop.html</link><author>noreply@blogger.com (Firdaus)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-868616536241942124</guid><pubDate>Sat, 23 Jul 2011 17:17:00 +0000</pubDate><atom:updated>2011-07-23T10:17:29.953-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Biometric Software</category><category domain="http://www.blogger.com/atom/ns#">Gunvault</category><category domain="http://www.blogger.com/atom/ns#">Gunvault GVB1000 Mini Vault</category><category domain="http://www.blogger.com/atom/ns#">Gunvault GVB1000 Mini Vault Overview</category><title>Is the Gunvault GVB1000 Mini Vault a Good Gun Safe?</title><description>&lt;div id=&quot;article-content&quot; style=&quot;color: #333333; font-family: Arial; font-size: 12px;&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;So, you&#39;re looking for a small but secure biometric (fingerprint recognition) gun safe. 
&lt;b&gt;Gunvault&lt;/b&gt; is one of the premier manufacturers of biometric safes, and the &lt;b&gt;Gunvault GVB1000 Mini Vault&lt;/b&gt; is no exception.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;iframe align=&quot;left&quot; frameborder=&quot;0&quot; marginheight=&quot;0&quot; marginwidth=&quot;0&quot; scrolling=&quot;no&quot; src=&quot;http://rcm.amazon.com/e/cm?t=httpoverthesh-20&amp;amp;o=1&amp;amp;p=8&amp;amp;l=bpl&amp;amp;asins=B001ABLN4A&amp;amp;fc1=000000&amp;amp;IS2=1&amp;amp;lt1=_blank&amp;amp;m=amazon&amp;amp;lc1=0000FF&amp;amp;bc1=000000&amp;amp;bg1=FFFFFF&amp;amp;f=ifr&quot; style=&quot;align: left; height: 245px; padding-right: 10px; padding-top: 5px; width: 131px;&quot;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;strong&gt;Gunvault GVB1000 Mini Vault Overview&lt;/strong&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;/div&gt;&lt;ul style=&quot;margin-bottom: 1em; margin-left: 2em; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Dimensions: 8.1 x 4.9 x 12&quot;&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Weight: 8 lbs&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Lock: Biometric fingerprint recognition&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Body: 16-gauge steel&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Interior: Coated in soft foam&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Requires one 9-volt battery or the included AC adapter&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Interior lighting&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Stores up to 30 different fingerprint profiles&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Generally fits one handgun and a clip or two&lt;/li&gt;
&lt;/ul&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;The overall opinion of the&amp;nbsp;&lt;b&gt;&lt;em&gt;Gunvault GVB1000&lt;/em&gt;&amp;nbsp;&lt;/b&gt;across the internet and among enthusiasts is very positive. It works very well as a simple but effective personal handgun safe, especially for storing your firearm close to you at night or just for keeping your valuables safe. The safe is mountable to any flat surface, although we&#39;d obviously recommend installing it on something heavy or well built, as the safe itself is not very heavy.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;strong&gt;So, How Secure is the Gunvault GVB1000 Mini Vault?&lt;/strong&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Like the other gun safes by &lt;b&gt;Gunvault&lt;/b&gt;, this is a biometric gun safe. It uses high-tech fingerprint recognition technology for access, and is very accurate. It can store up to 30 different sets of fingerprints (although we&#39;re not sure why you&#39;d need to give that many people access) and it will not open for anyone whose fingerprints don&#39;t match. 
&lt;b&gt;Gunvault&#39;s biometric technology &lt;/b&gt;will continually update and refine the fingerprint profiles over time to make sure it is as accurate as possible.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;The great thing about biometric safes is that they&#39;re&amp;nbsp;&lt;strong&gt;easy to access in the dark.&lt;/strong&gt;&amp;nbsp;Since there is no fumbling around with physical keys or combinations, and all you have to do is press your finger to the pad for 3 seconds, you can have quick access in the case of, say, someone breaking into your house at night. This is one of the things we really like about the GVB1000. It also has a nice, low-intensity interior light, which is handy in a scenario like this, too. If you can convince the wife to let you drill into the night stand to mount this gun safe, you&#39;ll have a perfect place to store your hand gun at night.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Unlike many other gun safes, the &lt;b&gt;Gunvault GVB1000&lt;/b&gt; requires a 9-volt battery. This could be a bit of a hassle, as not many electronics use 9-volt batteries these days. 
It does come with a backup AC adapter, though, and a single 9-volt battery is rated to last about a year in the unit, so it&#39;s not a huge problem.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;strong&gt;What&#39;s the Final Say On The GunVault GVB1000 Mini Vault?&lt;/strong&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Like most of &lt;b&gt;Gunvault&#39;s &lt;/b&gt;other products, this is a quality safe if you&#39;re using it for its intended purpose. It&#39;s inexpensively priced for the technology and &lt;b&gt;Gunvault&lt;/b&gt; has great customer service if you have any issue with the product, but you shouldn&#39;t have any problems, anyway. In conclusion, we would definitely recommend this gun safe.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;strong&gt;What did we like about the Gunvault GVB1000 Mini Vault Gun Safe?&lt;/strong&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;/div&gt;&lt;ul style=&quot;margin-bottom: 1em; margin-left: 2em; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Biometric technology is 
convenient and secure&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Works well as a bedside gun safe if bolted down&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Great value for the money&lt;/li&gt;
&lt;/ul&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;strong&gt;What didn&#39;t we like as much?&lt;/strong&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;/div&gt;&lt;ul style=&quot;margin-bottom: 1em; margin-left: 2em; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Requires a 9-volt battery&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Might need to record your fingerprint a few times before it&#39;s 100% accurate&lt;/li&gt;
&lt;/ul&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;strong&gt;This gun safe is for:&lt;/strong&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;/div&gt;&lt;ul style=&quot;margin-bottom: 1em; margin-left: 2em; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;People looking for a bedside gun safe&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Those looking for something inexpensive but still secure&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;People with children who want to keep their gun out of their hands&lt;/li&gt;
&lt;/ul&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;All in all, this is a great buy for the money.&lt;/div&gt;&lt;/div&gt;&lt;div id=&quot;article-resource&quot; style=&quot;color: #333333; font-family: Arial; font-size: 12px;&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;If you&#39;re on the market for a gun safe, I would&amp;nbsp;&lt;strong&gt;strongly&lt;/strong&gt;&amp;nbsp;suggest reading my site on&amp;nbsp;&lt;a href=&quot;http://bestgunsafereviews.net/&quot; style=&quot;color: #1900ff;&quot; target=&quot;_new&quot;&gt;gun safe reviews&lt;/a&gt;&amp;nbsp;before deciding on a safe. There are many different models on the market, and not all are created equal. Get the unbiased scoop from me, a 20-year gun enthusiast. 
A&amp;nbsp;&lt;a href=&quot;http://bestgunsafereviews.net/biometric-gun-safe/&quot; style=&quot;color: #1900ff;&quot; target=&quot;_new&quot;&gt;biometric gun safe&lt;/a&gt;&amp;nbsp;is a great addition to your home if you make the right purchase.&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;color: #333333; font-family: Arial; font-size: 12px; line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Article Source:&amp;nbsp;&lt;a href=&quot;http://ezinearticles.com/?expert=Frank_Gusso&quot; style=&quot;color: #1900ff;&quot;&gt;http://EzineArticles.com/?expert=Frank_Gusso&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px; text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;color: #333333; font-family: Arial;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-size: 12px; line-height: 18px;&quot;&gt;&lt;b&gt;&amp;nbsp;Gunvault GVB1000 Mini Vault&lt;/b&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;iframe allowfullscreen=&#39;allowfullscreen&#39; webkitallowfullscreen=&#39;webkitallowfullscreen&#39; mozallowfullscreen=&#39;mozallowfullscreen&#39; width=&#39;320&#39; height=&#39;266&#39; src=&#39;https://www.youtube.com/embed/SNRVZC9fGwk?feature=player_embedded&#39; frameborder=&#39;0&#39;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;div style=&quot;color: #333333; font-family: Arial; font-size: 12px; line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2011/07/is-gunvault-gvb1000-mini-vault-good-gun.html</link><author>noreply@blogger.com (Firdaus)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-3430148482650446504</guid><pubDate>Sat, 23 Jul 2011 17:11:00 +0000</pubDate><atom:updated>2011-07-23T10:19:28.970-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Barska Biometric Safe</category><category domain="http://www.blogger.com/atom/ns#">Biometric Software</category><category domain="http://www.blogger.com/atom/ns#">Finger Biometric</category><title>Barska Biometric Safe - Fingerprint Access To Your Valuables</title><description>&lt;div id=&quot;article-content&quot; style=&quot;color: #333333; font-family: Arial; font-size: 12px;&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;If you own a handgun and there are children in your household, the &lt;b&gt;Barska Biometric Safe &lt;/b&gt;could be the ideal solution. This safe only allows registered people access to the contents. This is achieved by the use of the Biometric Pad, which can be programmed to recognise your fingerprints and will allow entry in about 3 seconds. The advantage of this system is that, because you do not require a key, there is no need to worry about finding one in an emergency or fiddling with the lock trying to fit the key. The Biometric Safe can even be opened in the dark. 
Imagine trying to remember a combination number if you are panicking - no need.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;iframe align=&quot;left&quot; frameborder=&quot;0&quot; marginheight=&quot;0&quot; marginwidth=&quot;0&quot; scrolling=&quot;no&quot; src=&quot;http://rcm.amazon.com/e/cm?t=httpoverthesh-20&amp;amp;o=1&amp;amp;p=8&amp;amp;l=bpl&amp;amp;asins=B002AQ0PFW&amp;amp;fc1=000000&amp;amp;IS2=1&amp;amp;lt1=_blank&amp;amp;m=amazon&amp;amp;lc1=0000FF&amp;amp;bc1=000000&amp;amp;bg1=FFFFFF&amp;amp;f=ifr&quot; style=&quot;align: left; height: 245px; padding-right: 10px; padding-top: 5px; width: 131px;&quot;&gt;&lt;/iframe&gt;&lt;strong&gt;Barska Biometric Safe&lt;/strong&gt;&amp;nbsp;&lt;/div&gt;&lt;ul style=&quot;margin-bottom: 1em; margin-left: 2em; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;It is a solidly built unit, weighing 31lbs.&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;The internal measurements are 16.25W x 7H x 14.25D&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Has mounting holes for floor or wall installation, complete with a mounting kit&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Requires 4 x AA batteries&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Stores up to 30 fingerprint readings&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Also includes 2 access keys in case the batteries go flat&lt;/li&gt;
&lt;li style=&quot;line-height: 1.5em; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Comes with a 1-year limited warranty&lt;/li&gt;
&lt;/ul&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Considering the size of the safe, it will accommodate a surprising number of firearms and valuables. Reviews from owners state that it will house two guns with ammo and several spare magazines along with other valuables and documents.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;One of the features is a beep when the safe is opened. This appears to be controversial with quite a few reviewers. Many said they would like to disable it, and some have gone to the trouble of disconnecting the buzzer.&amp;nbsp;&lt;em&gt;Just be aware this is in breach of the warranty terms.&lt;/em&gt;&amp;nbsp;The positive reason for having the buzzer is that it will sound with any attempt to open the safe, and if for any reason the safe door is left open for more than a minute, the alarm will sound continuously until it is closed.&lt;br /&gt;
&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;The safe stores up to 30 fingerprint readings, and because only 2 or 3 people usually require access to the safe, it is a good idea for these people to take multiple readings of their fingers and thumbs on both hands at several angles to minimise reading errors.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Another great feature of the &lt;b&gt;Barska Biometric Safe&lt;/b&gt; is that if the 4 x AA batteries go flat, the fingerprint reading data is retained in memory that is unaffected by the loss of power. Just replace the 4 x AA&#39;s and carry on using the safe without the need to reprogram.&lt;/div&gt;&lt;/div&gt;&lt;div id=&quot;article-resource&quot; style=&quot;color: #333333; font-family: Arial; font-size: 12px;&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;b&gt;Buying the Safe&lt;/b&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;If you require more info on the&amp;nbsp;&lt;a href=&quot;http://barska-biometricsafe.blogspot.com/2011/03/barska-biometric-safe.html&quot; style=&quot;color: #1900ff;&quot; target=&quot;_new&quot;&gt;Barska Biometric Safe&lt;/a&gt;&amp;nbsp;then take a look at my Blog at&amp;nbsp;&lt;a href=&quot;http://barska-biometricsafe.blogspot.com/2011/03/barska-biometric-safe.html&quot; 
style=&quot;color: #1900ff;&quot; target=&quot;_new&quot;&gt;www.barska-biometricsafe.blogspot.com&lt;/a&gt;&amp;nbsp;which will also show you where you can get a fantastic price discount.&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;color: #333333; font-family: Arial; font-size: 12px; line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Article Source:&amp;nbsp;&lt;a href=&quot;http://ezinearticles.com/?expert=Brian_Wilde&quot; style=&quot;color: #1900ff;&quot;&gt;http://EzineArticles.com/?expert=Brian_Wilde&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;color: #333333; font-family: Arial; font-size: 12px; line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;color: black; font-family: arial, sans-serif; line-height: normal;&quot;&gt;&lt;/span&gt;&lt;/div&gt;&lt;h1 id=&quot;watch-headline-title&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: initial; background-origin: initial; border-bottom-width: 0px; border-color: initial; border-left-width: 0px; border-right-width: 0px; border-style: initial; border-top-width: 0px; color: #333333; font-size: 1.8333em; height: 1.1363em; line-height: 1.1363em; margin-bottom: 5px; margin-left: 0px; margin-right: 0px; margin-top: 0px; max-height: 1.1363em; overflow-x: hidden; overflow-y: hidden; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;span class=&quot;long-title&quot; dir=&quot;ltr&quot; id=&quot;eow-title&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: initial; background-origin: initial; border-bottom-width: 0px; border-color: 
initial; border-left-width: 0px; border-right-width: 0px; border-style: initial; border-top-width: 0px; font-size: 0.9166em; letter-spacing: -0.5px; margin-bottom: 0px; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot; title=&quot;Tacticalgearhead.com review of Barska Biometric Safe&quot;&gt;Tacticalgearhead.com review of Barska Biometric Safe&lt;/span&gt;&lt;/h1&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;&lt;iframe allowfullscreen=&#39;allowfullscreen&#39; webkitallowfullscreen=&#39;webkitallowfullscreen&#39; mozallowfullscreen=&#39;mozallowfullscreen&#39; width=&#39;320&#39; height=&#39;266&#39; src=&#39;https://www.youtube.com/embed/LXvEVSYdH-s?feature=player_embedded&#39; frameborder=&#39;0&#39;&gt;&lt;/iframe&gt;&lt;/div&gt;&lt;div style=&quot;color: #333333; font-family: Arial; font-size: 12px; line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2011/07/barska-biometric-safe-fingerprint.html</link><author>noreply@blogger.com (Firdaus)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-213262115253624440</guid><pubDate>Sat, 23 Jul 2011 16:51:00 +0000</pubDate><atom:updated>2011-07-23T10:20:31.606-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">biometric identification</category><category domain="http://www.blogger.com/atom/ns#">biometric school lunch program</category><category domain="http://www.blogger.com/atom/ns#">Biometric Software</category><category domain="http://www.blogger.com/atom/ns#">Food Service Solutions</category><category domain="http://www.blogger.com/atom/ns#">school lunch biometric fingerprint solutions</category><category domain="http://www.blogger.com/atom/ns#">school lunch biometric systems</category><title>The Honest Truth on Biometrics in Schools</title><description>&lt;div id=&quot;article-content&quot; style=&quot;color: #333333; font-family: Arial; font-size: 12px;&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;By now many school principals, superintendents and administrators have probably heard of school lunch &lt;b&gt;biometrics&lt;/b&gt;, or the use of devices such as fingerprint readers to recognize students and allow for the automated payment and accounting of school lunch purchases. Some may be wondering how to sort the promise from the hype, the information from the misinformation.&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEJCY0LfnDnImBONNwY78lcxhKwDPrwCvY6MWTOuOdfdz5Cuz1HQGhRPBTgcSYSNk7xdynJSHARUI7QIfYXaVgjwPbWyQgmhViOf3Z8-GfODd-XRNBYMTCXGsPuJAeLKT6noY5dDLr_WFW/s1600/children_img.jpg&quot; imageanchor=&quot;1&quot; style=&quot;clear: left; float: left; margin-bottom: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEJCY0LfnDnImBONNwY78lcxhKwDPrwCvY6MWTOuOdfdz5Cuz1HQGhRPBTgcSYSNk7xdynJSHARUI7QIfYXaVgjwPbWyQgmhViOf3Z8-GfODd-XRNBYMTCXGsPuJAeLKT6noY5dDLr_WFW/s1600/children_img.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;While &lt;b&gt;school lunch&lt;/b&gt; &lt;b&gt;biometrics&lt;/b&gt; can legitimately address a host of problems from slow lunch lines, lost lunch money, cumbersome payment, lunch fraud and bullying, to falling National School Lunch Program (NSLP) participation, the devil is in the details. Of course, it all comes down to the bottom line: labor, cost efficiency, and return on investment (ROI). 
Here I&#39;ll honestly discuss the pluses and minuses of school lunch &lt;b&gt;biometrics&lt;/b&gt; versus more traditional technologies so administrators can decide if it makes sense for their schools.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;How do school lunch &lt;b&gt;biometric systems &lt;/b&gt;work and do they protect privacy?&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;In most &lt;b&gt;school lunch biometric systems&lt;/b&gt;, students place a forefinger on a small fingerprint reader by the register. In seconds, the system translates the electronic print into a mathematical pattern, discards the fingerprint image, and matches the pattern to the student’s meal account information. Food Service Solutions (FSS) &lt;b&gt;biometric software&lt;/b&gt;, for example, plots 27 points on a grid that correspond with the fingerprint&#39;s ridges to achieve positive identification, but saves no actual fingerprint image.&lt;br /&gt;
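The numerical-pattern idea described above can be sketched in a few lines of Python. This is a toy illustration only: the vendor's actual 27-point algorithm is proprietary, so the grid coordinates, tolerance, and scoring rule here are invented for demonstration. The key property it shows is that only point coordinates are kept, never a fingerprint image.

```python
import math

def make_template(minutiae):
    """Keep only (x, y) grid coordinates of ridge points -- no fingerprint image is stored."""
    return sorted(minutiae)

def match_score(template_a, template_b, tolerance=2.0):
    """Fraction of points in A that have a neighbour in B within `tolerance` grid units."""
    matched = sum(
        1
        for (xa, ya) in template_a
        if any(math.hypot(xa - xb, ya - yb) <= tolerance for (xb, yb) in template_b)
    )
    return matched / max(len(template_a), 1)

# Enrollment reading and a slightly shifted lunch-line reading of the same finger
enrolled = make_template([(3, 4), (10, 2), (7, 9), (5, 5)])
live = make_template([(3, 5), (10, 2), (8, 9), (5, 4)])
print(match_score(enrolled, live))  # 1.0 -> positive identification
```

A real deployment would use many more points and a rotation-tolerant matcher, but the privacy argument is the same: the stored data is a small set of numbers from which no image can be reconstructed.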
&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;When &lt;b&gt;school lunch biometric systems&lt;/b&gt; like FSS&#39;s are numerically-based and discard the actual fingerprint image, they cannot be used for any purpose other than recognizing a student within a registered group of students. Since there&#39;s no stored fingerprint image, the data is useless to law enforcement, which requires actual fingerprint images. As there&#39;s no way for any fingerprint or computer expert to extract a record and reconstruct a person&#39;s fingerprint image from purely numerical data, privacy is protected.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Do &lt;b&gt;biometrics&lt;/b&gt; speed school lunch lines?&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Though some providers claim that &lt;b&gt;biometrics&lt;/b&gt; speed up every school lunch line, this isn&#39;t always the case. &lt;b&gt;Biometric systems&lt;/b&gt; will speed lunch lines where cash is primarily used because students, especially younger ones, are prone to losing or misplacing cash and extra time is taken to make correct change. They will speed lines over Personal Identification Number (PIN)-based systems, which take time to enter and students tend to forget. 
They&#39;ll also speed lines over magnetic card-based systems, which take time to fish out of pockets and swipe.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Because &lt;b&gt;biometric systems&lt;/b&gt; typically take a few seconds to recognize a student and access his or her account information, they&#39;re not necessarily faster than well-organized roster-based systems, where a name is checked off a list, or ticket-based systems where color-coded tickets are simply collected.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;A good &lt;b&gt;biometric system&lt;/b&gt;, however, will save a significant amount of administrative labor and cost. Because accounts are prepaid and students can never lose the finger used for identification, it eliminates a number of time-consuming administrative problems such as lost lunch money, lunch money bullying, card replacement, or account fraud caused by stolen cards, overheard PINs, or other cases of identity theft.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Moreover, because&lt;b&gt; biometric systems&lt;/b&gt; automate the payment and accounting of school lunches, they eliminate tedious backend administrative chores such as cash, ticket, or paper-based handling, accounting, reconciling, and oversight.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Do &lt;b&gt;biometric 
systems&lt;/b&gt; work with younger children?&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Administrators may have heard that &lt;b&gt;biometric systems&lt;/b&gt; either work with all younger children or none at all. Neither is true. The fact is that &lt;b&gt;biometric systems&lt;/b&gt; tend to have a higher misread rate on young children of about age four or five, who are typically in preschool or kindergarten, because their fingerprints haven&#39;t sufficiently developed. On these younger children, a good &lt;b&gt;biometric system&lt;/b&gt; should have a successful identification rate of about 80 to 85 percent. On children and adults from about age six onward, a good &lt;b&gt;biometric system &lt;/b&gt;should successfully identify and debit about 96 to 97 percent, a figure substantially higher than most swipe cards or card readers. For the small number of students unsuccessfully identified by a &lt;b&gt;biometric system,&lt;/b&gt; administrators may want to have a backup system in place, such as a last-name lookup.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;&lt;b&gt;Biometric systems&lt;/b&gt; may also have difficulty recognizing a student undergoing a growth spurt, as their fingerprint pattern may change as their body grows. When this occurs, typically around grades five and nine, having a &lt;b&gt;biometric system &lt;/b&gt;that allows quick re-registration can be important. 
Because some systems enable re-registration in about a minute, this can occur right in the lunch line or towards the end of lunch.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Why is the identification success rate so important?&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Because a biometric system&#39;s student identification success rate can determine its success or failure in a school lunch program, administrators should consider how reliable and easy to maintain a system is before purchase. For better reliability and minimal maintenance, administrators should opt for optical biometric sensors, which function using light. These typically feature a special scratchproof glass made of a material as hard as quartz that requires no treatment or maintenance. They&#39;re also resistant to shock, corrosion, electrostatic discharge and extreme weather, while offering a larger imaging area that makes finger placement easier for more forgiving readings.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;On the other hand, capacitive sensors, which function using a computer chip or semiconductor, usually require surface treatments and protective coatings to protect from shock, electrostatic discharge, and other dangers. As the coatings wear, performance tends to degrade. Since the silicon chips are inherently fragile, they&#39;re also more susceptible to damage by scratches and rough handling. 
A typically smaller imaging area also requires stricter, more consistent finger placement for satisfactory student identification.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Why working with an experienced biometric provider is critical&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;For the same reason administrators wouldn&#39;t want a surgeon straight out of medical school operating on them, they may want to take a pass on inexperienced biometric system providers. New entrants to the school lunch biometric market, in fact, have been working in the field for as little as 18 months, which gives little time to work out the subtleties of successful installation. In contrast, some veteran biometric system providers have almost a decade of experience in implementing such systems in real-life school settings.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;In order to provide a maximum student identification success rate, the most experienced biometric system providers will consider subtleties such as fingerprint scanner placement, average student height and handedness. Administrators may also want to choose a system provider that allows students to use any point-of-sale register, even at other schools within the district, with a one-time registration. In contrast, some biometric providers require students to register at every register they intend to use. 
Getting such details right not only improves the system&#39;s student identification success rate, but also speeds recognition so lines move faster.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Why considering biometric system expandability is a must&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Besides student recognition, account debiting, and pre-payment, the most flexible school lunch biometric systems today offer administrators and parents some valuable extras.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;For example, one&lt;b&gt; biometric school lunch program&lt;/b&gt;, (www.myschoolaccount.com), has an online component that allows parents to pre-pay for school lunches as well as monitor their children&#39;s food choices. 
The technology even enables parents to restrict their children&#39;s choices to avoid &#39;special diet&#39; conflicts or to prevent children from purchasing high fat, high sugar a la carte items.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Once administrators get a &lt;b&gt;biometric school lunch program&lt;/b&gt; successfully up and running, some find that the system naturally extends to other school services such as attendance, tracking and boosting National School Lunch Program (NSLP) participation, or checking out textbooks and school library materials.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;For more information about &lt;b&gt;school lunch biometric identification systems&lt;/b&gt;, call (800) 425-1425; Fax (814) 941-7572; visit the website [http://www.foodserve.com;] or write to Food Services Solutions Inc. 
at 3101 Pleasant Valley Boulevard, Altoona, PA 16602.&lt;/div&gt;&lt;/div&gt;&lt;div id=&quot;article-resource&quot; style=&quot;color: #333333; font-family: Arial; font-size: 12px;&quot;&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;David Pisanick, of Food Service Solutions (www.foodserve.com), has over 20 years experience in food service technology and helped invent and pioneer the use of&lt;b&gt; biometric identification&lt;/b&gt; in school food service almost a decade ago.&lt;/div&gt;&lt;div style=&quot;line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Food Service Solutions (FSS) has implemented &lt;b&gt;school lunch biometric fingerprint solutions&lt;/b&gt; at over 1000 K-12 schools throughout the United States. With over 85 years of combined experience in institutional food service, FSS staff are dedicated to providing school personnel with fully integrated hardware/software systems that simplify food service and administration.&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;color: #333333; font-family: Arial; font-size: 12px; line-height: 1.5em; margin-bottom: 1em; margin-left: 0px; margin-right: 0px; margin-top: 0px; padding-bottom: 0px; padding-left: 0px; padding-right: 0px; padding-top: 0px;&quot;&gt;Article Source:&amp;nbsp;&lt;a href=&quot;http://ezinearticles.com/?expert=David_Pisanick&quot; style=&quot;color: #1900ff;&quot;&gt;http://EzineArticles.com/?expert=David_Pisanick&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;b&gt;image credit to&amp;nbsp;foodserve.com&lt;/b&gt;&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2011/07/honest-truth-on-biometrics-in-schools.html</link><author>noreply@blogger.com (Firdaus)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEJCY0LfnDnImBONNwY78lcxhKwDPrwCvY6MWTOuOdfdz5Cuz1HQGhRPBTgcSYSNk7xdynJSHARUI7QIfYXaVgjwPbWyQgmhViOf3Z8-GfODd-XRNBYMTCXGsPuJAeLKT6noY5dDLr_WFW/s72-c/children_img.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-5915156775688055608</guid><pubDate>Tue, 19 Jul 2011 14:03:00 +0000</pubDate><atom:updated>2011-07-19T07:03:56.929-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Biometric News</category><category domain="http://www.blogger.com/atom/ns#">Facial Recognition Gone Wrong</category><title>Facial Recognition Gone Wrong</title><description>&lt;blockquote&gt;&quot;John H. Gass hadn&#39;t had a traffic ticket in years, so the Natick resident was surprised this spring when he received a letter from the Massachusetts Registry of Motor Vehicles informing him to cease driving because his license had been revoked. It turned out Gass was &lt;a href=&quot;http://articles.boston.com/2011-07-17/news/29784761_1_fight-identity-fraud-facial-recognition-system-license&quot;&gt;flagged because he looks like another driver&lt;/a&gt;, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is. And apparently, he has company. Last year, the facial recognition system picked out more than 1,000 cases that resulted in State Police investigations, officials say. And some of those people are guilty of nothing more than looking like someone else. 
Not all go through the long process that Gass says he endured, but each must visit the Registry with proof of their identity. Massachusetts began using the software after receiving a $1.5 million grant from the US Department of Homeland Security as part of an effort to prevent terrorism, reduce fraud, and improve the reliability and accuracy of personal identification documents that states issue.&quot;&lt;/blockquote&gt;</description><link>http://biometric-recognition.blogspot.com/2011/07/facial-recognition-gone-wrong.html</link><author>noreply@blogger.com (Firdaus)</author><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-1265714975250816812</guid><pubDate>Mon, 28 Mar 2011 14:56:00 +0000</pubDate><atom:updated>2011-03-28T08:42:41.119-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Face Recognition Result and Discussion</category><title>Face Recognition Result and Discussion Part 1/4</title><description>&lt;div style=&quot;text-align: center;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;CHAPTER 4&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;RESULT AND DISCUSSION&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;4.1 Introduction&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt; &lt;/span&gt;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;This chapter describes the results produced using the methodology explained in Chapter 3. The results are shown and discussed for each experiment. The experiments are divided into three (3) main parts: Principal Component Analysis, training and recognition results, and experimental results. The prototype model designed for this research is also demonstrated in this chapter.&amp;nbsp;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;b&gt;4.2 Principal Component Analysis&lt;/b&gt;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Sample face images from the ORL face dataset are shown in Figure 4.1. The sample shows seven different persons under different conditions. For ease of explanation, only three face images from each class (person) are taken as the training set. Thus, 21 face images are used as the training set and 49 face images as the testing set. The training set is then converted into a large matrix of size m × P, where m is the number of training images and P is the number of pixels in each face image.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiFtEPiP8lNrE3mdZA-69vh-2xN-vXq4ojoVtrQD6K7KffaS-UYcgpOg03JR0daE60ewl1FArDVmYe9YbniJmqrqXBkuVDHlG89UZVnFCyaQO-amI6VYOJaLcGI1Mt-YfEawnForCLQflo/s1600/Figure+4_1.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiFtEPiP8lNrE3mdZA-69vh-2xN-vXq4ojoVtrQD6K7KffaS-UYcgpOg03JR0daE60ewl1FArDVmYe9YbniJmqrqXBkuVDHlG89UZVnFCyaQO-amI6VYOJaLcGI1Mt-YfEawnForCLQflo/s1600/Figure+4_1.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 4.1: Example ORL dataset&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;&lt;/span&gt;&lt;br /&gt;
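As a rough sketch of this step (the thesis shows no code; the 112×92 ORL image resolution is an assumption, and random arrays stand in for the real images), the training matrix can be built like this:

```python
import numpy as np

# Stand-in for the 21 training images; real code would load the ORL files.
rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, size=(112, 92)) for _ in range(21)]

# Flatten every image into one row; T has size m x P.
T = np.stack([face.ravel() for face in faces]).astype(float)
m, P = T.shape  # m = 21 training images, P = 10304 pixels per image
```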
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;The first step combines those 21 face images to produce a mean-face image (Figure 4.2). This mean-face image carries no information about the training set beyond a sort of middle point: it only gives the set of middle face pixels among the training images.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGyFb0rlfr2FLwR5ay67zZiS5L3QbngQ6_2zOjgyjNYMtiGVj0m63lv5ptRnbo0ivR3J0Y8F_hsTzXRUEtCNlB15lptGexHyz5AGjJZkXvyCnC2dudQqkeaaMd8uCtQmpOAxiIR9f9pTno/s1600/Figure+4_2.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgGyFb0rlfr2FLwR5ay67zZiS5L3QbngQ6_2zOjgyjNYMtiGVj0m63lv5ptRnbo0ivR3J0Y8F_hsTzXRUEtCNlB15lptGexHyz5AGjJZkXvyCnC2dudQqkeaaMd8uCtQmpOAxiIR9f9pTno/s1600/Figure+4_2.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 4.2: Mean Face&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
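A minimal sketch of the mean-face step (a random matrix stands in for the real m × P training matrix):

```python
import numpy as np

# Stand-in for the real 21 x 10304 training matrix T.
rng = np.random.default_rng(0)
T = rng.random((21, 10304))

psi = T.mean(axis=0)   # the mean face: the "middle point" of the training set
Phi = T - psi          # mean-adjusted faces used by the later PCA steps
```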
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;To extract more information from those training face images, the covariance matrix is computed to determine how each dimension varies from the mean with respect to every other dimension. Figure 4.3 shows the covariance matrix surface map, which clearly shows that the matrix is square and symmetric about the main diagonal. The sign of each value matters more than its exact magnitude: a positive covariance indicates that both dimensions increase together, a negative covariance indicates that as one dimension increases the other decreases, and a zero covariance indicates that the two dimensions are independent of each other. This covariance matrix is then used to calculate the eigenvalues and eigenvectors using the numerical Jacobi method. Refer to Appendix A for the exact values of the covariance matrix.&lt;/span&gt;&lt;br /&gt;
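A sketch of this step, under assumptions: random data stands in for the mean-adjusted faces, and the reduced m × m matrix is used here (the standard eigenfaces shortcut, which keeps the problem small enough for the Jacobi method; the thesis does not state which form it diagonalizes):

```python
import numpy as np

# Stand-in for the mean-adjusted faces Phi (21 x 10304).
rng = np.random.default_rng(1)
Phi = rng.random((21, 10304)) - 0.5

# Reduced m x m covariance-style matrix; it shares the nonzero
# eigenvalues of the full P x P covariance matrix (1/m) Phi^T Phi.
C = Phi @ Phi.T
```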
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFYaA_SwA-xlVW7b9MxUNOI51g8OxxXJWajapwPrRXlNCZh39W9AbaS_JHl_WtYTgKBtbcmUtDAHNHtccUM3Cx_23nVnb3exTvpJ-1LhIgBbVw_w-aRKu8W2W14RqtF9S0Dxh_PjWb95ta/s1600/Figure+4.3.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiFYaA_SwA-xlVW7b9MxUNOI51g8OxxXJWajapwPrRXlNCZh39W9AbaS_JHl_WtYTgKBtbcmUtDAHNHtccUM3Cx_23nVnb3exTvpJ-1LhIgBbVw_w-aRKu8W2W14RqtF9S0Dxh_PjWb95ta/s1600/Figure+4.3.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 4.3: Covariance matrix from &amp;nbsp;surface map&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;From Figure 4.4, the eigenvalues decrease quickly as their index increases; the eigenvectors with higher eigenvalues carry more information. All off-diagonal entries are zero, and the eigenvalues are stored on the main diagonal. The eigenvectors (Figure 4.5) are shown with their specific eigenvalues, where the lower eigenvalues yield similar values across a set of eigenvectors. The eigenvectors of a matrix determine directions in which the effect of the matrix is particularly simple: the matrix expands or shrinks any vector lying in such a direction by a scalar multiple, and the expansion or contraction factor is given by the corresponding eigenvalue (M.T. Heath, 2002). Refer to Appendix B for the eigenvalues and Appendix C for the eigenvectors.&lt;/span&gt;&lt;br /&gt;
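The Jacobi method mentioned above diagonalizes a symmetric matrix by repeatedly rotating away one off-diagonal pair at a time; the accumulated rotations become the eigenvectors. A compact sketch (a textbook-style implementation, not the thesis code):

```python
import numpy as np

def jacobi_eig(A, sweeps=50):
    """Eigenvalues/eigenvectors of a symmetric matrix via Jacobi rotations.

    Each rotation zeroes one off-diagonal pair A[p, q]; after enough
    sweeps A becomes diagonal (the eigenvalues) and the accumulated
    rotations V hold the eigenvectors as columns.
    """
    A = A.astype(float).copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-12:
                    continue
                # rotation angle chosen so that the new A[p, q] is zero
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q], J[q, p] = s, -s
                A = J.T @ A @ J
                V = V @ J
    return np.diag(A), V
```

For a 2×2 symmetric matrix a single rotation diagonalizes it exactly; larger matrices converge over a few sweeps.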
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtnv1KeAhCXEpHoPNdgZC-RWGA2gal2a0I9FTPSf780s78IRYRPTAUDj38PcBMVjeiUZqNDZ7RxwQJIwhY6dhjmfeKyZ993KrsLHs_PV7EbX-DY5tAjZ8BhDxSH1aJEXfmHPkQczgNcUX9/s1600/Figure+4.4.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjtnv1KeAhCXEpHoPNdgZC-RWGA2gal2a0I9FTPSf780s78IRYRPTAUDj38PcBMVjeiUZqNDZ7RxwQJIwhY6dhjmfeKyZ993KrsLHs_PV7EbX-DY5tAjZ8BhDxSH1aJEXfmHPkQczgNcUX9/s1600/Figure+4.4.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 4.4: Eigenvalues matrix surface map&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt; &lt;/span&gt;&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLDh4pMmC3Z4trnYj7oEO3OCvBwiJGRDwXQXtgAl6eM1M7SEUVEms5HTkHYpKY4oGTdc2fP9DHaWqXtaCt5LlC0cKoq970iqe6dYVBNuNe2dpYjgqxzMhyiDVZJk83MnIOxPJlmN3vtwLe/s1600/Figure+4.5.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;266&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiLDh4pMmC3Z4trnYj7oEO3OCvBwiJGRDwXQXtgAl6eM1M7SEUVEms5HTkHYpKY4oGTdc2fP9DHaWqXtaCt5LlC0cKoq970iqe6dYVBNuNe2dpYjgqxzMhyiDVZJk83MnIOxPJlmN3vtwLe/s320/Figure+4.5.jpg&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 4.5: Eigenvector matrix surface map&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;b&gt;4.2.1 Eigenfaces&lt;/b&gt;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;This experiment mainly uses PCA to extract the important features within the trained face images. Applying the eigenvectors to the face images produces a set of ghostly-looking face images known as eigenfaces. Figure 4.6 shows the set of eigenfaces corresponding to the trained face images. Each eigenface deviates from uniform grey where some facial feature differs among the set of training faces; eigenfaces can therefore be viewed as a sort of map of the variations between faces.&lt;/span&gt;&lt;br /&gt;
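A sketch of how the eigenfaces in Figure 4.6 are formed (assumptions: random data stands in for the mean-adjusted faces, `numpy.linalg.eigh` replaces the Jacobi routine, and the small-matrix eigenvectors are lifted back to image space as in the standard eigenfaces procedure):

```python
import numpy as np

# Stand-in for the mean-adjusted faces Phi (21 x 10304).
rng = np.random.default_rng(2)
Phi = rng.random((21, 10304)) - 0.5

w, V = np.linalg.eigh(Phi @ Phi.T)     # small m x m symmetric eigenproblem
order = np.argsort(w)[::-1]            # strongest variation first
U = Phi.T @ V[:, order]                # lift each eigenvector to image space
U /= np.linalg.norm(U, axis=0)         # unit-length eigenface columns
eigenfaces = U.T.reshape(-1, 112, 92)  # one "ghostly" image per component
```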
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUPb1ayuBWI6lFNptHj_AAKxrQijpenKMvTfj6JdqiYUE6A_QKzZofcGH3C7tYJBPbXw_CONrpNCwaNxCgvTbtKXN6mq-s6sseTMJo9yEVgeIM1zjWrC6cHjK7q0OtMF6iHEWY18Bkm-O8/s1600/Figure+4.6.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjUPb1ayuBWI6lFNptHj_AAKxrQijpenKMvTfj6JdqiYUE6A_QKzZofcGH3C7tYJBPbXw_CONrpNCwaNxCgvTbtKXN6mq-s6sseTMJo9yEVgeIM1zjWrC6cHjK7q0OtMF6iHEWY18Bkm-O8/s1600/Figure+4.6.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 4.6: Set of eigenfaces obtained through PCA&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;b&gt;4.2.2 Feature Extraction&lt;/b&gt;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;This section describes the weight features produced by the previous set of eigenfaces (Figure 4.6) together with the mean-adjusted face images. The examples in this section are given for classes 1 and 2 as trained classes and class 8 as an untrained class. Refer to Appendix D for the full results.&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.1 illustrates the original weight features, where only M = 7 eigenfaces were used to form the weight vectors; M equals the number of classes (individuals) used in the training phase (M. Turk, 1990).&amp;nbsp;&lt;/span&gt;&lt;br /&gt;
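A minimal sketch of how such a weight vector is obtained (the eigenfaces, mean face and probe image here are all random stand-ins; only the projection step reflects the text):

```python
import numpy as np

rng = np.random.default_rng(3)
U = np.linalg.qr(rng.random((10304, 7)))[0]  # M = 7 orthonormal eigenfaces
psi = rng.random(10304)                      # mean face
gamma = rng.random(10304)                    # a face image as a vector

# Project the mean-adjusted face onto the eigenfaces:
# one weight per eigenface, giving the M-dimensional feature vector.
omega = U.T @ (gamma - psi)
```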
&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.1a: Original weight features for Trained Face&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKx3ocWwduWI4MgYucFhI7aBehX5UGMBK42CCzJnY-405R_xlvQjcOirJwG6yAnzC9YSNAWv5Ue_uKNkel-c2hyVxzTgZXV54bc5JXsMmYaPMXfMSFPNGVryH8xgcFAfOjK8Um5x8FvJmf/s1600/Table+4.1a.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjKx3ocWwduWI4MgYucFhI7aBehX5UGMBK42CCzJnY-405R_xlvQjcOirJwG6yAnzC9YSNAWv5Ue_uKNkel-c2hyVxzTgZXV54bc5JXsMmYaPMXfMSFPNGVryH8xgcFAfOjK8Um5x8FvJmf/s1600/Table+4.1a.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.1b: Original weight features for untrained face belong to the same class&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrKG0kguo2qe9VoFrmmD0vjdexI2hxXwdI587-35U0AfWOKzv4_uN_QQ-NYNebgfOL4cDSCm0WdjtPemB7SIBvpZih4rgdMAPlvG4N4ewOzYCR7ELf2MQj0wd7CF9mxIBTI70_hyiEauyd/s1600/Table+4.1b.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrKG0kguo2qe9VoFrmmD0vjdexI2hxXwdI587-35U0AfWOKzv4_uN_QQ-NYNebgfOL4cDSCm0WdjtPemB7SIBvpZih4rgdMAPlvG4N4ewOzYCR7ELf2MQj0wd7CF9mxIBTI70_hyiEauyd/s1600/Table+4.1b.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;div&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;4.2.3 Normalization&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Based on the equations in the previous chapter, the original weight vectors are normalized so that the values meet the input requirements of the neural network algorithm. Table 4.2 shows the simple normalization, which clearly keeps the values within 0 and 1.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.2a: Simple Normalization for Trained Face&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjNHlkLC5ZVi_Fc0eIalP-MWCGG51LvkzKci3CeNMyAUSSeN-5-dW-oFMBrbxZrn_0nc-cGtO1Sbj-mzmrJqQS0cMBI6njiMfv6r1G3VQTA_4TV4wr93OyJhP33wao-yvWww0eXm4xb0N4/s1600/Table+4.2a.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjjNHlkLC5ZVi_Fc0eIalP-MWCGG51LvkzKci3CeNMyAUSSeN-5-dW-oFMBrbxZrn_0nc-cGtO1Sbj-mzmrJqQS0cMBI6njiMfv6r1G3VQTA_4TV4wr93OyJhP33wao-yvWww0eXm4xb0N4/s1600/Table+4.2a.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;line-height: 150%; text-align: center;&quot;&gt;&lt;span lang=&quot;EN-US&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.2b: Simple Normalization for Untrained face belong to the same class&amp;nbsp;&lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2oaWTmS-kA4usvcYUH6hwMz6S5q6viOBi9wX4SiyyQL1m3EL1E7QPNTPzVK7QJfSV4EgaTUnJhKqvAKeoP_bY00gjyEo-q8QDQIF_cE8IaRz0L-nEr9_l36zZjc6L6tK3B5U37xSEcXkJ/s1600/Table+4.2b.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg2oaWTmS-kA4usvcYUH6hwMz6S5q6viOBi9wX4SiyyQL1m3EL1E7QPNTPzVK7QJfSV4EgaTUnJhKqvAKeoP_bY00gjyEo-q8QDQIF_cE8IaRz0L-nEr9_l36zZjc6L6tK3B5U37xSEcXkJ/s1600/Table+4.2b.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;line-height: 150%; text-align: left;&quot;&gt;&lt;span lang=&quot;EN-US&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span lang=&quot;EN-US&quot;&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 22px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.3, in turn, demonstrates the Improved Unit Range (IUR) normalization, whose values fall within 0.1 to 0.9. This range can easily be adjusted through the formula in Chapter 3.&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 22px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 22px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.3a: IUR normalization for Trained Face&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwpiHroJGKFnvl5Lt6GYgGr3o035Dq5iHk2Ydld6jOWZqhW05MWUQjK_hf_hXZSy0sw7yOzMi-fHiIJSOHw4C3jVQNXCYAtoDPT3IGTADO-jMqxriYtfBP1ZUOl9udIepxJx-DaSGR6WC_/s1600/Table+4.3a.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhwpiHroJGKFnvl5Lt6GYgGr3o035Dq5iHk2Ydld6jOWZqhW05MWUQjK_hf_hXZSy0sw7yOzMi-fHiIJSOHw4C3jVQNXCYAtoDPT3IGTADO-jMqxriYtfBP1ZUOl9udIepxJx-DaSGR6WC_/s1600/Table+4.3a.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 22px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 21px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.3b: IUR Normalization for Untrained face belong to the same class&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj44YU2A5fdg29t9R2W1JzTrsmhtvBQ-5TqvyAufQ6A3fQHJ2sIZvzTyqL8QsIk0k9Kk9yBYjv5E0U3oAeNHso-gYTFs5NXLK4IBsjPEgeMLCBAQs4q-Sde_kXfio_3WJ8UJ9D4QeK9NG4n/s1600/Table+4.3b.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj44YU2A5fdg29t9R2W1JzTrsmhtvBQ-5TqvyAufQ6A3fQHJ2sIZvzTyqL8QsIk0k9Kk9yBYjv5E0U3oAeNHso-gYTFs5NXLK4IBsjPEgeMLCBAQs4q-Sde_kXfio_3WJ8UJ9D4QeK9NG4n/s1600/Table+4.3b.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 21px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: left;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 20px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The last normalization technique used in the experiments, known as Improved Linear Scaling (ILS), is shown in Table 4.4. The range is still 0 to 1, but the minimum and maximum are not fixed in advance because the technique scales by the variance of the original data. Refer to Appendices E to G for detailed results of all the normalization techniques.&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 20px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 20px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.4a: ILS normalization for Trained Face&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif; line-height: 20px;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgA4UTMsSW2kt_PSuDRlyASA5_CYJHYuiZpNL_6Vt2zvAcQvggeOHTu239NiPXEJyuNhIqRCg-MMZoSYuq9pxO2ZNVDaxw6ZS1YtylIlvJZS4E5CJhBdxZQ9gyRAzD9X5bxUuvVL0rSDbaO/s1600/Table+4.4a.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgA4UTMsSW2kt_PSuDRlyASA5_CYJHYuiZpNL_6Vt2zvAcQvggeOHTu239NiPXEJyuNhIqRCg-MMZoSYuq9pxO2ZNVDaxw6ZS1YtylIlvJZS4E5CJhBdxZQ9gyRAzD9X5bxUuvVL0rSDbaO/s1600/Table+4.4a.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 20px;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 20px;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Table 4.4b: ILS Normalization for Untrained face belong to the same class&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0Tn_c_BlX-kaVFz7SlhyICtEmCpHYRXxwEo6c9eQenoyyrZNJDp3bir88JYLqnnlGrDV6ihtZAyxc1yBYWuXh8a0NlmDnRuihJ2KJ60H1_EKkAmWk9YimhCK76CD-1odvexyjxpVttW-U/s1600/Table+4.4b.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0Tn_c_BlX-kaVFz7SlhyICtEmCpHYRXxwEo6c9eQenoyyrZNJDp3bir88JYLqnnlGrDV6ihtZAyxc1yBYWuXh8a0NlmDnRuihJ2KJ60H1_EKkAmWk9YimhCK76CD-1odvexyjxpVttW-U/s1600/Table+4.4b.jpg&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;line-height: 19px;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2011/03/face-recognition-result-and-discussion.html</link><author>noreply@blogger.com (Firdaus)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgiFtEPiP8lNrE3mdZA-69vh-2xN-vXq4ojoVtrQD6K7KffaS-UYcgpOg03JR0daE60ewl1FArDVmYe9YbniJmqrqXBkuVDHlG89UZVnFCyaQO-amI6VYOJaLcGI1Mt-YfEawnForCLQflo/s72-c/Figure+4_1.jpg" height="72" width="72"/><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-4979525663337500933</guid><pubDate>Sun, 20 Mar 2011 17:02:00 +0000</pubDate><atom:updated>2011-03-28T08:41:58.157-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Methodology biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Normalization Technique</category><title>Biometric Recognition Methodology part 4/4 - Normalization</title><description>&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.6 Normalization&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Since the backpropagation neural network uses the sigmoid activation function, its inputs must lie in the range zero (0) to one (1), and most of the values produced by eigenface feature extraction do not fall in that range; a normalization process is therefore required. For the proposed neural network, three different normalization techniques were selected for this reason [Puteh Saad, 2001]: Simple Unit Range (SUR), Improved Unit Range (IUR) and Improved Linear Scaling (ILS). Equations (3.42), (3.43) and (3.44) show the computation needed for SUR, IUR and ILS respectively. The best technique is chosen based on the highest classification rate produced by the backpropagation neural network.&lt;/span&gt;&lt;br /&gt;
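These three techniques can be sketched as follows. The SUR and IUR forms follow directly from the ranges stated here ([0, 1] and [0.1, 0.9]); the ILS form is an assumption, since Equation (3.44) is only shown as an image — a mean/standard-deviation scaling squashed into (0, 1) is used as a plausible stand-in:

```python
import numpy as np

def sur(x):
    """Simple Unit Range: scale features into [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def iur(x):
    """Improved Unit Range: scale features into [0.1, 0.9]."""
    return 0.8 * (x - x.min()) / (x.max() - x.min()) + 0.1

def ils(x):
    """Improved Linear Scaling (assumed form, not Eq. 3.44 verbatim):
    centre by the mean, scale by the standard deviation, then squash
    into (0, 1) so the min and max depend on the data's variance."""
    return 1.0 / (1.0 + np.exp(-(x - x.mean()) / x.std()))
```

Keeping IUR away from the exact 0 and 1 endpoints avoids the flat saturated regions of the sigmoid, which is the usual motivation for the 0.1–0.9 range.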
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmuZmOnAks_FUVauSCpLXChtph94kVRFh8E8DDdnXAuBcdqAj9UiMcMOkiaDniYi-QRYEgQbK1q39iTKoTLeL9vM6za15Wh3GkBH0UtN8fGb0AmpA4mwrU0UMfZzq1duDJGj15sljKdWml/s1600/3_42_44.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmuZmOnAks_FUVauSCpLXChtph94kVRFh8E8DDdnXAuBcdqAj9UiMcMOkiaDniYi-QRYEgQbK1q39iTKoTLeL9vM6za15Wh3GkBH0UtN8fGb0AmpA4mwrU0UMfZzq1duDJGj15sljKdWml/s1600/3_42_44.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;font-family: Verdana, sans-serif; white-space: pre;&quot;&gt;   &lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;With reference to the above equations, the normalized value refers to the new value of each feature in each dimension after the normalization process. Furthermore, xmax and xmin refer to the maximum and minimum feature values respectively. For the ILS computation, the mean and the standard deviation of the features in the same dimension are used. The dimensions for each vector are defined as .&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.7 Cross Validation Setup&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;In order to examine the generalization performance of the backpropagation neural network, the cross validation technique is used. For this purpose each database is divided randomly into 3 groups. The first and second groups contain the same persons (classes), but only the first group is used in the training phase; the second group contains unseen face images of those same persons. The third group contains unknown persons, so its classes are different from those in the first and second groups. The second and third groups are used as the testing patterns.&lt;/span&gt;&lt;br /&gt;
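The three-way split can be sketched as follows; the function name and the split fractions are hypothetical, since the thesis does not state the exact proportions used:

```python
import random

def split_database(images_by_person, train_frac=0.5, unknown_frac=0.2):
    # Hypothetical three-way split mirroring the text: group 1 holds
    # training images of known persons, group 2 holds unseen images of
    # the same persons, group 3 holds entirely unknown persons.
    persons = list(images_by_person)
    random.shuffle(persons)
    n_unknown = max(1, int(len(persons) * unknown_frac))
    unknown, known = persons[:n_unknown], persons[n_unknown:]
    g1, g2 = {}, {}
    for p in known:
        imgs = list(images_by_person[p])
        random.shuffle(imgs)
        k = max(1, int(len(imgs) * train_frac))
        g1[p], g2[p] = imgs[:k], imgs[k:]
    g3 = {p: list(images_by_person[p]) for p in unknown}
    return g1, g2, g3
```

Only group 1 is shown to the network during training; groups 2 and 3 are held out as testing patterns.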
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;According to the research hypothesis, the second group should give a high recognition percentage, since it contains the same persons under different variations of lighting, facial expression or pose. However, the recognition rate should be low for the third group, because it contains different persons and the proposed model should classify them as unknown.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.8 Summary&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;In this chapter the methodology of the proposed system is described in order to achieve the main objective of recognizing an unknown face image. The mathematical aspects of Principal Component Analysis (PCA), including the numerical Jacobi method used to compute eigenvalues and eigenvectors, are combined with the neural network and explained simply. The pseudo code for each method is also shown for straightforward implementation and understanding. Three different models have been proposed and explained in this chapter. Figure 3.19 shows the design of the experimental models conducted in the research. The results are discussed in the next chapter.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUTgKAYEri4kaexsaijMOO2flP0CjWvbI5eglSVFrKl10VVaBOsFQfjuiARENzIxh4VL8hFpqtTJkGd1B8KxS9v74brs0BMneyO5Tz_Ddk7qkL_cCA_oWwCUyotKVySuisr5cwOn85qsBY/s1600/fid_3_19.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUTgKAYEri4kaexsaijMOO2flP0CjWvbI5eglSVFrKl10VVaBOsFQfjuiARENzIxh4VL8hFpqtTJkGd1B8KxS9v74brs0BMneyO5Tz_Ddk7qkL_cCA_oWwCUyotKVySuisr5cwOn85qsBY/s1600/fid_3_19.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.19: Proposed System Design&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2011/03/biometric-recognition-methodology-part_20.html</link><author>noreply@blogger.com (Firdaus)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmuZmOnAks_FUVauSCpLXChtph94kVRFh8E8DDdnXAuBcdqAj9UiMcMOkiaDniYi-QRYEgQbK1q39iTKoTLeL9vM6za15Wh3GkBH0UtN8fGb0AmpA4mwrU0UMfZzq1duDJGj15sljKdWml/s72-c/3_42_44.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-5674971110125659618</guid><pubDate>Sun, 20 Mar 2011 16:55:00 +0000</pubDate><atom:updated>2011-04-12T01:09:02.991-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Feature Extraction</category><category domain="http://www.blogger.com/atom/ns#">Methodology biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Neural Network</category><category domain="http://www.blogger.com/atom/ns#">Neural Network Implementation</category><title>Biometric Recognition Methodology part 3/4 - Artificial Neural Network Implementations</title><description>&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.4 Artificial Neural Network Implementations&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;The capability of neural networks to differentiate patterns is the reason the backpropagation neural network is chosen to classify unknown face images. In this thesis, the proposed system implements the binary sigmoid function in the training phase. The binary sigmoid has a normalized range of 0 to 1 and can be described as;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXBkFczum_SS46QyItu3ylEvV-m1fQxX6VfHDTtOe6qz7OezijXlsl6DP7uDv61y_bZcUBjghB8D51X0O9huuc54fotm_qh90HVu0ULMz0bDnOn97u0IurHZJxr2ZZGp0Kgmy3cgOyhlc2/s1600/3_28.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXBkFczum_SS46QyItu3ylEvV-m1fQxX6VfHDTtOe6qz7OezijXlsl6DP7uDv61y_bZcUBjghB8D51X0O9huuc54fotm_qh90HVu0ULMz0bDnOn97u0IurHZJxr2ZZGp0Kgmy3cgOyhlc2/s1600/3_28.jpg&quot; /&gt;&lt;/a&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;(3.28)&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;where c controls the firing angle of the sigmoid.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjwj0GmXrESNpPWNi8R7CPoItqoOnCvjTX_3CKzHyHlZG8btYAqYbxg5t-Dm7ikdP-euCPY8Cq8ul7AvO4_MFMGQtBUdcT3cX_VGIEIdxjDfEBVAigR4T9B1JqSsDbJB6ZI4omkR4mStkK/s1600/fig_3_8.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;264&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhjwj0GmXrESNpPWNi8R7CPoItqoOnCvjTX_3CKzHyHlZG8btYAqYbxg5t-Dm7ikdP-euCPY8Cq8ul7AvO4_MFMGQtBUdcT3cX_VGIEIdxjDfEBVAigR4T9B1JqSsDbJB6ZI4omkR4mStkK/s320/fig_3_8.jpg&quot; width=&quot;320&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;Figure 3.8: The sigmoid activation function with different values of c&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;From Figure 3.8, when c is large the sigmoid behaves like a threshold function, and when c is small it becomes more like a straight line (linear). If the value of c is large, learning is faster but a lot of information is lost; with a small value of c more information is retained, although learning is very slow. Because this function is differentiable, it enables the backpropagation algorithm to adapt the lower layers of weights in a multilayer neural network [Marzuki Khalid, 2005]. This backpropagation algorithm is explained in the following section.&lt;/span&gt;&lt;br /&gt;
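The binary sigmoid and its derivative can be sketched directly; the form f(x) = 1 / (1 + exp(-cx)) is the standard binary sigmoid and is assumed to match equation (3.28) in the image above:

```python
import math

def binary_sigmoid(x, c=1.0):
    # Binary sigmoid with output in (0, 1); c (the author's
    # "firing angle") controls the steepness of the curve.
    return 1.0 / (1.0 + math.exp(-c * x))

def binary_sigmoid_deriv(y, c=1.0):
    # Derivative expressed in terms of the output y = f(x),
    # the form used when backpropagating the error.
    return c * y * (1.0 - y)
```

A larger c gives a steeper, more threshold-like curve; a smaller c gives a flatter, more linear one, matching the behaviour described for Figure 3.8.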
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.4.1 Backpropagation Algorithm&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The backpropagation algorithm involves a training phase and a testing phase. The training phase is set up with the following information;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;a)&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Obtain the set of training patterns from the previous weight feature vectors .&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;b)&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Set up the number of input neurons , hidden neurons &amp;nbsp;and output neurons .&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;c)&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Set the learning rate &amp;nbsp;and the momentum rate .&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;d)&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Initialize all connection weights &amp;nbsp;within the range [-0.5, 0.5].&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;e)&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Assume the firing angle of the logistic activation function, .&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;f)&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Initialize the bias weight &amp;nbsp;with value 1 to speed up the convergence process.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;g)&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Set the minimum error, .&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;h)&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Continue the process from Figure 3.9.&lt;/span&gt;&lt;br /&gt;
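The setup steps above can be sketched as follows. The network sizes and the parameter values (n, h, m, eta, alpha, min_error) are hypothetical, since the actual symbols and values appear only in the original figures:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical layer sizes: n input, h hidden, m output neurons (step b).
n, h, m = 30, 12, 4

eta, alpha = 0.3, 0.7    # learning rate and momentum rate (step c, assumed values)
c = 1.0                  # firing angle of the logistic activation (step e)
min_error = 0.001        # minimum error stopping criterion (step g, assumed value)

# Step d: connection weights drawn uniformly from [-0.5, 0.5].
w_ih = rng.uniform(-0.5, 0.5, size=(h, n))   # input-to-hidden weights
w_ho = rng.uniform(-0.5, 0.5, size=(m, h))   # hidden-to-output weights

# Step f: bias weights initialized to 1 to speed up convergence.
b_h = np.ones(h)
b_o = np.ones(m)
```

With these in place, training continues with the forward and backward passes of Figure 3.9.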
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFSt-TrDvUNpMNLt2Z2U-ZqBT1g70w5xZFgo1O2IA4QrIXLUbUDkd0JgW47eZia0V2z_OzHseFcAVkTICdAsU97ZZDnQB6wf7-sHHgOxL2wrEv5LD5uFCPBj0_fXElbdhfmDNXxqzKaGkE/s1600/fig_3_9.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFSt-TrDvUNpMNLt2Z2U-ZqBT1g70w5xZFgo1O2IA4QrIXLUbUDkd0JgW47eZia0V2z_OzHseFcAVkTICdAsU97ZZDnQB6wf7-sHHgOxL2wrEv5LD5uFCPBj0_fXElbdhfmDNXxqzKaGkE/s1600/fig_3_9.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;Figure 3.9: Diagram for training phase using Backpropagation Neural Network&lt;/span&gt;&lt;/div&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Once the training is finished, the optimal weights &amp;nbsp;and &amp;nbsp;are chosen. A large number of training patterns makes the network generalize better. The optimal weights are then utilized in the testing phase described in Figure 3.10. The diagram shows how recognition is performed for an unknown face image: the feature vector of the unknown face image is treated as the input to the neural network.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-NH2tZ429t3eCt-kh2as0C0y37I2EhicKuLTRw1VItVYVXOt8n81dXaMI_5JHEHyYMnP8aofH6l5vv1-rVi1pwD0YJti4gezIXtb-iaX_IdtwBKvE4EJv4N4DPNT9tLaHETLLA7JZh2DA/s1600/fig_3_10.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg-NH2tZ429t3eCt-kh2as0C0y37I2EhicKuLTRw1VItVYVXOt8n81dXaMI_5JHEHyYMnP8aofH6l5vv1-rVi1pwD0YJti4gezIXtb-iaX_IdtwBKvE4EJv4N4DPNT9tLaHETLLA7JZh2DA/s1600/fig_3_10.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.10: Diagram for Testing Phase&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;During the feed-forward pass, each input neuron receives an input signal and transmits it to each hidden neuron (equation 3.29), which applies the activation function (equation 3.28) and passes the result to the output layer (equation 3.30). Each output neuron applies the activation function to obtain the network output (equations 3.31 and 3.32). The network output is finally compared with the target value and the error is obtained (equation 3.33). The forward propagation algorithm is shown in Figure 3.11.&lt;/span&gt;&lt;br /&gt;
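The feed-forward pass can be sketched compactly. The standard two-layer form is assumed here, since equations (3.29) to (3.32) themselves are in the figure images; `w_ih` and `w_ho` are hypothetical names for the input-to-hidden and hidden-to-output weight matrices:

```python
import numpy as np

def sigmoid(x, c=1.0):
    # Binary sigmoid activation (equation 3.28).
    return 1.0 / (1.0 + np.exp(-c * x))

def forward(x, w_ih, b_h, w_ho, b_o, c=1.0):
    # Feed-forward pass: hidden net input and activation, then
    # output net input and activation (assumed standard form of
    # equations 3.29 to 3.32).
    z_h = sigmoid(w_ih.dot(x) + b_h, c)   # hidden layer output
    y = sigmoid(w_ho.dot(z_h) + b_o, c)   # network output
    return z_h, y
```

The error of equation (3.33) is then obtained by comparing `y` with the target pattern.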
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2TmYAvrXqqnhgNG6vB9EYw4dBvcqLiNxouq2lei8pd8P-ZK8tJ_acA0rWbhZWF9SDHRi59j05T8dYlCcUQ8T48dgAsQCmta0VNUueA7LtN2t-OV4Bya6b2DQzwUtWezWHlICsVTCvfXMA/s1600/fig_3_11.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj2TmYAvrXqqnhgNG6vB9EYw4dBvcqLiNxouq2lei8pd8P-ZK8tJ_acA0rWbhZWF9SDHRi59j05T8dYlCcUQ8T48dgAsQCmta0VNUueA7LtN2t-OV4Bya6b2DQzwUtWezWHlICsVTCvfXMA/s1600/fig_3_11.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;Figure 3.11: Forward Propagation algorithm&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;font-family: Verdana, sans-serif; white-space: pre;&quot;&gt; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;As the training phase involves the calculation and backpropagation of the associated error until the network generalizes acceptably, the error from equation 3.33 is examined. If the error is unacceptable, the error signals at the output layer (equation 3.34) and the hidden layer (equation 3.35) are determined (Figure 3.12).&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu187UvMzoqzDNRM3YSpywXkkBDaK9WWeuteXdwdQ_IxkbZO8_pdI_-vFLVzHWgPGsiOT8B56qxwfXq169vt8hHqHe9FfvDrUVDVdjC3h2WOwMHEDni1lriBdFIv0idXjb59NooKwpI-9k/s1600/fig_3_12.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhu187UvMzoqzDNRM3YSpywXkkBDaK9WWeuteXdwdQ_IxkbZO8_pdI_-vFLVzHWgPGsiOT8B56qxwfXq169vt8hHqHe9FfvDrUVDVdjC3h2WOwMHEDni1lriBdFIv0idXjb59NooKwpI-9k/s1600/fig_3_12.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.12: Backward propagation Algorithm&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Once the backward propagation has been applied, the weights are updated. The weights between the output and hidden layers (equations 3.36 and 3.37) and between the hidden and input layers (equations 3.38 and 3.39) are updated using the algorithm in Figure 3.13.&lt;/span&gt;&lt;br /&gt;
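Taken together, the error signals of Figure 3.12 and the weight adaptation of Figure 3.13 amount to one training step. The delta and momentum forms below are the standard ones and are assumed to correspond to equations (3.34) to (3.39), which are only available as images:

```python
import numpy as np

def sigmoid(x, c=1.0):
    # Binary sigmoid activation (equation 3.28).
    return 1.0 / (1.0 + np.exp(-c * x))

def train_step(x, t, w_ih, w_ho, b_h, b_o, prev, eta=0.3, alpha=0.7, c=1.0):
    # Forward pass (equations 3.29 to 3.32).
    z_h = sigmoid(w_ih.dot(x) + b_h, c)
    y = sigmoid(w_ho.dot(z_h) + b_o, c)
    # Error signals at the output and hidden layers
    # (assumed standard form of equations 3.34 and 3.35).
    delta_o = c * y * (1.0 - y) * (t - y)
    delta_h = c * z_h * (1.0 - z_h) * w_ho.T.dot(delta_o)
    # Weight changes with a momentum term alpha on the previous
    # change (assumed form of equations 3.36 to 3.39).
    d_ho = eta * np.outer(delta_o, z_h) + alpha * prev["ho"]
    d_ih = eta * np.outer(delta_h, x) + alpha * prev["ih"]
    w_ho += d_ho
    w_ih += d_ih
    b_o += eta * delta_o
    b_h += eta * delta_h
    prev["ho"], prev["ih"] = d_ho, d_ih
    return 0.5 * np.sum((t - y) ** 2)
```

Repeating this step over all training patterns until the error falls below the chosen minimum constitutes the training phase.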
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNc4H3SxxDDyHs1JWNN7Bw9L6a3JzZc7IAokOnUImQKMW8rMeuXHOWMPEv8nn4eFjm1ozunqTBPQbAtfVTANZHHojmxKoRg1NFcjfpXkRk02teeRetMRmswz2i7RiKWvfzr7nF5BI3o4ZE/s1600/fig_3_13.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNc4H3SxxDDyHs1JWNN7Bw9L6a3JzZc7IAokOnUImQKMW8rMeuXHOWMPEv8nn4eFjm1ozunqTBPQbAtfVTANZHHojmxKoRg1NFcjfpXkRk02teeRetMRmswz2i7RiKWvfzr7nF5BI3o4ZE/s1600/fig_3_13.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.13: Weight adaptation algorithm&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;In the training phase, the weights and the biases of the network are iteratively adjusted to minimize the network error. The default performance function for a feedforward network is the mean square error (MSE), the average squared error between the network output k and the target output t, illustrated as;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_-evM7JUoRlW787DwR7Qb8OHFTaV_lbXlIGFe1SA2tXfeCr0orQwxAO4RC9qCmzyohFrml68qqsSsXq_47Jc_DMVJfWq6Vfyalysckz1wzo05rRQA5dSL90JuinCu56M07DT79Wbf9LvS/s1600/3_40.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_-evM7JUoRlW787DwR7Qb8OHFTaV_lbXlIGFe1SA2tXfeCr0orQwxAO4RC9qCmzyohFrml68qqsSsXq_47Jc_DMVJfWq6Vfyalysckz1wzo05rRQA5dSL90JuinCu56M07DT79Wbf9LvS/s1600/3_40.jpg&quot; /&gt;&lt;/a&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;(3.40)&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
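The MSE performance function of equation (3.40) reduces to a one-line computation:

```python
import numpy as np

def mse(outputs, targets):
    # Mean square error: the average squared difference between
    # the network outputs and the target outputs (equation 3.40).
    outputs, targets = np.asarray(outputs), np.asarray(targets)
    return np.mean((targets - outputs) ** 2)
```

Training stops once this value falls below the minimum error set during initialization.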
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.5 Classifier Models&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;The classifier is the last phase, where the feature vector of the unknown face image is categorized by the chosen classifier model to decide whether the face is recognized or not. Since the research proposes three different models, each model has its own classifier with different criteria. These models are explained in the following sections.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.5.1 Model A Classifier&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-tab-span&quot; style=&quot;font-family: Verdana, sans-serif; white-space: pre;&quot;&gt; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Model A implements the traditional Euclidean distance [Turk M, 1991; Atalay I, 1996], so no training phase is required. The trained feature vectors are kept in the database together with their corresponding class (person). Classification is performed by comparing the feature vectors in the database with the feature vector of the input face image. The comparison is given in equation 3.41.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi68Zfj44QkVcmR_DCUA_WnQmKPkESKZh37aK-UA93vNdGkp8v1HxVh1aF6UxlnPbN4KVJqOACc-s8PzQHl8tyQoW0A9Ku1ndgnUj9aMcrgBn4_fboPmVgCdRjDipBR6ixAxmDr6UjC4Kb6/s1600/3_41.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi68Zfj44QkVcmR_DCUA_WnQmKPkESKZh37aK-UA93vNdGkp8v1HxVh1aF6UxlnPbN4KVJqOACc-s8PzQHl8tyQoW0A9Ku1ndgnUj9aMcrgBn4_fboPmVgCdRjDipBR6ixAxmDr6UjC4Kb6/s1600/3_41.jpg&quot; /&gt;&lt;/a&gt;&amp;nbsp;(3.41)&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;If the comparison falls within the threshold, the input face image is classified as “known”; otherwise it is classified as “unknown”. If the comparison also falls into its corresponding class, the image is classified as “recognized”.&lt;/span&gt;&lt;br /&gt;
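The Model A decision rule can be sketched as a nearest-neighbour search; the function and the database layout (one stored feature vector per class) are illustrative assumptions:

```python
import math

def classify(features, database, threshold):
    # Model A sketch: compare the input feature vector with every
    # stored feature vector by Euclidean distance (equation 3.41).
    # Within the threshold the face is "known" and assigned to the
    # nearest class; otherwise it is "unknown".
    best = min(database, key=lambda label: math.dist(features, database[label]))
    if math.dist(features, database[best]) > threshold:
        return "unknown", None
    return "known", best
```

When the nearest class is also the image's true class, the result counts as “recognized” in the sense used above.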
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.5.2 Model B Classifier&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Unlike the previous model, the proposed Model B requires a training phase. Figure 3.14 shows the Model B architecture [Puteh Saad, 2001] where, as mentioned earlier, the feature vectors are used as input to the network.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8NGw20eBzSki3In6PiZSEc83VJE-rZi6p2CIIs5PbAuCC5pb7_V06yHUuk6sRxbWY0KieRM2HUFXxV_QdDb_2FtgZoNjvuikrfujx7h3DYzBJEbo2XiX7o10gzZc4NCyPhW5ikSMNrW3o/s1600/fig_3_14.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh8NGw20eBzSki3In6PiZSEc83VJE-rZi6p2CIIs5PbAuCC5pb7_V06yHUuk6sRxbWY0KieRM2HUFXxV_QdDb_2FtgZoNjvuikrfujx7h3DYzBJEbo2XiX7o10gzZc4NCyPhW5ikSMNrW3o/s1600/fig_3_14.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;Figure 3.14: Proposed Model B Classifier Architecture&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;As Model B uses a neural network for classification, the numbers of hidden neurons and output neurons have to be identified. Since only a single neural network is used, the network requires multiple output neurons to differentiate between persons (Figure 3.15).&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMbXHlRIBrtjjUqL6iL5Zi7fY6jXjY3xOvvHTP1ZHrmYiFb8_BzzfnwKFu7scGCXhOQPuSqTs6YNuS01zKmxCi3hhIb3EjZWQALmSj7q-hq9Peh1AbXPL9rsQVFD5d1f_urZMvyitIDaX7/s1600/fig_3_15.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhMbXHlRIBrtjjUqL6iL5Zi7fY6jXjY3xOvvHTP1ZHrmYiFb8_BzzfnwKFu7scGCXhOQPuSqTs6YNuS01zKmxCi3hhIb3EjZWQALmSj7q-hq9Peh1AbXPL9rsQVFD5d1f_urZMvyitIDaX7/s1600/fig_3_15.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;Figure 3.15: A multiple Output neuron for proposed model B&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The total number of output neurons in Model B is determined by the number of classes used in the training phase. The number of classes utilized in the training is represented by &amp;nbsp;where &amp;nbsp;is the total number of classes used in the training phase (Figure 3.16). Each class &amp;nbsp;is then converted into a set of unique binary values that represent its identity for learning and recognition (Figure 3.17).&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLqdCHSY7DP5m6ZtSZm-eVn-JV0jVSZG5dmlCg9Lb_QZSkT5ASzhsz05UplB3E_moBBhpaZTMf9hd2ky16cV_8H00i0zJZxQjXWBq72lhT8MotalL1OyM-Vk7yope76WFq7DTJ5bfRPRdH/s1600/fig_3_16.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLqdCHSY7DP5m6ZtSZm-eVn-JV0jVSZG5dmlCg9Lb_QZSkT5ASzhsz05UplB3E_moBBhpaZTMf9hd2ky16cV_8H00i0zJZxQjXWBq72lhT8MotalL1OyM-Vk7yope76WFq7DTJ5bfRPRdH/s1600/fig_3_16.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;Figure 3.16: &amp;nbsp;Algorithm to generate the number of output node&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJeRg5JwGQdb0bQCDIte0w1b-jm_4aRPBqH5lzAVy0YrQ5Q4AEYVYbIYqb7qZDQa2j_oo9PEY-kvEGHOeInMf40eclzhlPketDp3xlDsomkq2QoULIib6cv-RbZqgS7vJZABcENLn9NcVh/s1600/fig_3_17.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJeRg5JwGQdb0bQCDIte0w1b-jm_4aRPBqH5lzAVy0YrQ5Q4AEYVYbIYqb7qZDQa2j_oo9PEY-kvEGHOeInMf40eclzhlPketDp3xlDsomkq2QoULIib6cv-RbZqgS7vJZABcENLn9NcVh/s1600/fig_3_17.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;Figure 3.17: Algorithm to convert decimal number to binary&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
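The two algorithms in Figures 3.16 and 3.17 can be sketched in Python (a minimal illustration; the function names are mine, and it is assumed that classes are numbered 0 onward and encoded with the smallest bit-width that covers every index):

```python
import math

def num_output_nodes(num_classes):
    # Smallest number of output neurons (bits) that can represent
    # every class index 0 .. num_classes-1 (assumption).
    if num_classes > 1:
        return math.ceil(math.log2(num_classes))
    return 1

def to_binary_target(class_index, n_bits):
    # Convert a decimal class index to a fixed-width binary target vector.
    bits = []
    value = class_index
    for _ in range(n_bits):
        bits.append(value % 2)
        value //= 2
    bits.reverse()
    return bits

# Example: 8 classes need 3 output neurons; class 5 becomes [1, 0, 1].
n = num_output_nodes(8)
target = to_binary_target(5, n)
```

Each class's binary code then serves as that class's output target during backpropagation training.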
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Once the training phase is finished, the optimal weights are obtained. Model B recognizes an unknown face image by feeding it to the neural network; the trained output target nearest to the network&#39;s response determines the class, or individual, assigned to that unknown face image.&lt;/span&gt;&lt;br /&gt;
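This nearest-answer rule can be sketched as follows (a minimal illustration, assuming the network output is a real-valued vector and the stored targets are the binary class codes used in training):

```python
def nearest_class(output, targets):
    # targets: dict mapping class id to its binary target vector.
    # Return the class whose target is closest (squared Euclidean
    # distance) to the network's output vector.
    best_class, best_dist = None, float('inf')
    for class_id, target in targets.items():
        dist = sum((o - t) ** 2 for o, t in zip(output, target))
        if best_dist > dist:
            best_dist = dist
            best_class = class_id
    return best_class

# Four classes encoded on two output neurons (illustrative values).
targets = {0: [0, 0], 1: [0, 1], 2: [1, 0], 3: [1, 1]}
assert nearest_class([0.1, 0.9], targets) == 1
```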
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.5.3 Model C Classifier&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Unlike Model B, the last approach, Model C, uses multiple neural networks, each with a single output neuron [Qing Jiang]. &amp;nbsp;The number of neural networks depends on the total number of classes used in the training phase. Each neural network (Figure 3.18) has one output neuron whose target is set to 0 or 1.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-tab-span&quot; style=&quot;font-family: Verdana, sans-serif; white-space: pre;&quot;&gt; &lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjM-A0Z4RyPtoIKzakFxm_FIam2Pqe-SdViGrvi-HTQtvQtYDbkqfxWzKesyI6FpbTpfIUzzUB2UAHwdnFJcn9GGX0vzyGQiqrjPgDmHjpNJbIxHBFLqIfOB_8qsgDHgHresutfs0tvFToT/s1600/fig_3_18.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjM-A0Z4RyPtoIKzakFxm_FIam2Pqe-SdViGrvi-HTQtvQtYDbkqfxWzKesyI6FpbTpfIUzzUB2UAHwdnFJcn9GGX0vzyGQiqrjPgDmHjpNJbIxHBFLqIfOB_8qsgDHgHresutfs0tvFToT/s1600/fig_3_18.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;Figure 3.18: A multilayer with one Output neuron for Model C&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;In Model C, the feature vectors of each class are trained in a separate neural network, although the number of hidden layers is kept identical across the networks. During each training phase, the face images belonging to the class of a given network have their output target set to 1, while all other face images are set to 0.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Model C classifies an unknown face image by presenting its feature vector to every class neural network. The network whose output neuron differs from the target value 1 by less than the error threshold identifies the class assigned to that unknown input face image.&lt;/span&gt;&lt;br /&gt;
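A hedged sketch of this one-network-per-class decision rule (the trained per-class networks are replaced here by stand-in scoring functions, and the function name and threshold value are illustrative):

```python
def classify_model_c(feature_vector, class_networks, eps=0.1):
    # class_networks: dict mapping class id to a trained network's
    # forward function, assumed to return a scalar in [0, 1].
    # A class is accepted when its network's output differs from the
    # target value 1 by less than the error threshold eps.
    for class_id, forward in class_networks.items():
        error = abs(1.0 - forward(feature_vector))
        if eps > error:
            return class_id
    return None  # no network claims this face image

# Toy stand-ins for two trained per-class networks (assumption).
nets = {0: (lambda v: 0.2), 1: (lambda v: 0.97)}
assert classify_model_c([0.5], nets) == 1
```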
&lt;div&gt;&lt;br /&gt;
&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2011/03/biometric-recognition-methodology-part.html</link><author>noreply@blogger.com (Firdaus)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXBkFczum_SS46QyItu3ylEvV-m1fQxX6VfHDTtOe6qz7OezijXlsl6DP7uDv61y_bZcUBjghB8D51X0O9huuc54fotm_qh90HVu0ULMz0bDnOn97u0IurHZJxr2ZZGp0Kgmy3cgOyhlc2/s72-c/3_28.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-5539323089575025140</guid><pubDate>Sun, 21 Nov 2010 15:52:00 +0000</pubDate><atom:updated>2011-03-28T08:40:03.518-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Feature Extraction</category><category domain="http://www.blogger.com/atom/ns#">Methodology biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Report Outline</category><title>Biometric Recognition Methodology part 2/4 - Feature Extraction</title><description>&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.3 Feature Extraction &lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: -webkit-auto;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Previously, each face image $\Gamma_i$ of size &lt;i&gt;X x Y&lt;/i&gt; was converted into a large matrix in which each of the &lt;i&gt;M&lt;/i&gt; rows represents one image and the number of columns is &lt;i&gt;P = XY&lt;/i&gt;, yielding the difference matrix A of size &lt;i&gt;(M x P)&lt;/i&gt;.&amp;nbsp; This section (Figure 3.5) describes the computation of eigenvalues and eigenvectors using Jacobi’s method, dimension reduction, eigenface transformations, feature-vector representations, and how the eigenfaces are used to reconstruct the face images.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF9m6ECZJMrVdGLdAqc8XCUQKF1o89aDrzkgUW1j2q_egCktZdYx3WcB6jld1epy8eQrHb3mdT0VPR-siLP0whUcZSqV_2bLJM1wKRArn1c_-UF95D_hyvqxgu2AYMiJxdfLju3dR5m2Ob/s1600/Diagram+for+Eigenfaces+Formations.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;195&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF9m6ECZJMrVdGLdAqc8XCUQKF1o89aDrzkgUW1j2q_egCktZdYx3WcB6jld1epy8eQrHb3mdT0VPR-siLP0whUcZSqV_2bLJM1wKRArn1c_-UF95D_hyvqxgu2AYMiJxdfLju3dR5m2Ob/s400/Diagram+for+Eigenfaces+Formations.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;Figure 3.5: Diagram for Eigenfaces Formations&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.3.1 Eigenvalues and Eigenvectors implementations&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; In the PCA approach, the eigenvalues and eigenvectors play a major role in forming the eigenfaces. Moreover, identifying the eigenvalues is the most challenging aspect of the eigenfaces approach [Heath, 2002]. The process begins by creating the covariance matrix;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $C = A \times A^T = \frac{1}{M} \sum_{i=1}^{M}\phi_i\phi_i^T$&amp;nbsp;&amp;nbsp; (3.4)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;with $A^T$ the transpose of &lt;i&gt;A&lt;/i&gt;, so that the covariance matrix &lt;i&gt;C&lt;/i&gt; has size &lt;i&gt;(M x M)&lt;/i&gt;. The covariance matrix is symmetrical about the main diagonal, which is an important property of the eigenfaces method. For a face image of &lt;i&gt;XY&lt;/i&gt; pixels, a directly computed covariance matrix would have size &lt;i&gt;P x P&lt;/i&gt;, where &lt;i&gt;P = XY&lt;/i&gt;; such a matrix causes complexity in both computation and speed. The proposed eigenfaces method therefore calculates the eigenvectors of the &lt;i&gt;M x M&lt;/i&gt; matrix, where &lt;i&gt;M&lt;/i&gt; equals the number of images in the training set, and obtains the eigenvectors of the &lt;i&gt;P x P&lt;/i&gt; matrix from the eigenvectors of the &lt;i&gt;M x M&lt;/i&gt; matrix.&lt;/span&gt;&lt;br /&gt;
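The size argument above can be illustrated with a small NumPy sketch (random stand-in data, not the thesis implementation): an eigenvector v of the small (M x M) matrix A A^T maps to an eigenvector A^T v of the large (P x P) matrix A^T A, with the same eigenvalue.

```python
import numpy as np

M, P = 10, 2500                    # 10 training images of 50x50 pixels (P = XY)
rng = np.random.default_rng(0)
A = rng.standard_normal((M, P))    # difference matrix, one image per row (assumption)

# Small surrogate covariance matrix: (M x M) instead of (P x P).
C_small = A @ A.T
eigvals, V = np.linalg.eigh(C_small)   # symmetric matrix, so eigh applies

# Map the eigenvector with the largest eigenvalue back to the big matrix.
u = A.T @ V[:, -1]
big = A.T @ A
# Verify (A^T A) u equals lambda * u up to numerical tolerance.
assert np.allclose(big @ u, eigvals[-1] * u)
```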
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; Since the covariance matrix C is symmetrical, only the values above the main diagonal need to be considered, because entry (a,b) equals entry (b,a). The procedure continues until every value above the main diagonal is smaller than a tolerance $\epsilon$ (for example, $\epsilon = 0.0000001$). The eigenvalue algorithm is shown in Figure 3.6.&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicvDhFE0_NYZXVqdlOlmGazFN5kfmVUO_K1Z4R72zCcyxlCG0gYqXn85ZKdfo9fGlZLF2xfE63NRVVIFDLzdfkiEV9SpWu4p9eZoInHHTm0oAJfMQ0yVXGmZzn3wPDfwxDcxYgzOOZFefS/s1600/Figure+3_6_Eigenvalues+Algorithm.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;640&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEicvDhFE0_NYZXVqdlOlmGazFN5kfmVUO_K1Z4R72zCcyxlCG0gYqXn85ZKdfo9fGlZLF2xfE63NRVVIFDLzdfkiEV9SpWu4p9eZoInHHTm0oAJfMQ0yVXGmZzn3wPDfwxDcxYgzOOZFefS/s640/Figure+3_6_Eigenvalues+Algorithm.jpg&quot; width=&quot;496&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.6: Eigenvalues Algorithm&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
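The sweep described above can be sketched as a classical Jacobi iteration (a textbook sketch, not the exact pseudocode of Figures 3.6 and 3.7): the largest off-diagonal entry is rotated to zero until everything above the main diagonal falls below the tolerance, at which point the diagonal holds the eigenvalues.

```python
import numpy as np

def jacobi_eigenvalues(C, eps=1e-7, max_iter=1000):
    # C is assumed symmetric; returns (eigenvalues, eigenvector matrix V).
    A = np.array(C, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_iter):
        # Coordinates (p, q) of the largest entry above the main diagonal.
        off = np.abs(np.triu(A, k=1))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if eps > off[p, q]:
            break                      # all off-diagonal values are almost zero
        # Rotation angle that zeroes A[p, q]; c and s feed the rotation J.
        theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p], J[q, q], J[p, q], J[q, p] = c, c, s, -s
        A = J.T @ A @ J                # similarity transform keeps eigenvalues
        V = V @ J                      # accumulate rotations into eigenvectors
    return np.diag(A), V
```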
&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;During each iteration used to find the eigenvalues, the coordinates &lt;i&gt;(p,q)&lt;/i&gt; and the values &lt;i&gt;c, s&lt;/i&gt; should be stored for later use in building the eigenvector matrix. The algorithm in Figure 3.7 is performed to find the set of eigenvectors.&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhG95EZ8ayDRY9E1Lv10x2e_xw7LZxT90ch1BaZDA3eL4IF33S5SR1nQI6W-lbRlKoUKg53y-hBql3F_O7S8JeDwxNYq3z_80TtDqydxkBcZ9YGmXbDFXxTQzjqZezvKQ7juCJxI3pGbqHe/s1600/Figure+3_7_Eigenvector+algorithm.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;457&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhG95EZ8ayDRY9E1Lv10x2e_xw7LZxT90ch1BaZDA3eL4IF33S5SR1nQI6W-lbRlKoUKg53y-hBql3F_O7S8JeDwxNYq3z_80TtDqydxkBcZ9YGmXbDFXxTQzjqZezvKQ7juCJxI3pGbqHe/s640/Figure+3_7_Eigenvector+algorithm.jpg&quot; width=&quot;640&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.7: &amp;nbsp;Eigenvector algorithm&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;From Figures 3.6 and 3.7, the set of eigenvalues and the eigenvector matrix of size &amp;nbsp;&lt;i&gt;(M x M)&lt;/i&gt; &amp;nbsp;are obtained as:&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqLwwAkek1M07115c5LnUhIHYDL8-Xa27wpgwK7rNbRs4qPz7enTWLgnF9fSk3hXaApPxlXEuaFsad8_ZRYaHLyeWXf-Nz-I4ahybPebecIQ1qp-9isuHgjlQdUnzGll96jsQT60PNUc9x/s1600/formula_3_14_and_3_15.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;328&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhqLwwAkek1M07115c5LnUhIHYDL8-Xa27wpgwK7rNbRs4qPz7enTWLgnF9fSk3hXaApPxlXEuaFsad8_ZRYaHLyeWXf-Nz-I4ahybPebecIQ1qp-9isuHgjlQdUnzGll96jsQT60PNUc9x/s640/formula_3_14_and_3_15.jpg&quot; width=&quot;640&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The eigenvalue and eigenvector matrices depend on each other; the relationship can be described as:&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNqX_Y6ffalyMcavaOppyv5QE-i-aRO9jidNACuQK5lWAqO4jNQwhnKoUtyD_FnPI1-cdiAHNt7aIUcpl8pjqlgSDWiTF09ceI9Z5LMiaeM5mYv5FJXf_VN-WgPcFzvmnS0bVhv9hHE_16/s1600/eg_ev.jpg&quot; 
imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;81&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhNqX_Y6ffalyMcavaOppyv5QE-i-aRO9jidNACuQK5lWAqO4jNQwhnKoUtyD_FnPI1-cdiAHNt7aIUcpl8pjqlgSDWiTF09ceI9Z5LMiaeM5mYv5FJXf_VN-WgPcFzvmnS0bVhv9hHE_16/s640/eg_ev.jpg&quot; width=&quot;640&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The eigenvector with the largest eigenvalue is the principal component of the data set: it points through the middle of the data and captures the most significant relationship between the data dimensions. The eigenvalues and eigenvectors are usually arranged from highest to lowest, producing the components in order of significance.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The number of eigenvectors can be reduced from the original eigenvector matrix of size &lt;i&gt;M x M&lt;/i&gt;. After some dimensions have been removed, the eigenvector matrix becomes $M \times M_t$, where $M_t$ is the new number of columns.&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;These eigenvalues and eigenvectors are an important property of the eigenfaces method. If the covariance matrix in equation (3.4) were computed as $C = A^T \times A$, where $A^T$ has size &lt;i&gt;(P x M)&lt;/i&gt; and &lt;i&gt;A&lt;/i&gt; is &lt;i&gt;(M x P)&lt;/i&gt;, its size would be &lt;i&gt;(P x P)&lt;/i&gt;, where&lt;i&gt; P = X x Y&lt;/i&gt;. Such a huge covariance matrix causes computational and time complexity. The matrix transformation used here instead keeps the covariance matrix at size &lt;i&gt;(M x M)&lt;/i&gt;, where &lt;i&gt;M&lt;/i&gt; is the number of face images used in the training phase.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.3.2 Eigenfaces Transformations&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Previously, the eigenvectors $V_{mm}$, the eigenvalues $\lambda_{mm}$, and the covariance matrix $C_{mm} = A_{mp} \times A_{pm}$ were obtained; the matrix sizes are given in the subscripts to clarify the matrix transformation policy,&lt;/span&gt;&lt;br /&gt;
&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;$C_{mm} \times V_{mm} = \lambda_{mm} \times V_{mm}$ &amp;nbsp;(3.16)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;the value of &lt;i&gt;C&lt;/i&gt; is substituted into this equation,&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;both sides are multiplied by &lt;i&gt;A&lt;/i&gt;,&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhckLm_WryBcmgYa2tBBw_M2Rhir60QEQVG5NKGUF38VpMgP9igsqLq7-oddhd00woVEEYYma-5zmXRlnHZPUvh0LFeRRzgb7N7VL-Uuf5CR8sLVjRrJ4l7Ztm5qnBcYCiSlc0_QUXXmxDr/s1600/eq_3%252818%2529.jpg&quot; imageanchor=&quot;1&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;50&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhckLm_WryBcmgYa2tBBw_M2Rhir60QEQVG5NKGUF38VpMgP9igsqLq7-oddhd00woVEEYYma-5zmXRlnHZPUvh0LFeRRzgb7N7VL-Uuf5CR8sLVjRrJ4l7Ztm5qnBcYCiSlc0_QUXXmxDr/s320/eq_3%252818%2529.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;the necessary matrix rearrangements are made. Since $\lambda_i$ is a scalar, the following rearrangement can be done,&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLmUF-H5gETJ-Dhwt6QiZLEZTdmkyhssJ51HxsP5YlIAJczgihI-OGAcBIz357TYejd8TyES02NGv5i-X1PtYb36_287cEzhTYPcaRiUEakKKi8Y74VFbHQKzEoF9asYheHyLbhOpq2UyK/s1600/eq_3%252819%2529.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em; text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;35&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhLmUF-H5gETJ-Dhwt6QiZLEZTdmkyhssJ51HxsP5YlIAJczgihI-OGAcBIz357TYejd8TyES02NGv5i-X1PtYb36_287cEzhTYPcaRiUEakKKi8Y74VFbHQKzEoF9asYheHyLbhOpq2UyK/s400/eq_3%252819%2529.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;group&amp;nbsp;$V_(mm) x&amp;nbsp;A_(mp)$&amp;nbsp;and call a variable &amp;nbsp;$L_(mp) =&amp;nbsp;V_(mm) x&amp;nbsp;A_(mp)$. Next equation showed the&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4k6uG1OgvRRnKbhoFhIYZxVzmjwRe8BWMiwNP5Uiu5qssewwiB3EPAqJxSSWr8aYHcD8riFF4w_ecp7PCoagR8x9gFGLar2ZYi9FyPdyYAMYrxBRW90BMk3r7U2WJTJmTFdmLXQlJ91wV/s1600/eq_3%252820%2529.jpg&quot; imageanchor=&quot;1&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;36&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh4k6uG1OgvRRnKbhoFhIYZxVzmjwRe8BWMiwNP5Uiu5qssewwiB3EPAqJxSSWr8aYHcD8riFF4w_ecp7PCoagR8x9gFGLar2ZYi9FyPdyYAMYrxBRW90BMk3r7U2WJTJmTFdmLXQlJ91wV/s320/eq_3%252820%2529.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;is part of the eigenvectors, with size &lt;i&gt;M x P&lt;/i&gt;.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfX0lh-pC3_IbG1Pp0LdsetlwUi1vp0KdFAAZwf3Tif0-_tENcXDfomWBvzQkVIrKLSQnMz5Vaed5cR1oRJpZZqYQNPp5UC6lh-QXWYOOKUugppztsYVH7tqqogUakSdny05_ULPcm27Tl/s1600/eq_3%252820_1%2529.jpg&quot; imageanchor=&quot;1&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;44&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfX0lh-pC3_IbG1Pp0LdsetlwUi1vp0KdFAAZwf3Tif0-_tENcXDfomWBvzQkVIrKLSQnMz5Vaed5cR1oRJpZZqYQNPp5UC6lh-QXWYOOKUugppztsYVH7tqqogUakSdny05_ULPcm27Tl/s400/eq_3%252820_1%2529.jpg&quot; width=&quot;200&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;To form the eigenfaces from the previous eigenvectors &lt;i&gt;V&lt;/i&gt; of size (&lt;i&gt;M x M)&lt;/i&gt;, with $M_t \le M$ the number of columns kept after dimension reduction, the eigenfaces become:&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisYrqB9_PJsyJbVOFE8qt06EVVlzpngVhP0FnQAK3qhp9kGs-znT9LzRlxZC46tJFuFpqHYsnFmsMjetBBV7d7IVnFKDeR1fjB-AVhh5bE7PauiB6zj2vLD8M95_4OaZQMUWHayb3kXZze/s1600/eq_3%252820_2%2529.jpg&quot; imageanchor=&quot;1&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;57&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEisYrqB9_PJsyJbVOFE8qt06EVVlzpngVhP0FnQAK3qhp9kGs-znT9LzRlxZC46tJFuFpqHYsnFmsMjetBBV7d7IVnFKDeR1fjB-AVhh5bE7PauiB6zj2vLD8M95_4OaZQMUWHayb3kXZze/s320/eq_3%252820_2%2529.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
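The formation step can be made concrete with a NumPy sketch (random stand-in data; the row/column convention chosen for V here is an assumption, since the text writes L = V x A without fixing one): the eigenfaces are obtained by applying the eigenvector matrix to the difference matrix A, and dimension reduction keeps only the M_t rows with the largest eigenvalues.

```python
import numpy as np

M, P, M_t = 10, 2500, 6
rng = np.random.default_rng(1)
A = rng.standard_normal((M, P))        # difference matrix (M x P), stand-in data

eigvals, V = np.linalg.eigh(A @ A.T)   # eigh returns eigenvalues in ascending order

# Arrange components from highest to lowest eigenvalue (order of significance).
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]

L = V.T @ A          # each row is one eigenface of length P
L_reduced = L[:M_t]  # keep only the M_t strongest components
# Feature vector of one training image: its projection onto the eigenfaces.
features = L_reduced @ A[0]
```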
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.3.3 Dimensions Reductions&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The challenge of the proposed system is to reduce the variation between face images caused by environmental changes, such as lighting, which introduce noise into the images. The eigenfaces implementation creates a set of ghost-like face representations that largely ignore such noise, including lighting and facial expression. However, noise can still exist in the set of eigenvectors. Thus, the eigenvectors with lower eigenvalues can be removed, because they mainly encode noise in the image [Yambor WS, 2000] and [Moon H, 2001].&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Because the eigenvalues measure the amount of variance found between images, the last eigenvectors (those with the lowest eigenvalues) can be discarded. Three variations have been proposed for choosing the eigenvectors [Yambor WS, 2000]. In the first, the last 40% of the eigenvectors are simply removed [Moon H, 2001]. The second variation (equation 3.25) keeps the minimum number of eigenvectors needed to guarantee that the energy &lt;i&gt;e&lt;/i&gt; is greater than a threshold (typically 0.9), where the energy is the ratio of the sum of all eigenvalues up to and including &lt;i&gt;i&lt;/i&gt; over the sum of all eigenvalues:&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9ALGyn0RefUKX04OQu8WbsyYGhtN4gZ9qBxP5pw13t2C89gN7FvrHt7l9M5eKj8tsr4xniNgDgbirfCkaOdbmcktNSiqRUtVtKhesY4UFDav-jkBSB0XowR1hxoCZB4_jwioQoPq3AhSx/s1600/eq_3%252821%2529.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9ALGyn0RefUKX04OQu8WbsyYGhtN4gZ9qBxP5pw13t2C89gN7FvrHt7l9M5eKj8tsr4xniNgDgbirfCkaOdbmcktNSiqRUtVtKhesY4UFDav-jkBSB0XowR1hxoCZB4_jwioQoPq3AhSx/s1600/eq_3%252821%2529.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Conversely, it is also possible that the first eigenvectors encode information that is not relevant to identifying the image, such as lighting (Moon, 2001). Thus, the last variation (equation 3.26) depends upon the stretch dimension: the stretch of the &lt;i&gt;i&lt;/i&gt;-th eigenvector is the ratio of its eigenvalue over the largest eigenvalue:&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhml5-Rh0JqCwQa2wrDAAvLMIzzij4LU7kSXOcUbvFAZQe1R3PshMUoLGOiGyjAcNcpfAgxpHwtp3_ISF9THqwatgsj9HCChbI6XKaRmArD2GkCM3ckyykkypoxI7sxjeSoYs6SaJgrP-nd/s1600/eq_3%252822%2529.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;63&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhml5-Rh0JqCwQa2wrDAAvLMIzzij4LU7kSXOcUbvFAZQe1R3PshMUoLGOiGyjAcNcpfAgxpHwtp3_ISF9THqwatgsj9HCChbI6XKaRmArD2GkCM3ckyykkypoxI7sxjeSoYs6SaJgrP-nd/s400/eq_3%252822%2529.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;where the eigenvectors whose stretch is greater than a threshold (0.01) are chosen.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
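The three selection variations above can be sketched in a few lines of NumPy. This is an illustrative sketch only: the function and parameter names are ours, not from the text, and the eigenvalues are assumed to be sorted in descending order.

```python
import numpy as np

def select_eigenvectors(eigvals, eigvecs, method="energy",
                        energy_threshold=0.9, stretch_threshold=0.01):
    """Choose eigenvectors by one of the three variations.

    eigvals: eigenvalues sorted in descending order.
    eigvecs: matching eigenvectors stored as columns.
    """
    if method == "drop40":
        # Variation 1: remove the last 40% of the eigenvectors.
        keep = int(np.ceil(0.6 * len(eigvals)))
    elif method == "energy":
        # Variation 2: smallest number of eigenvectors whose cumulative
        # energy (equation 3.25) reaches the threshold, typically 0.9.
        energy = np.cumsum(eigvals) / np.sum(eigvals)
        keep = int(np.searchsorted(energy, energy_threshold) + 1)
    else:
        # Variation 3: keep eigenvectors whose stretch (ratio of the
        # eigenvalue to the largest eigenvalue) exceeds the threshold.
        stretch = eigvals / eigvals[0]
        keep = int(np.sum(stretch > stretch_threshold))
    return eigvecs[:, :keep]
```

For example, with eigenvalues (4, 3, 2, 1) the energy variation keeps three eigenvectors, since the cumulative energy first reaches 0.9 at the third eigenvalue.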
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.3.4 Feature Vectors&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;font-family: Verdana, sans-serif; white-space: pre;&quot;&gt; &lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt;  &lt;/span&gt;The set of eigenfaces $U_i$, with its size $M_t \times P$, is used to generate feature vectors for both the training images and an unknown face image. For the training phase, the feature vector of each face image (with the mean-difference face $\phi_i$ of size &lt;i&gt;P x 1&lt;/i&gt;) is given by:&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKPch5qmHXj0w8ewbBnWLhPxzF7Vh0Pa6-ErIHXhAyBE0WOm4msYKha3JaQnzZnRqxq0O_nh6ss7YNq2jmwQcYBU2AxQqzSzlmUA0QV8u4t_ZUMcAVNG0LXf-I93yIg9bppRbpTHylZ2ZM/s1600/eq_3%252823%2529.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;48&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiKPch5qmHXj0w8ewbBnWLhPxzF7Vh0Pa6-ErIHXhAyBE0WOm4msYKha3JaQnzZnRqxq0O_nh6ss7YNq2jmwQcYBU2AxQqzSzlmUA0QV8u4t_ZUMcAVNG0LXf-I93yIg9bppRbpTHylZ2ZM/s400/eq_3%252823%2529.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;with &lt;i&gt;k = 1, 2, ..., M&lt;/i&gt; and &lt;i&gt;i = 1, 2, ..., M&lt;/i&gt;. The weight vector is the representation of each training face image, and its size is $M_t \times 1$. Conversely, for an unknown face image of size &lt;i&gt;P x 1&lt;/i&gt; to be classified, its weight features are computed by:&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj95gx1qDSTX9nAOB7jsf50h4ij0Nz8syGZCzz_otxXfzzAXgDajqHzjsdYpomK504ZRfDkqxZG_VMlr8bxzrrOcB0KJlGFzKyOos-kwtVrvCUYF_2nHHlcXmAHmQwGb6ioYJwxcbg47QRm/s1600/eq_3%252824%2529.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;51&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj95gx1qDSTX9nAOB7jsf50h4ij0Nz8syGZCzz_otxXfzzAXgDajqHzjsdYpomK504ZRfDkqxZG_VMlr8bxzrrOcB0KJlGFzKyOos-kwtVrvCUYF_2nHHlcXmAHmQwGb6ioYJwxcbg47QRm/s400/eq_3%252824%2529.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;where the mean face image $\psi$, of size &lt;i&gt;P x 1&lt;/i&gt;, was previously computed in the training phase. The weight feature vector is:&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEmzXcBFHPXRKuE8gcAWI0pXiVqfa4TY_fFTTgAXfEvoKLvwVI1RitoUTRWbi1dTLse_Hg-2ItGAKLt81CjVkQjFPAZCMiqEBtEcY1nlmvfLVlzQ_w8Bf7gHrTk984dhcTXLuyT0FBXCDw/s1600/eq_3%252825%2529.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;42&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgEmzXcBFHPXRKuE8gcAWI0pXiVqfa4TY_fFTTgAXfEvoKLvwVI1RitoUTRWbi1dTLse_Hg-2ItGAKLt81CjVkQjFPAZCMiqEBtEcY1nlmvfLVlzQ_w8Bf7gHrTk984dhcTXLuyT0FBXCDw/s320/eq_3%252825%2529.jpg&quot; width=&quot;320&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;In addition, for the training phase, which includes all the faces, the weight feature vectors form a matrix of size $(M \times M_t)$, where each of the &lt;i&gt;M&lt;/i&gt; rows represents one face identity and each column holds one weight value.&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;Equation (3.25) describes the contribution of each chosen eigenface in representing the training images and the unknown input face image. The feature vectors are then used in the learning phase.&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
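The feature-vector computation described above can be sketched as follows. The matrix shapes follow the text (eigenfaces of size M_t x P, faces of size P); the function name and calling convention are our own illustrative assumptions.

```python
import numpy as np

def project_faces(faces, eigenfaces, mean_face):
    """Project face images onto the eigenfaces to get weight vectors.

    faces:      (M, P) matrix with one flattened face image per row
    eigenfaces: (Mt, P) matrix of chosen eigenfaces U
    mean_face:  (P,) mean face psi from the training phase
    Returns the (M, Mt) weight matrix: one feature vector per face.
    """
    phi = faces - mean_face    # mean-difference faces Phi, (M, P)
    return phi @ eigenfaces.T  # weight matrix Omega, (M, Mt)
```

An unknown face is projected the same way by passing a single-row matrix, yielding its 1 x M_t weight vector.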
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.3.5 Rebuilding a Face Image&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;font-family: Verdana, sans-serif; white-space: pre;&quot;&gt; &lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;white-space: pre;&quot;&gt; &lt;/span&gt;A face can be approximately reconstructed by using its feature vector and the eigenfaces as:&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiBSM8u5d6tf75NecnTdZP0L89DyKDITA7podMV24vUsPtdySpyod4mj3yPI8DxwWyA6LyXwt6_-O0M_az2r8oceto96G0ZqRWBmklpkUlQ4wKIBDIuV1eAqXjTze6BPUlqaiMTYaC74pQ/s1600/eq_3%252826_27%2529.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;187&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjiBSM8u5d6tf75NecnTdZP0L89DyKDITA7podMV24vUsPtdySpyod4mj3yPI8DxwWyA6LyXwt6_-O0M_az2r8oceto96G0ZqRWBmklpkUlQ4wKIBDIuV1eAqXjTze6BPUlqaiMTYaC74pQ/s640/eq_3%252826_27%2529.jpg&quot; width=&quot;640&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-tab-span&quot; style=&quot;font-family: Verdana, sans-serif; white-space: pre;&quot;&gt;    &lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;where the quantity defined in (3.27) is the projected image. Equation (3.26) shows that the face image is rebuilt simply by adding together the weighted eigenfaces (3.27) and the mean face image.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;br /&gt;
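The rebuilding step can be sketched as follows (an illustrative sketch; names are ours, and the eigenfaces are assumed stored as rows of length P):

```python
import numpy as np

def rebuild_face(weights, eigenfaces, mean_face):
    """Approximately reconstruct one face from its feature vector.

    weights:    (Mt,) feature vector of the face
    eigenfaces: (Mt, P) chosen eigenfaces
    mean_face:  (P,) mean face from the training phase
    """
    # Sum each eigenface scaled by its weight, then add the mean face.
    return weights @ eigenfaces + mean_face
```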
&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2010/11/biometric-recognition-methodology-part.html</link><author>noreply@blogger.com (Firdaus)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgF9m6ECZJMrVdGLdAqc8XCUQKF1o89aDrzkgUW1j2q_egCktZdYx3WcB6jld1epy8eQrHb3mdT0VPR-siLP0whUcZSqV_2bLJM1wKRArn1c_-UF95D_hyvqxgu2AYMiJxdfLju3dR5m2Ob/s72-c/Diagram+for+Eigenfaces+Formations.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-4903493836629302517</guid><pubDate>Sat, 26 Jun 2010 18:02:00 +0000</pubDate><atom:updated>2011-03-28T08:37:42.352-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Methodology biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Report Outline</category><title>Biometric Recognition Methodology part 1/4 - Intro and Preprocessing</title><description>&lt;div style=&quot;text-align: center;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;METHODOLOGY&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.1 Introduction&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;This chapter describes the implementation of the chosen method using suitable theory. The methodology describes how the different mathematical techniques are combined to achieve the research objectives. There are four (4) phases in the proposed face recognition system, namely: Preprocessing, Feature Extraction, Training and Recognition. Each phase is briefly described as follows:&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;a)&amp;nbsp;&amp;nbsp; &amp;nbsp;Preprocessing. In this phase, the face dataset acquisition and the preprocessing of the face images are performed.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;b)&amp;nbsp;&amp;nbsp; &amp;nbsp;Feature extraction. The face library images are prepared for the feature extraction phase, which is performed to find the useful features such as the eigenvalues $(\lambda_i)$, eigenvectors $(v_i)$, eigenfaces $(U_i)$ and feature vectors $(\Omega)$. &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;c)&amp;nbsp;&amp;nbsp; &amp;nbsp;Training phase. The trained feature vectors are then used for backpropagation neural network training to generalize the neural network weights for the recognition phase.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;d)&amp;nbsp;&amp;nbsp; &amp;nbsp;Recognition phase. The set of chosen eigenfaces, feature vectors and neural network weights is then used in the recognition phase. Recognition begins by selecting a face image from the face library, which the system treats as the unknown face. &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.1 illustrates the methodology used to recognize an unknown human face and clearly shows where the four phases are located. For training and recognition, three (3) models are proposed in this research. Each phase is then described, together with its algorithm, in this chapter.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyCKUu1WA6zUpxU1xldK2Mh6fHhX6Vq9eV0ZO3tBCm_mDGHAT7IT5Cvv5eWOocVI5wRdOvMAbETdCSbye323Z0dPF-QQso0uD8C3syo8ocOkp9LyClYqvm9HvmIKJBx7SaDqeory08NN_h/s1600/biometric+recognition+model+system.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;246&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyCKUu1WA6zUpxU1xldK2Mh6fHhX6Vq9eV0ZO3tBCm_mDGHAT7IT5Cvv5eWOocVI5wRdOvMAbETdCSbye323Z0dPF-QQso0uD8C3syo8ocOkp9LyClYqvm9HvmIKJBx7SaDqeory08NN_h/s400/biometric+recognition+model+system.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.1: Proposed modeling System&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.2 Preprocessing&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; This section prepares the face images for the feature extraction phase. Four (4) primary steps were performed, namely: face dataset acquisition, format change, face library formation and training set acquisition (Figure 3.2). Each step is described in the following paragraphs.&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMHXbTAXiCysLqHVqSNcjA9XLCTcprRwWTyECml0GCL1PNF4J0ZReMuIkOT7cI2wXebt3BkR59oh4h9HWFovCnPIPJjTpd-wqkZAwj-2fmmnqs8UcCSoe9i-h47q2y9D5goP-SECCVA4qS/s1600/Diagram+for+Preprocessing+Technique.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;118&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMHXbTAXiCysLqHVqSNcjA9XLCTcprRwWTyECml0GCL1PNF4J0ZReMuIkOT7cI2wXebt3BkR59oh4h9HWFovCnPIPJjTpd-wqkZAwj-2fmmnqs8UcCSoe9i-h47q2y9D5goP-SECCVA4qS/s400/Diagram+for+Preprocessing+Technique.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.2: Diagram for Preprocessing Technique&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.2.1 Face Dataset Acquisition&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; The face dataset was collected from internet sources, in particular the Olivetti Research Laboratory (ORL). The ORL dataset includes 10 different images of each of 40 distinct individuals. The grayscale images contain slightly varying lighting and facial expressions, which represent the environmental changes of a real-time setting. However, due to the limitation of the available computational capacity, the experiments took a sample of 150 face images – ten (10) face images of each of 15 persons.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; All face images are stored in a face dataset library in the system, and every action, such as the training phase and eigenface computation, is performed from the face library. After acquisition and preprocessing, each face is added to the face library together with its weight vector. The eigenface weight vector of each image remains empty until a training set is chosen and the eigenfaces are produced.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.2.2 Change Format&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; The face dataset must be prepared for the feature extraction phase. The ORL Face Database files were manually converted into the Portable Grey Map (PGM) format using suitable image processing software. The PGM format was chosen because it is the lowest common denominator of grayscale image file formats.&lt;/span&gt;&lt;br /&gt;
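To illustrate why PGM is such a simple format, a minimal reader for its binary (P5) variant can be written in a few lines. This is an illustrative sketch only: it ignores header comments and assumes an 8-bit maximum gray value.

```python
import re

def read_pgm(path):
    """Minimal reader for binary (P5) PGM files."""
    with open(path, "rb") as f:
        data = f.read()
    # Header: magic number "P5", width, height, maximum gray value,
    # each separated by whitespace, then the raw pixel bytes.
    header = re.match(rb"P5\s+(\d+)\s+(\d+)\s+(\d+)\s", data)
    width, height, maxval = (int(g) for g in header.groups())
    pixels = data[header.end():header.end() + width * height]
    return width, height, maxval, pixels
```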
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.2.3 Face Library Formation Phase&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; For convenience, each face dataset was placed in a separate folder bearing the name of the dataset. Each folder contains subfolders, with each subfolder representing a different individual. Each subfolder name starts with the symbol “s” followed by a number from 1 up to the last individual.&amp;nbsp; The face images in each subfolder are numbered from 1 up to the last face image for that individual.&lt;/span&gt;&lt;br /&gt;
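The folder layout described above can be enumerated programmatically. The sketch below is illustrative only: the function name and the `.pgm` extension are our own assumptions.

```python
import os

def face_image_paths(dataset_root, num_people, images_per_person):
    """Enumerate image paths for the described library layout:
    subfolders s1..sN, with images numbered 1..n inside each."""
    paths = []
    for person in range(1, num_people + 1):
        for img in range(1, images_per_person + 1):
            paths.append(os.path.join(dataset_root, "s%d" % person,
                                      "%d.pgm" % img))
    return paths
```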
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; In the ORL Face Database, the appearance is not synchronized by image number (from 1 to 10) across individuals. For each individual, or class, some of the images were taken at different times, with slightly varying lighting, facial expressions (open/closed eyes, smiling/not smiling) and facial details (glasses/no glasses). The face images were taken against a dark homogeneous background, with the individuals in an upright, frontal position and some tolerance for side movement. The original ORL face images (Figure 3.3) have a size of 92 x 112 pixels. The face images were also resized to 41 x 50 and&amp;nbsp; 20 x 24 to test the PCA and neural network capabilities.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSEpvW81VkcVTK037m5OzdFYGT1jWVBm6JErW5cyuR5k-6BPTQYiLB6a26ISQo_e0g8OSUVof9d1H5t0diJbVkd-Or_EkuKwksQNZSuQa5bKh7AqbjjZ8n6eUfAFILuDCz05eRxh0q3cjj/s1600/ORL+Face+dataset+with+number+formation+phase.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjSEpvW81VkcVTK037m5OzdFYGT1jWVBm6JErW5cyuR5k-6BPTQYiLB6a26ISQo_e0g8OSUVof9d1H5t0diJbVkd-Or_EkuKwksQNZSuQa5bKh7AqbjjZ8n6eUfAFILuDCz05eRxh0q3cjj/s320/ORL+Face+dataset+with+number+formation+phase.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.3: ORL Face dataset with number formation phase&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;3.2.4 Training Set Acquisitions&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Generally, the entire process is carried out using matrix operations such as addition, subtraction and multiplication. Matrix operations were chosen because they are powerful tools for arithmetic computation. In addition, an understanding of matrix transformations is required to implement these methods successfully.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;In the proposed system, the process starts by gathering the face images into one large matrix. Consider the first face image, of size X x Y: M. Turk converted this image into a single row vector with P = XY columns, where the row represents the identity of the face image. This process is repeated for the other face images used in the training phase. Figure 3.4 illustrates how a set of face images is converted into a matrix.&lt;/span&gt;&lt;br /&gt;
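This image-to-matrix conversion can be sketched as follows (an illustrative sketch; the function name is ours):

```python
import numpy as np

def images_to_matrix(face_images):
    """Flatten a list of (X, Y) face images into one (M, P) matrix,
    where P = X*Y and each row is the vector of one face image."""
    return np.stack([img.reshape(-1) for img in face_images])
```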
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_OuukQrhR6a9vQc4RxznAcno46l_aR2NUf7lzDsJagmoiTN2hT3Ir7PbS-6CwGYgiZlhd5_P5uhGELAei0vAakID2eeQdvQUdTeYc98e9I3Z8uzO5FQQC_9jviiunHut8Sxe-NBalwI6C/s1600/Training+set+acquisitions.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;237&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh_OuukQrhR6a9vQc4RxznAcno46l_aR2NUf7lzDsJagmoiTN2hT3Ir7PbS-6CwGYgiZlhd5_P5uhGELAei0vAakID2eeQdvQUdTeYc98e9I3Z8uzO5FQQC_9jviiunHut8Sxe-NBalwI6C/s400/Training+set+acquisitions.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 3.4: Training set acquisitions&lt;/span&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The face acquisition is described by simple mathematical equations: for each training set $\Gamma_1, \Gamma_2, \ldots, \Gamma_M$&amp;nbsp;, the average of the set, with size &lt;i&gt;(1 x P)&lt;/i&gt;, is calculated as:&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $\psi = \frac{1}{M} \sum_{n=1}^{M}\Gamma_n$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (3.1)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Each face differs from the average by a difference vector, also of size&amp;nbsp; &lt;i&gt;(1 x P)&lt;/i&gt;;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $\Phi_i = \Gamma_i - \psi$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (3.2) &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The training set acquisition is completed by stacking these difference vectors into a matrix of size &lt;i&gt;(M x P)&lt;/i&gt;;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&amp;nbsp;&amp;nbsp; $A = (\Phi_1, \Phi_2, \ldots,\Phi_M)$ &amp;nbsp; (3.3)&lt;/span&gt;&lt;/div&gt;</description><link>http://biometric-recognition.blogspot.com/2010/06/biometric-recognition-methodology-part.html</link><author>noreply@blogger.com (Firdaus)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyCKUu1WA6zUpxU1xldK2Mh6fHhX6Vq9eV0ZO3tBCm_mDGHAT7IT5Cvv5eWOocVI5wRdOvMAbETdCSbye323Z0dPF-QQso0uD8C3syo8ocOkp9LyClYqvm9HvmIKJBx7SaDqeory08NN_h/s72-c/biometric+recognition+model+system.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-4305638912295310408</guid><pubDate>Fri, 28 May 2010 07:55:00 +0000</pubDate><atom:updated>2011-03-28T08:32:41.028-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">Backpropagation Neural Network</category><category domain="http://www.blogger.com/atom/ns#">biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">literature review</category><category domain="http://www.blogger.com/atom/ns#">Neural Network</category><title>LITERATURE REVIEW PART 3/3 - Artificial Neural Network</title><description>&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;2.4 Artificial Neural Network&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Artificial Neural Networks (ANNs) hold a large appeal for many AI researchers. A neural network can be defined as a model of reasoning based on the human brain. The brain consists of a densely interconnected set of nerve cells, or basic information-processing units, called neurons. The human brain incorporates nearly 10 billion neurons and 60 trillion connections, called synapses, between them [Shepherd, 1990]. By using many neurons simultaneously, the brain can perform its functions much faster than the fastest computers in existence today [Negnevitsky, 2002].&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
2.4.1 Architecture&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;A multilayer perceptron is a feed-forward neural network with one or more hidden layers. Typically, the network consists of an input layer of source neurons, at least one hidden layer of neurons, and an output layer of neurons (Figure 2.3). The input signals are propagated in a forward direction on a layer-by-layer basis. The backpropagation algorithm is perhaps the most popular and widely used neural paradigm. It is based on the generalized delta rule, proposed in 1985 by a research group headed by David Rumelhart at the University of California, San Diego, USA.&lt;/span&gt;&lt;br /&gt;
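The layer-by-layer forward propagation described above can be sketched in a few lines of NumPy. This is a minimal illustration only: the layer sizes, random weights, and function names are assumptions for the example, not the network configuration used in this work.

```python
import numpy as np

def sigmoid(z):
    # binary sigmoid activation, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights):
    """Propagate an input signal layer by layer in a forward direction."""
    a = x
    for W in weights:        # one weight matrix per layer of connections
        a = sigmoid(a @ W)   # each layer feeds only the next layer
    return a

rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)),  # input layer (3) -> hidden layer (4)
           rng.normal(size=(4, 2))]  # hidden layer (4) -> output layer (2)
y = forward(np.array([0.5, -1.0, 2.0]), weights)
```

Each weight matrix connects one layer only to the next, which is what makes the network feed-forward: signals never loop back.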
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhocvjlKc2vhxxOhRTfmGGltGxpq4BF5jzwfN2D7mJgwkDRdpoQvoFOJkJxjtJ4uVT494pgaeYTvEsUFVw68W0dE-7nEuYKpoVjjSCfTM0CKEU2e4Up5LtNDBz8rYshiT9Jr0IXnT85Kpuo/s1600/Neural+network.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;233&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhocvjlKc2vhxxOhRTfmGGltGxpq4BF5jzwfN2D7mJgwkDRdpoQvoFOJkJxjtJ4uVT494pgaeYTvEsUFVw68W0dE-7nEuYKpoVjjSCfTM0CKEU2e4Up5LtNDBz8rYshiT9Jr0IXnT85Kpuo/s400/Neural+network.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 2.3: Feed-forward Neural Network&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Before the network can be used, it requires target patterns or signals, since backpropagation is a supervised learning algorithm. Training patterns are obtained from samples of the types of inputs to be given to the backpropagation neural network, and their targets are identified by the researchers. The objective of the algorithm is to find the next value of each adaptation weight, a rule also known as the Generalized Delta Rule (G.D.R.).&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The hidden layer weights are adjusted using the errors from the subsequent layer. Thus, the errors computed at the output layer are used to adjust the weights between the last hidden layer and the output layer. Likewise, an error value computed from the last hidden layer outputs is used to adjust the weights in the next-to-last hidden layer, and so on, until the weight connections to the first hidden layer are adjusted. In this way, errors are propagated backwards layer by layer, with corrections being made to the corresponding layer weights in an iterative manner. The process is repeated for each pattern in the training set until the total error converges to a minimum or until some limit is reached in the number of training iterations completed [Patterson, 1999].&lt;/span&gt;&lt;br /&gt;
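The backward error flow just described can be sketched as a small batch gradient-descent loop. This is a generic textbook-style example on the XOR problem, with illustrative layer sizes, learning rate, and helper names; it is not the thesis implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def with_bias(M):
    # append a constant column so each layer has a bias weight
    return np.hstack([M, np.ones((M.shape[0], 1))])

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)
W1 = rng.normal(0, 1, (3, 4))   # input (+bias) -> hidden weights
W2 = rng.normal(0, 1, (5, 1))   # hidden (+bias) -> output weights
lr = 0.5

def mse():
    H = sigmoid(with_bias(X) @ W1)
    Y = sigmoid(with_bias(H) @ W2)
    return np.mean((T - Y) ** 2)

err_before = mse()
for _ in range(5000):
    H = sigmoid(with_bias(X) @ W1)        # forward pass, layer by layer
    Y = sigmoid(with_bias(H) @ W2)
    dY = (T - Y) * Y * (1 - Y)            # error computed at the output layer
    dH = (dY @ W2[:-1].T) * H * (1 - H)   # error propagated backwards to hidden layer
    W2 += lr * with_bias(H).T @ dY        # adjust last hidden -> output weights first
    W1 += lr * with_bias(X).T @ dH        # then input -> hidden weights
err_after = mse()
```

The two update lines mirror the text: the output-layer error `dY` adjusts the last weight matrix, and the same error, pushed back through `W2`, yields the hidden-layer error `dH` that adjusts the earlier weights.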
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;2.4.2 The Activation Function&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; The activation function has the characteristics of continuity, differentiability, and monotonic non-decreasing behaviour. Several activation functions are used in neural networks; the binary sigmoid and the bipolar sigmoid are the ones generally used in backpropagation training. The binary sigmoid has a normalized range between 0 and 1, while the bipolar sigmoid is normalized between -1 and +1.&lt;/span&gt;&lt;br /&gt;
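The two sigmoids mentioned above can be written out directly; this short sketch (function names are illustrative) shows their ranges and the close relationship between them.

```python
import numpy as np

def binary_sigmoid(z):
    # logistic function: continuous, differentiable, non-decreasing; range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def bipolar_sigmoid(z):
    # shifted and scaled logistic; range (-1, +1); equal to tanh(z / 2)
    return 2.0 * binary_sigmoid(z) - 1.0

z = np.linspace(-6.0, 6.0, 101)
b = binary_sigmoid(z)   # values stay strictly inside (0, 1)
p = bipolar_sigmoid(z)  # values stay strictly inside (-1, +1)
```

Both are smooth and monotonic, which is exactly what backpropagation needs: their derivatives exist everywhere and can be expressed in terms of the function values themselves.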
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
2.5 Summary&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; Face recognition is a challenging problem; many contributions and many variations of approaches have been made. This work focuses on using Principal Component Analysis (PCA) to extract patterns and a backpropagation neural network to recognize an unknown face image.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; This chapter has reviewed previous work that used PCA to extract patterns from face images. Since an image is also in matrix form, PCA, a proven method for reducing high-dimensional data, continues to be used in the search for better performance. Jacobi’s method is chosen for its reliability in finding a complete set of eigenvectors and eigenvalues compared with other methods. Hence, the combination of PCA and a neural network is chosen, as the backpropagation neural network is well suited to classification tasks.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; Therefore, the next chapter will discuss the methodology on how these chosen methods are applied in this work.&lt;/span&gt;</description><link>http://biometric-recognition.blogspot.com/2010/05/literature-review-part-33.html</link><author>noreply@blogger.com (Firdaus)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhocvjlKc2vhxxOhRTfmGGltGxpq4BF5jzwfN2D7mJgwkDRdpoQvoFOJkJxjtJ4uVT494pgaeYTvEsUFVw68W0dE-7nEuYKpoVjjSCfTM0CKEU2e4Up5LtNDBz8rYshiT9Jr0IXnT85Kpuo/s72-c/Neural+network.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-2548366844864620263</guid><pubDate>Fri, 28 May 2010 07:52:00 +0000</pubDate><atom:updated>2011-03-28T08:27:00.591-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Eigenvalue</category><category domain="http://www.blogger.com/atom/ns#">Eigenvector</category><category domain="http://www.blogger.com/atom/ns#">literature review</category><category domain="http://www.blogger.com/atom/ns#">Principal Component Analysis</category><title>LITERATURE REVIEW PART 2/3 - Principal Component Analysis Approach</title><description>&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;2.3 Principal Component Analysis Approach&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Principal Component Analysis (PCA) is also known as the Karhunen-Loeve transformation or eigenspace projection. It is a well known statistical technique for identifying patterns in data, highlighting similarities and differences between them. Since patterns can be hard to find in data of high dimension, where the luxury of graphical representation is not available, PCA is a powerful tool for extracting patterns. The other main advantage of PCA is that once the patterns are found, the data can be compressed by reducing the number of dimensions without losing much information [Smith, 2002]. &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;PCA is most easily illustrated with Figure 2.2(b), which shows data with a fairly strong pattern: the covariance between the two variables makes them increase together. On top of the graph, both eigenvectors of the covariance matrix are plotted, appearing as diagonal lines. As eigenvectors, they are perpendicular to each other. Most importantly, their values provide important information about the patterns in the data. The eigenvector drawn through the middle of the points shows the direction in which the data vary most, like a line of best fit; Figure 2.2(a) also illustrates how the two sets of data are related along this line. The second eigenvector is less important: it captures only the amount by which the points lie off to the side of the main line. Thus, each eigenvector of the covariance matrix can be represented as a line through the data.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1lka1m6RVt2Y2nsSHteG_vOl3pksPDxgd_0um1UTW2jVrna_JiLF_YdU1hJSTOP1RuL3yjHnBetkffG2INRfzzXHVkE7lYawa-9-55rcQ9-DCzDKl5ERuQu6mN6CQrSdCWAosHLZd_ZDJ/s1600/PCA.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;217&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1lka1m6RVt2Y2nsSHteG_vOl3pksPDxgd_0um1UTW2jVrna_JiLF_YdU1hJSTOP1RuL3yjHnBetkffG2INRfzzXHVkE7lYawa-9-55rcQ9-DCzDKl5ERuQu6mN6CQrSdCWAosHLZd_ZDJ/s400/PCA.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgI9UR7LkSLbRMI_T0CDOsk6pfkE1ibv5o6yptKbdbHTdVVnBmIuFyVVHK8lS-uIPzxmoQnwZpCoYSCfsfh6WVNtUP5PmhqEIGweiNZfC_WXulzxojMeJJA9ZCW7O-GEOtlA3FfhnbBHcKs/s1600/PCA2.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span 
class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;226&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgI9UR7LkSLbRMI_T0CDOsk6pfkE1ibv5o6yptKbdbHTdVVnBmIuFyVVHK8lS-uIPzxmoQnwZpCoYSCfsfh6WVNtUP5PmhqEIGweiNZfC_WXulzxojMeJJA9ZCW7O-GEOtlA3FfhnbBHcKs/s400/PCA2.jpg&quot; width=&quot;400&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 2.2: Example of PCA reconstruction [Smith, 2002]&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Once the eigenvectors are obtained, feature vectors can be formed by projecting the original data onto them. Figure 2.2(c) shows the data re-plotted using both eigenvectors; this plot is essentially a rotation of the original data onto new axes. The other transformation keeps only the eigenvectors with the largest eigenvalues; in this example, the data become a single dimension, Figure 2.2(d). This reduced set of data matches the corresponding dimension of the result obtained using both eigenvectors. Essentially, PCA transforms the original data so that it is expressed in terms of the patterns between the variables, where the patterns are the lines that most closely describe the relationships in the data. This is useful because each data point can then be written as a combination of those lines. The original x and y values did not state explicitly how each point related to the rest of the data. With the new values computed from both eigenvectors, the data are expressed relative to the new axes. By keeping only the eigenvectors associated with the highest eigenvalues, the contribution of the smaller eigenvector is removed, leaving the data expressed in terms of the dominant one [Smith, 2002].&lt;/span&gt;&lt;br /&gt;
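The projection-and-reduction steps described above can be sketched for a small 2-D dataset like the one in Figure 2.2. The data-generating parameters and variable names below are illustrative assumptions, not the figure's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 200)
# two correlated variables that increase together, as in the Figure 2.2 example
data = np.column_stack([x, 0.8 * x + rng.normal(0.0, 0.3, 200)])

mean = data.mean(axis=0)
centered = data - mean
cov = np.cov(centered, rowvar=False)       # 2 x 2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenpairs of the symmetric covariance
order = np.argsort(eigvals)[::-1]          # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# keep only the eigenvector with the largest eigenvalue: a 1-D feature vector
features = centered @ eigvecs[:, :1]
# rotating back shows how little information the dropped eigenvector carried
restored = features @ eigvecs[:, :1].T + mean
residual = np.mean((data - restored) ** 2)
```

The residual equals the variance along the discarded minor eigenvector, which is small precisely because the data lie close to the dominant line.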
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;2.3.1 Eigenvalues and Eigenvectors Problems&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; In the previous section, it was noted that eigenvectors and eigenvalues are the most important elements in performing PCA. The approach of using eigenvalues and eigenvectors is commonly found in linear transformations, where it enables a structural engineer to determine the stability of a structure, or a numerical analyst to establish the convergence of an iterative algorithm [Heath, 2002].&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Moreover, an eigenvector of a matrix determines a direction in which the effect of the matrix is particularly simple: the matrix expands or shrinks any vector lying in that direction by a scalar multiple, and the expansion or contraction factor is given by the corresponding eigenvalue, $\lambda$. Thus, eigenvalues and eigenvectors provide a means of understanding the complicated behaviour of a general transformation by decomposing it into simpler actions.&lt;/span&gt;&lt;br /&gt;
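This stretching property is easy to verify numerically; the matrix below is chosen purely for illustration.

```python
import numpy as np

# a small symmetric matrix; values chosen purely for illustration
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)

# along an eigenvector direction the matrix only stretches, never rotates
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```

Any vector not lying along an eigenvector is both stretched and rotated; only the eigenvector directions reduce the matrix to simple scalar multiplication.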
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Conventionally, an $n \times n$ matrix $A$ is said to have an eigenvector $x$ with corresponding eigenvalue $\lambda$ if;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $Ax =\lambda x$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (2.1)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;evidently, from equation (2.1);&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $(A-\lambda I)x = 0$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (2.2)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;if&amp;nbsp; $x\ne0$ ,&amp;nbsp;&amp;nbsp; then $\det (A-\lambda I)=0$, which when expanded out is an $n$th-degree polynomial in&amp;nbsp; $\lambda$ whose roots are the eigenvalues. This determinant is easy to solve only when the dimension of the matrix and its values are small [AB Rahman, 2002]. Hence, the proposed system implements Jacobi’s numerical method to resolve the problem.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Jacobi’s method is an easily understood and reliable [Heath M.T, 2002] algorithm for finding all eigenpairs of a symmetric matrix, and it produces uniformly accurate results [Demmel J., 1989].&amp;nbsp; A solution is guaranteed for all real symmetric matrices by this method. The restriction to symmetric matrices is not severe, since many practical problems of applied mathematics and engineering involve symmetric matrices. From a theoretical viewpoint, the method embodies techniques that are found in more sophisticated algorithms [Mathews J.H].&lt;/span&gt;&lt;br /&gt;
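The classical Jacobi iteration can be sketched as below. This is a generic textbook-style implementation (function name, tolerance, and largest-off-diagonal pivot strategy are illustrative assumptions, not the exact routine used in this work):

```python
import numpy as np

def jacobi_eigen(A, tol=1e-10, max_rotations=200):
    """All eigenpairs of a real symmetric matrix by classical Jacobi rotations."""
    D = np.array(A, dtype=float)           # D_0 = A
    n = D.shape[0]
    R = np.eye(n)                          # accumulates the product of rotations
    for _ in range(max_rotations):
        # pick the largest off-diagonal element as the next pivot
        off = np.abs(D - np.diag(np.diag(D)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:                # all off-diagonals ~ 0: D is diagonal
            break
        # rotation angle chosen to zero out D[p, q]
        theta = 0.5 * np.arctan2(2.0 * D[p, q], D[q, q] - D[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        D = J.T @ D @ J                    # orthogonal similarity transform
        R = R @ J
    return np.diag(D), R                   # eigenvalues, eigenvectors as columns
```

Each rotation zeroes one off-diagonal element; repeating until all off-diagonal entries are negligible leaves the eigenvalues on the diagonal and the accumulated rotations as the eigenvector matrix.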
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;2.3.2 The Jacobi Series of Transformations&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; Start with the real symmetric matrix $A$, then construct a sequence of orthogonal matrices $R_j$ as follows [Mathews J.H]:&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $D_0 = A, \ D_j = R&#39;_j D_{j-1} R_j \ \text{for}\ j = 1, 2, 3, \ldots$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (2.3)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The sequence $R_j$ is constructed so that:&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $\mathop{\lim}\limits_{j \to \infty} D_j = D = diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$ (2.4)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;In practice, the process stops when all off-diagonal elements are close to zero. Then&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $D_n \approx D$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (2.5)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The construction produces&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $D_n = R&#39;_n R&#39;_{n-1} \cdots R&#39;_1 A R_1 R_2 \cdots R_{n-1} R_n$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (2.6)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;If $R$ is defined by&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $R = R_1 R_2 \cdots R_{n-1} R_n$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (2.7)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Then $R^{-1}AR = D_n$, which implies that&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $AR = RD_n \approx R \, diag(\lambda_1, \lambda_2, \ldots, \lambda_n)$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (2.8)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Let the columns of $R$ be denoted by the vectors $X_1, X_2, \ldots, X_n$. Then $R$ can be expressed as a row vector of column vectors:&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $R = [X_1, X_2, \ldots, X_n]$&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (2.9)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The columns of the product in (2.8) now take on the form&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; $[AX_1\, AX_2 \ldots AX_n] \approx [\lambda_1X_1, \lambda_2X_2, \ldots, \lambda_nX_n]$&amp;nbsp;&amp;nbsp; (2.10)&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;From (2.9) and (2.10) the vector $X_j$, which is the $j$th column of $R$, is an eigenvector that corresponds to the eigenvalue $\lambda_j$.&lt;/span&gt;</description><link>http://biometric-recognition.blogspot.com/2010/05/literature-review-part-23.html</link><author>noreply@blogger.com (Firdaus)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1lka1m6RVt2Y2nsSHteG_vOl3pksPDxgd_0um1UTW2jVrna_JiLF_YdU1hJSTOP1RuL3yjHnBetkffG2INRfzzXHVkE7lYawa-9-55rcQ9-DCzDKl5ERuQu6mN6CQrSdCWAosHLZd_ZDJ/s72-c/PCA.jpg" height="72" width="72"/><thr:total>0</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-7108416235279938519</guid><pubDate>Fri, 28 May 2010 07:42:00 +0000</pubDate><atom:updated>2011-03-28T08:25:42.471-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">face recognition</category><category domain="http://www.blogger.com/atom/ns#">literature review</category><title>LITERATURE REVIEW PART 1/3 - Biometric Recognition of Human Face Background</title><description>&lt;div style=&quot;text-align: center;&quot;&gt;&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;CHAPTER 2&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;LITERATURE REVIEW&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;b&gt;2.1 Introduction&amp;nbsp;&lt;/b&gt;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;b&gt;Biometric recognition&lt;/b&gt; of the human face is a popular research topic in computer vision, motivated largely by commercial security systems. Although other &lt;b&gt;biometric recognition&lt;/b&gt; identification methods such as fingerprints and iris scans may be more accurate, &lt;b&gt;biometric recognition&lt;/b&gt; of the human face has remained a major research focus because it is noninvasive and is natural and intuitive to users.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;As &lt;b&gt;biometric recognition&lt;/b&gt; of the human face is an application of computer vision, it follows the standard methodology shown in Figure 2.1. In the preprocessing phase, unwanted noise and irrelevant data are eliminated from the image. Other preprocessing steps include spatial quantization (reducing the number of bits per pixel) and finding regions of interest. The second stage transforms the image data into another domain to extract the significant features. Lastly, the extracted features are examined and evaluated.&lt;/span&gt;&lt;br /&gt;
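The three-stage model just described can be sketched as a short pipeline. This is only a toy sketch assuming images are NumPy arrays; the stage bodies (intensity scaling, mean-centred pixels, nearest-neighbour evaluation) are hypothetical placeholders, not the system developed later in this work.

```python
import numpy as np

def preprocess(image):
    # Stage 1: remove unwanted data; here we just scale intensities to [0, 1].
    return image.astype(float) / 255.0

def extract_features(image):
    # Stage 2: transform to another domain; here mean-centred raw pixels
    # stand in for a real feature transform such as PCA.
    vec = image.flatten()
    return vec - vec.mean()

def classify(features, gallery):
    # Stage 3: examine and evaluate the features against each registered
    # person, returning the identity with the smallest Euclidean distance.
    return min(gallery, key=lambda name: np.linalg.norm(features - gallery[name]))

gallery = {"person_a": np.zeros(16), "person_b": np.ones(16)}
test_image = np.full((4, 4), 255, dtype=np.uint8)
print(classify(extract_features(preprocess(test_image)), gallery))  # person_a
```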
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixV9LrG7N8CKSbgCKDD_L3lM8nkGDpmz_2HSpoHnqDrNgZhYR1U3IRo_rwA9q2QmsJdYXReiTZPL5Hq-EWbl4gJpn5B3VJy54BSNrWUUklMwb8qW-BpZ82dxVNK3tah4UT2u5PDOAxwOEQ/s1600/Biometric+Recognition+model.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;img border=&quot;0&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixV9LrG7N8CKSbgCKDD_L3lM8nkGDpmz_2HSpoHnqDrNgZhYR1U3IRo_rwA9q2QmsJdYXReiTZPL5Hq-EWbl4gJpn5B3VJy54BSNrWUUklMwb8qW-BpZ82dxVNK3tah4UT2u5PDOAxwOEQ/s320/Biometric+Recognition+model.jpg&quot; /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: center;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Figure 2.1: Standard Image Analysis Model&lt;/span&gt;&lt;/div&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp; &amp;nbsp;An understanding of the processes involved in existing &lt;b&gt;face recognition &lt;/b&gt;systems gives clues to the construction of a biometric recognition system. This chapter therefore reviews the relevant literature on &lt;b&gt;face recognition&lt;/b&gt;, Principal Component Analysis (PCA) as a data reduction method, and the neural network approach to recognizing an unknown face image.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;b&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;2.2 Biometric Recognition of Human Face Background&lt;/span&gt;&lt;/b&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&amp;nbsp;&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The intuitive way to recognize a face is to extract its major features and compare them with the same features on other faces. Thus the majority of contributions in &lt;b&gt;biometric recognition&lt;/b&gt; of the human face focus on detecting prominent features such as the eyes, nose, mouth and head outline. Recognition is considered successful if the relationships among these features match.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;[Brunelli, 1993] used template matching for &lt;b&gt;biometric recognition&lt;/b&gt; of the human face. The algorithm prepares a set of four masks representing the eyes, nose, mouth and face of each registered person. To identify the unknown person in an image, the algorithm first detects the eyes using template matching and then normalizes the position, scale and rotation of the face in the image plane using the detected eye positions. Next, for each person in the database, the algorithm places that person&#39;s four masks at their positions relative to the eye positions and computes the cross-correlation values between the four masks and the image. The unknown person is classified as the person giving the greatest sum of the cross-correlation values of the four masks. The same basic idea was used by [Doi, 1998] to propose a biometric face identification system for automatic lock control; the difference is that [Doi, 1998] proposed a new template matching method that is robust to lighting fluctuation.&lt;/span&gt;&lt;br /&gt;
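The scoring rule above (sum the correlation of the four masks, greatest sum wins) can be sketched as follows. The function names, the use of normalized cross-correlation, and the toy data are illustrative assumptions, not Brunelli&#39;s exact implementation.

```python
import numpy as np

def ncc(patch, mask):
    # Normalized cross-correlation between an image patch and a mask.
    p = patch.flatten() - patch.mean()
    m = mask.flatten() - mask.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(m)
    return float(p.dot(m) / denom) if denom > 0 else 0.0

def identify(image_patches, database):
    # database maps a person's name to four masks (eyes, nose, mouth, face);
    # image_patches holds the matching regions cut from the test image after
    # eye-based normalization. The greatest summed correlation wins.
    scores = {}
    for name, masks in database.items():
        scores[name] = sum(ncc(p, m) for p, m in zip(image_patches, masks))
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
db = {"a": [rng.random((8, 8)) for _ in range(4)],
      "b": [rng.random((8, 8)) for _ in range(4)]}
print(identify(db["a"], db))  # patches identical to a's masks
```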
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;[Sato, 1998] used a neural network instead of template matching to recognize a face. In the neural network, output units correspond to registered persons and input units correspond to pixels of the input image. Sato et al. trained the network using three face templates per person. In the recognition phase, the network computes an output vector from each test image, and the unknown person is classified as the person corresponding to the output unit with the maximum value, provided that maximum is greater than a threshold.&lt;/span&gt;&lt;br /&gt;
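The decision rule described here (maximum output unit, accepted only above a threshold) can be written compactly; the names and the threshold value are assumptions for illustration.

```python
import numpy as np

def decide(output, names, threshold=0.5):
    # output holds one value per registered person (one output unit each).
    # Classify as the person whose unit is maximal, but only when that
    # maximum exceeds the threshold; otherwise reject the face as unknown.
    j = int(np.argmax(output))
    return names[j] if output[j] > threshold else "unknown"

names = ["alice", "bob", "carol"]
print(decide(np.array([0.1, 0.9, 0.3]), names))  # bob
print(decide(np.array([0.2, 0.4, 0.3]), names))  # unknown
```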
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;[Kawaguchi, 2000] proposed a new algorithm to detect the irises of both eyes of a human face in an intensity image. They implemented a separability filter and the Hough transform to measure the fit of a pair of blobs to the face image. The algorithm then selects the pair of blobs with the smallest cost as the irises of both eyes.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The first low-dimensional characterization of faces was developed by Kirby and Sirovich in 1987 and 1990. Turk in 1991 used the eigenspace method instead of template matching. This method constructs an eigenspace for each registered person using sample face images. In the recognition phase, the test image is projected onto the eigenspaces of all registered persons to compute the matching errors, and the unknown person is classified as the person whose eigenspace gives the smallest matching error. It was reported that the eigenspace method is relatively robust to variations in position, scale and pose of the face if the eigenspace of each person is constructed from face images with different positions, scales and poses.&lt;/span&gt;&lt;br /&gt;
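The matching error of the eigenspace method is a projection followed by a reconstruction. The sketch below illustrates this idea; the toy data, dimensions, and the use of SVD to build an orthonormal basis are assumptions for illustration only.

```python
import numpy as np

def matching_error(x, mean, basis):
    # Project x onto the eigenspace spanned by the columns of basis,
    # reconstruct it, and return the reconstruction (matching) error.
    centred = x - mean
    weights = basis.T.dot(centred)       # coordinates in the eigenspace
    reconstruction = basis.dot(weights)  # back-projection to image space
    return float(np.linalg.norm(centred - reconstruction))

# Build a toy per-person eigenspace from ten sample "images" of dimension 6.
rng = np.random.default_rng(1)
samples = rng.random((10, 6))
mean = samples.mean(axis=0)
u, s, vt = np.linalg.svd(samples - mean, full_matrices=False)
basis = vt[:3].T  # orthonormal basis from the top three components

print(matching_error(samples[0], mean, basis))
```

The unknown face would be assigned to the person whose eigenspace yields the smallest such error.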
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;These eigenfaces remain a topic of practical importance, with researchers seeking the best performance. [Yambor, 2000] analyzed PCA with automated eigenvector selection and studied combinations of traditional distance measures to improve performance in the matching stage of face recognition. [Moon, 2001] investigated PCA using the FERET database to examine eigenface performance under changing illumination, compression algorithms, varying numbers of eigenvectors and different similarity measures in the classification process. Automated eigenvector selection, or dimension reduction, is quite popular: [Chichizola, 2005] proposed a new algorithm known as Reduced Image Eigenfaces (RIE) to improve the recognition rate.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;This line of research continues. [Yang, 2000] demonstrated successful results in face recognition, detection and tracking by representing PCA in the second-order statistics of the face image. The eigenfaces approach was also used by [Watta, 2000] to analyse facial video data of subjects driving an automobile. [Aravind, 2002] combined eigenfaces with the preprocessing techniques of mean filtering, background elimination and local enhancement filtering, which showed a good recognition rate. [Lemieux, 2002] and [Ibrahim, 2004] also applied image processing operations such as segmentation, deskewing, zooming, rotation and warping to observe the capability of eigenfaces.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The capability of neural networks in pattern classification makes them a natural choice for &lt;b&gt;face recognition&lt;/b&gt; experiments. [Ahmad fadzil, 1994] developed a human face recognition system (HFRS) using a multilayer perceptron (MLP) artificial neural network, and [Debipersad, 1997] used a discrete cosine transform (DCT) and a neural network to recognize an unknown face. [Thomaz, 1998] combined eigenfaces with a Radial Basis Function (RBF) network as the classifier in a human face biometric recognition system. [Nazish, 2001] took the same approach but used a backpropagation neural network.&lt;/span&gt;</description><link>http://biometric-recognition.blogspot.com/2010/05/literature-review-part-13.html</link><author>noreply@blogger.com (Firdaus)</author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixV9LrG7N8CKSbgCKDD_L3lM8nkGDpmz_2HSpoHnqDrNgZhYR1U3IRo_rwA9q2QmsJdYXReiTZPL5Hq-EWbl4gJpn5B3VJy54BSNrWUUklMwb8qW-BpZ82dxVNK3tah4UT2u5PDOAxwOEQ/s72-c/Biometric+Recognition+model.jpg" height="72" width="72"/><thr:total>1</thr:total></item><item><guid isPermaLink="false">tag:blogger.com,1999:blog-4267025812143527462.post-878080260785812876</guid><pubDate>Wed, 28 Apr 2010 15:30:00 +0000</pubDate><atom:updated>2011-04-12T01:10:20.860-07:00</atom:updated><category domain="http://www.blogger.com/atom/ns#">biometric recognition</category><category domain="http://www.blogger.com/atom/ns#">Introduction Biometric Recognition</category><title>INTRODUCTION OF FACE BIOMETRIC RECOGNITION</title><description>&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Machine &lt;b&gt;biometric recognition&lt;/b&gt; of human faces is a challenging problem because of changes in face identity and variation between images of the same face due to 
illumination and viewing direction. The issues are which features should be adopted to represent a face under environmental changes, and how a new face image should be classified based on the chosen representations.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;Principal Component Analysis (PCA), also known as the eigenfaces method, is used in this research to extract a set of features from the faces. It is chosen for its capability to extract the relevant information from a high-dimensional matrix [Turk, 1991]. For the classification task, the Euclidean distance and a backpropagation neural network are chosen, since most researchers have claimed superior performance for both methods.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;A set of features from a face image is represented using eigenvalues and eigenvectors. To obtain the eigenvalues and eigenvectors, Jacobi&#8217;s method is used due to its accuracy and robustness. For the classification task, the backpropagation algorithm is applied.&lt;/span&gt;&lt;br /&gt;
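A minimal sketch of Jacobi&#39;s method for a symmetric matrix follows. This is an illustrative implementation using classical largest-off-diagonal pivoting, not necessarily the exact variant used in this research.

```python
import numpy as np

def jacobi_eig(a, tol=1e-10, max_iter=100):
    # Jacobi's method for a symmetric matrix: repeatedly apply a plane
    # rotation that zeroes the largest off-diagonal entry. The accumulated
    # rotations R satisfy AR = RD as in equation (2.8), so the columns of R
    # are eigenvectors and the diagonal of D holds the eigenvalues.
    a = np.array(a, dtype=float)
    n = a.shape[0]
    r = np.eye(n)
    for _ in range(max_iter):
        off = np.abs(a) - np.diag(np.diag(np.abs(a)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if tol > off[p, q]:  # off-diagonal part is numerically zero
            break
        theta = 0.5 * np.arctan2(2.0 * a[p, q], a[q, q] - a[p, p])
        g = np.eye(n)
        g[p, p] = g[q, q] = np.cos(theta)
        g[p, q] = np.sin(theta)
        g[q, p] = -np.sin(theta)
        a = g.T.dot(a).dot(g)  # similarity transform preserves eigenvalues
        r = r.dot(g)
    return np.diag(a), r

A = np.array([[4.0, 1.0], [1.0, 3.0]])
eigenvalues, R = jacobi_eig(A)
print(eigenvalues)
```

For a 2x2 example like this, a single rotation annihilates the off-diagonal entry exactly.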
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
&lt;a name=&#39;more&#39;&gt;&lt;/a&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;b&gt;1.2&amp;nbsp;&amp;nbsp;&amp;nbsp; Problem Statement&lt;/b&gt;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The human ability to recognize thousands of faces is remarkable. This skill persists even through large changes such as aging, hairstyle and expression. Moreover, environmental changes in lighting, distractions (glasses, facial scars) and changes in human skin color make developing a computational model of face &lt;b&gt;biometric recognition&lt;/b&gt; challenging. Thus, this research proposes methods for developing a human face &lt;b&gt;biometric recognition&lt;/b&gt; prototype.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;b&gt;1.3&amp;nbsp;&amp;nbsp;&amp;nbsp; Objective&lt;/b&gt;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The fundamental objectives of this work are:&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;a.&amp;nbsp;&amp;nbsp;&amp;nbsp; To study and implement the Principal Component Analysis (PCA) in order to extract the features of face images.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;b.&amp;nbsp;&amp;nbsp;&amp;nbsp; To study and implement the optimization technique in order to obtain eigenvalues and eigenvectors.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;c.&amp;nbsp;&amp;nbsp;&amp;nbsp; To study and implement the backpropagation neural network for classification.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;d.&amp;nbsp;&amp;nbsp;&amp;nbsp; To propose three (3) face &lt;b&gt;biometric recognition&lt;/b&gt; models based on PCA and backpropagation neural network.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;e.&amp;nbsp;&amp;nbsp;&amp;nbsp; To evaluate the performance of those models using known face database images downloaded via the internet.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;f.&amp;nbsp;&amp;nbsp;&amp;nbsp; To develop a prototype of human face &lt;b&gt;biometric recognition&lt;/b&gt; system.&amp;nbsp; &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;b&gt;1.4&amp;nbsp;&amp;nbsp;&amp;nbsp; Scope&lt;/b&gt;&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;The limitations of this research are described below:&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;a.&amp;nbsp;&amp;nbsp;&amp;nbsp; This research only considers grey-scale face images in portable grey map (PGM) format.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;b.&amp;nbsp;&amp;nbsp;&amp;nbsp; Since no camera support is available, the experiments are conducted on the ORL face database, downloaded from the internet.&lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;c.&amp;nbsp;&amp;nbsp;&amp;nbsp; Due to the superior capability of PCA, preprocessing is not required. &lt;/span&gt;&lt;br /&gt;
&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: Verdana, sans-serif;&quot;&gt;d.&amp;nbsp;&amp;nbsp;&amp;nbsp; Due to computational complexity and hardware requirements, only 15 persons are tested.&lt;/span&gt;</description><link>http://biometric-recognition.blogspot.com/2010/04/biometric-recognition.html</link><author>noreply@blogger.com (Firdaus)</author><thr:total>0</thr:total></item></channel></rss>