Optimization Viewpoints
http://www.redcedaru.com/blog
<strong>Series Hybrid vs. Parallel Hybrid</strong><br />
<em><span style="font-size: larger;"><span style="font-family: Verdana;">Hybrid </span></span></em><span style="font-size: larger;"><span style="font-family: Verdana;">refers to something that is made up of two or more diverse ingredients. The goal in combining them is to capture and merge the advantages of each ingredient, while overcoming any disadvantages. But ingredients can be combined in many ways, resulting in considerable variation in performance depending on how they are combined. <br />
<br />
In optimization search algorithms, as with electric vehicles, there are two main categories of hybrids: <em>series</em> and <em>parallel</em>. To better understand the basic differences between the series and parallel hybrid approaches, let’s consider a simple illustration. </span></span><br />
<br />
<em><span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" src="http://www.redcedartech.com/images/baton_border.jpg" alt="Passing baton" style="width: 297px; height: 201px;" /></span></span></em><span style="font-size: larger;"><span style="font-family: Verdana;">Suppose a team of people needs to carry an object a long distance. In a <em>series hybrid</em> strategy, one person will carry the object for a while, and then someone else will take the load and carry it a bit further. This “tag team” approach continues until the required distance is covered. This series approach might work well if the object is lightweight and small. But if the object is heavy or awkwardly shaped, then it will be difficult for one person to carry it even a short distance, if he or she can move it at all. <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" width="296" height="200" src="http://www.redcedartech.com/images/rowers_border.jpg" alt="Rowers" /></span></span><span style="font-size: larger;"><span style="font-family: Verdana;">In this situation, it is clearly more effective for two or more people to carry the object together in a <em>parallel hybrid</em> approach. By working together in a well-coordinated effort, the load can be shared in a way that allows each participant to contribute to the task. Each contribution, however small, reduces the load on other team members, allowing the group to carry the load faster and farther with less fatigue. Drivers of horse-drawn wagons, dog sleds and Christmas sleighs discovered this truth a long time ago. <br />
<br />
<strong>Series hybrid optimization</strong><br />
Turning our attention to optimization, a <em>series hybrid algorithm</em> is developed by starting with one search algorithm, and then switching to another one (using a different strategy than that of the first algorithm) to continue the search. There is no limit to the number of different search strategies that can be used in this sequential manner. <br />
<br />
Typically, a series hybrid algorithm begins with a search method that is good at global exploration, such as a Genetic Algorithm, and ends with a local refinement strategy, such as a gradient-based algorithm. Various other search methods can be sandwiched between these two. On some problems, this type of series optimization algorithm has been shown to perform reasonably well compared to monolithic (single-strategy) algorithms, when an appropriate set of algorithms and tuning parameters has been chosen. <br />
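The staged hand-off can be sketched in a few lines. This is a minimal illustration, not any vendor's algorithm: plain random sampling stands in for the Genetic Algorithm, a shrinking-step hill climb stands in for the gradient-based refinement, and the objective and all parameters are invented for the example.

```python
import random

def sphere(x):
    # Toy objective: minimum value 0 at the origin.
    return sum(v * v for v in x)

def series_hybrid(f, dim, bounds, n_global=200, n_local=200, seed=0):
    """Stage 1 explores globally, then hands its best point to stage 2
    for local refinement. Random sampling stands in for a Genetic
    Algorithm; a shrinking-step hill climb stands in for a
    gradient-based method."""
    rng = random.Random(seed)
    lo, hi = bounds

    # Stage 1: broad global exploration.
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_f = f(best)
    for _ in range(n_global):
        cand = [rng.uniform(lo, hi) for _ in range(dim)]
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc

    # Stage 2: local refinement starting from stage 1's best design.
    step = (hi - lo) * 0.1
    for _ in range(n_local):
        cand = [v + rng.gauss(0.0, step) for v in best]
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc
        else:
            step *= 0.95  # no improvement: tighten the local search

    return best, best_f

x, fx = series_hybrid(sphere, dim=3, bounds=(-5.0, 5.0))
```

Note that stage 2 only ever sees the single best point from stage 1; everything else stage 1 learned about the landscape is discarded at the hand-off, which is exactly the weakness discussed below.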
<br />
How well a series hybrid optimization strategy performs depends on the specific algorithms and tuning parameters used at each stage of the search. Because each algorithm is working alone, the progress made at any time depends on how effective the selected method is for that problem and what it does with the information provided by previous search methods. <br />
<br />
As I’ve mentioned in other posts, it is usually impossible to know which algorithms or values of tuning parameters will work well on a problem before it is solved. So, series hybrid algorithms have the same fatal flaw as most monolithic strategies, except the number of unknowns is now multiplied by the number of different strategies used. <br />
<br />
Moreover, additional unknowns are introduced, such as the order of the strategies and when to stop one strategy in favor of another. Default values for these parameters may or may not work well for your current problem. <br />
<br />
<strong>Parallel hybrid optimization</strong><br />
<em>Parallel hybrid algorithms</em>, like SHERPA (in <a target="_blank" href="http://www.redcedartech.com/products/heeds_mdo">HEEDS<sup>®</sup> MDO</a>), overcome many of the shortcomings of series hybrid algorithms. In this strategy, multiple optimization methods actually work simultaneously to solve a problem in a collaborative fashion. Rather than contributing sequentially, these methods work together to search a design space and identify optimized solutions, like many hands helping to carry a heavy load. <br />
<br />
As with any good team, a parallel hybrid algorithm requires good leadership, communication, coordination, and accountability. These attributes are built into the algorithm’s infrastructure from the start. <br />
<br />
Instead of separately exploring and refining at different stages of a search, a parallel hybrid algorithm enables these two essential activities to take place concurrently and synergistically! This not only speeds up the search but also makes it more likely to find the global optimum. <br />
<br />
In a series hybrid algorithm, the search history can be used to determine which individual algorithm(s) made the most meaningful contribution to the search. But this is not possible with a parallel hybrid algorithm, because each algorithm behaves very differently as part of a team than it would individually. <br />
<br />
Nevertheless, there are ways to hold an individual search strategy accountable for its contributions within a parallel hybrid algorithm, and those methods that do not contribute enough over time can be replaced by new methods or have their resources transferred to existing methods that are contributing at a higher level. <br />
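These ideas, concurrent exploration and refinement plus accountability, can be sketched in a toy form. This is purely illustrative, not the SHERPA algorithm: two strategies propose candidates into one shared search, and a simple credit counter shifts proposal budget toward whichever strategy has contributed more improvements.

```python
import math
import random

def rastrigin(x):
    # Multimodal toy objective with many local valleys; global minimum 0 at the origin.
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def parallel_hybrid(f, dim, bounds, cycles=300, seed=1):
    """Two strategies share one search: 'explore' samples globally while
    'refine' perturbs the current best point. A credit counter records
    which strategy produced each improvement, and proposal budget shifts
    toward the strategy earning more credit."""
    rng = random.Random(seed)
    lo, hi = bounds
    best = [rng.uniform(lo, hi) for _ in range(dim)]
    best_f = f(best)
    credit = {"explore": 1, "refine": 1}   # everyone starts with one vote

    for _ in range(cycles):
        # Strategies with more credit are chosen to propose more often.
        name = rng.choices(list(credit), weights=list(credit.values()))[0]
        if name == "explore":
            cand = [rng.uniform(lo, hi) for _ in range(dim)]
        else:
            cand = [v + rng.gauss(0.0, 0.2) for v in best]
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc
            credit[name] += 1              # accountability: reward the contributor
    return best, best_f, credit

best, best_f, credit = parallel_hybrid(rastrigin, dim=2, bounds=(-5.12, 5.12))
```

Because both strategies draw on the same shared best point, every global discovery immediately redirects the local refinement, and every refinement raises the bar that global candidates must beat.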
<br />
The characteristics of a well-designed parallel hybrid optimization algorithm include shared discovery, intellectual diversity, synergistic search, and greater robustness. Oh, and better designs, faster!</span></span><br />
<br />
<br /><em>Ron Averill, Wed, 18 Apr 2012</em><br />
<br />
<strong>Collaborative Optimization</strong><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" src="http://www.redcedartech.com/images/collabopt.png" style="width: 306px; height: 200px;" alt="Collaborative optimization" />Many engineers still resist the use of optimization algorithms to help improve their designs. Perhaps they feel that their hard-earned intuition is just too important to the solution process. In many cases, they are right. <br />
<br />
At the same time, most optimization algorithms still refuse to accept input from engineers to help guide their mathematical search. The assumption is that the human brain cannot possibly decipher complex relationships among multiple system responses that depend upon large numbers of connected variables. Unfortunately, this is true. <br />
<br />
Is this a case of irreconcilable differences? Or is it simply an example of everyone wanting to be the teacher, and no one wanting to be the student? I’m reminded of the Latin proverb: <br />
<br />
<blockquote>
<div><span style="color: rgb(51, 51, 51);">“By learning you will teach; by teaching you will learn.” </span></div>
</blockquote> Surely engineering intuition can benefit from the results of mathematical exploration, and vice-versa. It seems almost obvious. <br />
<br />
So why do engineers and optimization algorithms prefer to work solo? I don’t believe they prefer this. I think it is more a matter of not knowing how to collaborate, or not having the tools to facilitate this interaction. <br />
<br />
Fortunately, modern optimization software tools like HEEDS now have features that encourage engineers to learn from the intermediate results of an optimization study and to share intuition-based insights with the optimization algorithm <em>during a search</em>. This <em>collaborative optimization</em> process leverages our two most powerful design tools – human experience and computers. <br />
<br />
Consider the following example:<br />
<br />
<ul>
<li>An engineer uses intuition and experience to define the goals of an optimization problem and a baseline (starting) design.</li>
<li>An optimization search algorithm then begins to explore the design space to uncover mathematical relationships that can lead to an optimized design, all the while sharing its progress and discoveries with the engineer.</li>
<li>While monitoring, validating and interpreting these intermediate search results, the engineer starts to learn what makes some designs better than others. This new understanding causes the engineer’s intuition to practically blurt out, “If design B is better than design A, then design C should be even better!” Of course, the optimization algorithm might eventually discover design C on its own, but it would surely take a lot longer to do so.</li>
<li>The engineer shares his insight with the optimization algorithm, which happily accepts the input and puts it to use immediately. If the engineer was correct, then the algorithm now has new information that will accelerate its search. If the engineer’s intuition did not lead to a better design, then the algorithm has only spent one design evaluation to discover this, and the new data may still have some valuable nuggets that can be exploited later in the search.</li>
<li>The circular process of <em>exploring </em>→ <em>monitoring </em>→ <em>interpreting </em>→ <em>sharing </em>is continuous throughout the search process, leading to better designs in much less time than was previously possible.</li>
<li>The enhanced communication between the engineer and the optimization algorithm builds a strong interdependent relationship between the two, leveraging the strength of each.</li>
</ul>
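The loop above can be sketched with a toy search that accepts engineer-suggested designs mid-run. The class and method names here are hypothetical, and random sampling stands in for a real search algorithm; tools such as HEEDS expose this capability through their own interfaces.

```python
import random

class CollaborativeSearch:
    """Toy random search that accepts engineer-suggested designs mid-run.
    Illustrative sketch only: names and structure are invented."""

    def __init__(self, f, dim, bounds, seed=0):
        self.f = f
        self.dim = dim
        self.lo, self.hi = bounds
        self.rng = random.Random(seed)
        self.suggestions = []          # designs injected by the engineer
        self.best = None
        self.best_f = float("inf")

    def suggest(self, design):
        # The engineer shares an intuition-based candidate at any time.
        self.suggestions.append(list(design))

    def step(self):
        # Suggested designs are evaluated before new algorithm proposals.
        if self.suggestions:
            cand = self.suggestions.pop(0)
        else:
            cand = [self.rng.uniform(self.lo, self.hi) for _ in range(self.dim)]
        fc = self.f(cand)
        if fc < self.best_f:
            self.best, self.best_f = cand, fc
        return cand, fc

f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2   # invented objective
search = CollaborativeSearch(f, dim=2, bounds=(-10.0, 10.0))
for _ in range(20):
    search.step()                  # the algorithm explores on its own
search.suggest([3.0, -1.0])        # "design C should be even better!"
search.step()                      # costs exactly one evaluation to check
```

If the suggestion is good, the search adopts it immediately; if not, only one evaluation was spent, and the data point still enters the search history.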
Collaborative optimization tears down one of the most common objections that experienced engineers have to using optimization methods. It not only makes full use of an engineer’s intuition, but improves that intuition through experience gained from mathematical exploration of the design space. This is accelerated learning at its best.<br />
<br />
Further, this is not one of those nice ideas that looks good on paper but doesn’t work well in practice. This technique has already been used very successfully on many challenging design problems, including composite aircraft, crashworthy cars and insulated vaccine carriers. <br />
<br />
There is now overwhelming evidence that a more intimate coupling of intuition with a hybrid, adaptive optimization algorithm can solve many challenging problems previously thought to be intractable. <br />
<br />
Of course, in order to find the best solutions many optimization algorithms tend to explore a design space broadly, even spending some time in those regions of the space that don’t yield any good designs. So the process helps us to understand not only what makes a good design, but also why some designs perform poorly. This reminds me of another important proverb:<br />
<br />
<blockquote>
<div><span style="color: rgb(51, 51, 51);">“Wise men learn by other men's mistakes, fools by their own.”</span></div>
</blockquote> </span></span><br />
<em>Ron Averill, Mon, 19 Mar 2012</em><br />
<br />
<strong>Triathlon</strong><br />
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;"><img align="right" alt="Triathletes" src="http://www.redcedartech.com/images/cyclingsmall.png" />I am currently training to compete in my first sprint triathlon race. Well, <i style="mso-bidi-font-style:normal">compete</i> may be an exaggeration, and there won’t be much sprinting. But I do hope to cross the finish line before the sun sets. </span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">If you are unfamiliar with the sport, a sprint triathlon is a race with three components. Participants swim about one-half mile in a lake, then ride a bike about 12 miles along a marked road course, and finally run 3.1 miles to reach the finish line. </span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">To an athlete, this race sounds like a fun challenge. To an engineer, it is a fascinating multi-objective optimization problem.</span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">Clearly, the primary objective is to cross the finish line in the shortest time possible. But it’s still important to remember that the total race time is the <i style="mso-bidi-font-style:normal">sum</i> of the times necessary to complete three very different parts of the race – the swim, the bike and the run. </span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">In optimization, this is often referred to as a <i style="mso-bidi-font-style:normal">summed objective</i> problem. It is a good way to handle optimization problems containing multiple objectives that do not directly compete with one another. In other words, improving the value of one objective does not inevitably worsen another one. For example, swimming faster does not necessarily make you run slower, so these two objectives can naturally be summed together when seeking an optimal race strategy. </span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">However, non-competing objectives might still be strongly coupled. In a triathlon, if you spend too much energy swimming faster, then you won’t have enough energy remaining to bike or run at your best, resulting in a slower overall race time. Based on the athlete's level of skill and fitness, there is an ideal swimming pace that preserves just enough energy to perform optimally in the bike and run portions of the race. Similar arguments can be made about the effect of the bicycling pace on the run performance. </span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">So there is still a trade-off among the three objectives, but not enough to warrant treating the three objectives separately, as in a Pareto optimization. When the ultimate objective is pretty clear and the trade-offs are due more to interactions than competition, a summed objective approach is usually recommended. </span></span></span></p>
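A toy model makes the coupling concrete. All speeds and the shared energy budget below are invented constants; the point is only that minimizing the summed time drives the swim effort well below what a swim-only optimization would choose.

```python
def race_time(swim_effort, bike_effort, energy=1.0):
    """Total race time as a summed objective. Effort spent on one leg
    drains a shared energy budget; whatever remains powers the run.
    All speeds and the budget are invented constants."""
    run_effort = energy - swim_effort - bike_effort
    if run_effort <= 0.0:
        return float("inf")                  # nothing left for the run
    swim_time = 0.5 / swim_effort            # 0.5-mile swim
    bike_time = 12.0 / (20.0 * bike_effort)  # 12-mile ride
    run_time = 3.1 / (8.0 * run_effort)      # 3.1-mile run
    return swim_time + bike_time + run_time

# Brute-force the effort split on a coarse grid.
best = min(
    ((s / 100.0, b / 100.0) for s in range(1, 99) for b in range(1, 99)),
    key=lambda eb: race_time(*eb),
)
```

On this toy model the optimal split spends only about a third of the budget on the swim: pushing the swim harder improves that leg in isolation but makes the summed race time worse.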
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">Interactions also play a role in how we define the scope of our optimization problem. Due to lack of time or interest, many triathletes focus their training on just one or two of the triathlon events, paying only minimal attention to the other one(s). This short-sighted approach usually leads to poor overall performance in the race. As already noted, improving performance in one or two events does not guarantee any improvement at all in the overall race time. </span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">To find a truly optimal strategy, you must optimize the <i style="mso-bidi-font-style:normal">interactions</i> among the components. Any improvement to the parts must be considered in light of its contribution to the whole. </span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">The same is true for optimization problems within engineering, science and business. We often focus our attention on improving a part within a system, perhaps the part that is failing or is most expensive. In doing so, we ignore the interactions among the various parts in the system, and we severely limit the scope and the potential benefits of the optimization search process. </span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">Consider the common goals of reducing mass or cost. It may be that by <i style="mso-bidi-font-style:normal">adding</i> mass or cost to a given component (blasphemy!), we can then reduce the mass or cost in other components to achieve an overall system level improvement. Isn’t this the ultimate goal, after all? But when we focus on individual parts and ignore their interactions, these opportunities remain hidden. </span></span></span></p>
<p class="MsoNormal"><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;">So yes, I do plan to swim slowly during my triathlon race in order to improve my overall time. That’s my story, and I’m sticking to it. </span></span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><span style="line-height: 115%;"><br />
</span></span></span></p>
<em>Ron Averill, Mon, 05 Mar 2012</em><br />
<br />
<strong>Connecting the Dots</strong><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">There is little debate that Steve Jobs was one of the greatest innovators of our time. His curiosity and creativity are benchmarks for both individuals and companies. One of my favorite Jobs quotes appeared in Wired, February 1996:<br />
<br />
<blockquote>
<div><span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" alt="Connecting the Dots" src="http://www.redcedartech.com/images/connectdots.png" /></span></span><span style="color: rgb(51, 51, 51);">“Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things. And the reason they were able to do that was that they’ve had more experiences or they have thought more about their experiences than other people.”<br />
<br />
“Unfortunately, that’s too rare a commodity. A lot of people in our industry haven’t had very diverse experiences. So they don’t have enough dots to connect, and they end up with very linear solutions without a broad perspective on the problem. The broader one’s understanding of the human experience, the better design we will have.” </span></div>
</blockquote><br />
Steve Jobs was not referring to mathematical optimization when he made these statements, but it would be difficult to find better words than these to motivate people to use a global optimization search process. Let me explain why. <br />
<br />
Most global optimization search algorithms are referred to as “multi-point” methods. They combine information from multiple design evaluations, often in disparate parts of the design space, to decide which new points to evaluate next. The goal is to mix the useful elements of one design with the promising features of another design to generate an even better solution. <br />
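In a real-coded Genetic Algorithm, this mixing step is often a blend crossover. A minimal sketch follows; the parameter-wise blending shown is illustrative, not any specific tool's operator.

```python
import random

def crossover(parent_a, parent_b, rng):
    """Blend crossover: each child variable is a random mix of the two
    parents' values, merging features of one design with those of another."""
    child = []
    for a, b in zip(parent_a, parent_b):
        w = rng.random()               # fresh blend weight per variable
        child.append(w * a + (1.0 - w) * b)
    return child

rng = random.Random(7)
parent_a = [0.2, 3.0, -1.0]   # a promising design from one neighborhood
parent_b = [4.0, 2.5, 0.5]    # a diverse design from another
child = crossover(parent_a, parent_b, rng)
```

Each child variable lands between the corresponding parent values, so the child inherits a genuine combination of the two designs rather than a copy of either.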
<br />
When a global optimizer decides to evaluate a point far away from what seems to be a promising region of the design space, the purpose is to discover new designs and ideas that might improve existing or future designs. It’s a lesson Jobs learned many times during his career. Like when he chose to study calligraphy after dropping out of college, and then years later called upon that experience to develop the fonts that made the original Apple computer so appealing. <br />
<br />
As Jobs suggested, the more designs you’ve already evaluated and the more diverse these designs are, the greater the number of elements and features available to be connected. Even when a good design is already in hand, breaking through to the next level of performance may require a new idea from a different part of the design space. This is why as a local search is refining a design, you should continue to broadly explore the design space for that key idea – the one that, when connected with one of your existing designs, can cause real innovation to occur. <br />
<br />
When most of the designs in a multi-point search process start to look similar, the search stalls. There are no more new experiences available to connect. The broad perspective is lost. The search becomes more localized (“linear”), and the chance of making other than incremental improvement is slim. <br />
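One simple guard against this collapse is diversity-preserving selection. The greedy farthest-point sketch below is illustrative, not any particular algorithm's mechanism: it keeps a subset of candidates whose nearest neighbors are as far apart as possible.

```python
import math

def maximin_subset(points, k):
    """Greedy farthest-point selection: keep k candidates that stay
    spread out, so the pool retains 'dots' from distant neighborhoods."""
    chosen = [points[0]]                 # seed with the first candidate
    while len(chosen) < k:
        # Pick the point whose nearest chosen neighbor is farthest away.
        farthest = max(points, key=lambda p: min(math.dist(p, c) for c in chosen))
        chosen.append(farthest)
    return chosen

pool = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (0.0, 0.1), (5.0, 4.9)]
diverse = maximin_subset(pool, 2)
```

Given the clustered pool above, the selection skips the near-duplicates and keeps one representative from each distant cluster.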
<br />
For this reason, multi-point search methods take great care to maintain diversity in the pool of design candidates throughout the entire search process. This helps to ensure that the search will not get stuck in a local valley, and that there will be plenty of dots to connect during later search cycles. </span></span><br />
<em>Ron Averill, Mon, 20 Feb 2012</em><br />
<br />
<strong>How Much Is Enough?</strong><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">When it comes to money, people have different perspectives about how much it takes to be satisfied or feel rich. According to oil tycoon John D. Rockefeller, the answer is “<em>just a little bit more</em>.”<br />
<br />
In multi-disciplinary design optimization (MDO), a similar question comes up: “How many design evaluations are needed to converge on the optimal solution?” Unfortunately, as with money, there is no universal answer. The number of evaluations required depends on the problem you are trying to solve, your approach to solving it, and the starting point for the solution. Let’s unpack this a bit. <br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" alt="How Much Is Enough" src="http://www.redcedartech.com/images/howmuch2.png" /></span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"> When you define an optimization problem in terms of its objectives, constraints and variables, you are also defining a mathematical design space. Visually, this design space resembles a mountainous landscape. But, you don’t actually know what this landscape looks like, which is why you need an optimization algorithm to search it. And it’s difficult to estimate how many steps will be needed to find the lowest valley in a mountain range you’ve never seen. <br />
<br />
Further, each optimization algorithm uses its own unique strategy to search the design space. Its effectiveness and efficiency in performing the search depend, in large part, on how well the selected strategy works on the type of space you’ve defined. This is difficult to know ahead of time, because you may not know even some of the basic characteristics of your design space. <br />
<br />
Finally, for a given problem statement and search algorithm, the number of design iterations needed to find the optimal solution still depends on the solution starting point, often called the baseline design. It is usually better to start with a baseline design that is closer to the optimal design, but not always. Again, you don’t know where the optimal design is before you start. <br />
<br />
Considering all of these unknowns, it is nearly impossible to accurately predict how many evaluations it will take to find the optimal solution to a design problem. If you are like many people, this uncertainty might make you feel uncomfortable at first, or reduce your confidence in finding the solution you seek. In this case, you have two options. Either find a different process, or figure out how to manage the uncertainty. <br />
<br />
In an effort to find a process that makes you feel more in control, you may settle for an approach that leads to inferior solutions or requires more work in the long run. For example, you might choose to do a design of experiments (DOE) with a fixed number of samples instead of actually searching the design space. Or, you might decide to use a search algorithm that you understand well, but that may not be appropriate for the current problem. <br />
<br />
While these approaches might make you feel better initially, you have just traded one set of uncertainties for another, and perhaps significantly reduced the chances of finding a solution that achieves your true goal. <br />
<br />
It is much better, I think, to contend with the uncertainties associated with selecting a number of evaluations to use during optimization. This is not as difficult as it seems. <br />
<br />
Typically, the more evaluations you allow an algorithm to use during its search, the better its chance of finding the optimal solution. But, running a very large number of evaluations is seldom an option. Often there isn’t enough time, or computing resources, to perform the number of design evaluations you would need to find <em>the </em>optimal solution. In these cases, the goal is to reach the greatest possible design improvement within the available time, and the recommended number of iterations is <em>as many as you can afford to do</em>. The uncertainty is then virtually eliminated. Also, you are more likely to find a better solution by using a hybrid, adaptive search algorithm than by performing a fixed set of experiments. <br />
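A small sketch shows why a search that keeps its history can only benefit from extra budget: the best value over the first 1,000 evaluations can never be worse than the best over the first 10, because the larger budget simply extends the same stream. The objective and sampler are invented for illustration.

```python
import random

def best_after(f, samples, budget):
    """Best objective value found within the first `budget` evaluations."""
    return min(f(x) for x in samples[:budget])

rng = random.Random(42)
f = lambda x: (x - 0.7) ** 2          # invented 1-D objective
samples = [rng.uniform(0.0, 1.0) for _ in range(1000)]

few = best_after(f, samples, 10)      # a small budget
many = best_after(f, samples, 1000)   # same stream, bigger budget
```

This is also the logic behind restarting: doing “just a little bit more” extends the evaluation stream, and the best-so-far can only improve or stay the same.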
<br />
Additionally, there are rules of thumb that help us estimate the number of evaluations needed to find a “good” solution, if not the true optimal. These estimates generally depend on the number of design variables and can be refined if you do know some of the characteristics of your design space. The rules of thumb differ from algorithm to algorithm and are more reliable for those algorithms that work well on a broad range of problems. <br />
<br />
Modern optimization algorithms and software also allow you to restart a search from practically any point in a study. So, if you still have some time left and you want to improve the current solution, the search process can easily be restarted to do “<em>just a little bit more</em>” exploration.</span></span><br />
<em>Ron Averill, Tue, 07 Feb 2012</em><br />
<br />
<strong>Location! Location! Location!</strong><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><img hspace="24" height="199" align="right" width="300" vspace="24" alt="Brick row houses" src="http://www.redcedartech.com/images/locationlocation.png" />What are the three most important things to consider when buying a house? Location. Location. Location. The repetition intentionally over-emphasizes this point: if the location of your new real estate purchase is not good, then the details of the home don’t really matter. <br />
<br />
A spacious kitchen and updated bathrooms are nice, but even a perfect house located near the end of a busy airport runway will not bring a high value in the real estate market. For this reason, many savvy investors buy the worst house in a great neighborhood and then upgrade it to suit their tastes, rather than buying the best house in a mediocre neighborhood.<br />
<br />
So should you spend money to improve your current residence, or use those funds to move to a different neighborhood altogether? Clearly, the answer depends on the quality of your current location.<br />
<br />
Engineers face a similar dilemma when performing optimization. “Should we perform a local optimization to refine our existing design, or expand our search in hope of finding a more globally optimal solution?” As with real estate, the answer depends on the location of your existing design within the overall design space. <br />
<br />
A local optimum usually lies within the same neighborhood as the current design. The designs in this neighborhood have similar features and perform in a similar manner, but small differences in these features might have a big impact on the overall level of performance. <br />
<br />
On the other hand, a global optimum might be located in a different neighborhood of the design space. Not every design in this neighborhood is necessarily better than the current design, but a different layout or combination of features gives designs in this neighborhood much greater potential to perform at a higher level. The best design in this neighborhood will perform better than the best design in any other neighborhood. Even sub-optimal designs in this neighborhood are often superior to the best designs in surrounding areas. <br />
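A tiny example makes the neighborhood idea concrete. The landscape below is invented: it has a shallow local valley near the baseline and a deeper valley in a different neighborhood, and the very same local refinement reaches very different results depending on where it starts.

```python
def f(x):
    # Toy 1-D landscape with two 'neighborhoods': a shallow local valley
    # near x = 0 and a deeper global valley near x = 4.
    return min(x ** 2 + 1.0, (x - 4.0) ** 2)

def local_descent(f, x, step=0.1, iters=200):
    """Greedy local refinement: move left or right only when it helps."""
    for _ in range(iters):
        best = min((x - step, x, x + step), key=f)
        if best == x:
            break                      # no neighbor improves: stall
        x = best
    return x

x_local = local_descent(f, 0.5)    # refines within the baseline's valley
x_global = local_descent(f, 3.0)   # same refinement, better neighborhood
```

Starting from the baseline, refinement stalls at the bottom of the shallow valley; starting from a point found by broader exploration, the identical refinement uncovers the far better design.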
<br />
So unless you are really confident that fine tuning your current design will provide the design performance you seek, a global search is often the best way to achieve a high return on your optimization investment. <br />
<br />
The very best approach, of course, is to perform both global and local optimization at the same time. In this way, broader exploration helps you to find the right neighborhood, while local optimization helps to uncover the real potential of those “diamond in the rough” designs in new neighborhoods. <br />
<br />
Modern hybrid adaptive optimization algorithms are built upon this important concept of performing global and local optimization simultaneously, and on most problems these algorithms have proven to find superior designs faster than other methods. <br />
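To make the idea concrete, here is a minimal sketch of a hybrid search: random global sampling to locate promising neighborhoods, followed by a crude local pattern search to refine the best candidates. The objective function and all parameter values are invented for illustration; this is not any particular commercial algorithm.

```python
import math
import random

def f(x):
    # Toy multi-modal objective: many local valleys, global minimum near x = 0.
    return math.sin(3.0 * x) ** 2 + 0.05 * x * x

def local_refine(x, step=0.5, iters=60):
    """Crude local pattern search: move to a better neighbor, else shrink the step."""
    fx = f(x)
    for _ in range(iters):
        improved = False
        for cand in (x - step, x + step):
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return x, fx

def hybrid_search(lo=-10.0, hi=10.0, n_global=40, n_refine=3, seed=1):
    rng = random.Random(seed)
    # Global phase: sample widely, keep the few most promising designs.
    starts = sorted((rng.uniform(lo, hi) for _ in range(n_global)), key=f)[:n_refine]
    # Local phase: refine each promising design and keep the overall best.
    return min((local_refine(x) for x in starts), key=lambda r: r[1])

x_best, f_best = hybrid_search()
```

The point of the pairing: a purely local search started in the wrong valley stays there, while the global phase usually hands the local phase at least one start in a low-lying neighborhood.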
<br />
Just as the smartest real estate investor knows how to prosper in any real estate market, the savviest optimization algorithm knows how to explore both globally and locally to find the best design in the shortest time.</span></span><br />http://www.redcedaru.com/blog/location-location-location-01-24-2012#commentsTue, 24 Jan 2012 14:20:48 +0000Ron Averill63 at http://www.redcedaru.comhttp://www.redcedaru.com/blog/location-location-location-01-24-2012Forgotten Reasons
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/PMI5_RWTzX0/forgotten-reasons-09-27-2011
<span style="font-size: larger;"><span style="font-family: Verdana;"><img vspace="6" hspace="6" align="right" alt="Grandma Cooking" src="http://www.redcedartech.com/images/grandmacooking.png" />You may have heard the story about the woman who always sliced about one inch off the end of a large roast before placing it in the pan to be cooked. When asked why she did this, she did not know the reason. But she was sure that it was important, because her mother always did exactly the same thing. <br />
<br />
Now curious, the woman called her mother to ask why it is important to cut off the end of a roast before cooking it. Her mother did not know the reason, but she was confident that it was important, because her mother always did exactly the same thing.<br />
<br />
A phone call to the woman’s grandmother finally revealed the true reason why two successive generations of cooks always cut off the end of a roast before cooking it. The grandmother explained, “A long time ago, the only size of roast available at the local store was too large to fit in my pan. So I had to cut a bit off the end in order to cook it. I haven’t had to do that in years!”<br />
<br />
I’ll bet the entire family had a good laugh about this situation. Many years ago, there was a really good reason to cut off the end of the roast, but that reason didn’t exist anymore. Yet that step in the process was passed down to future generations, as though it were crucial to the success of the meal. <br />
<br />
There are probably many situations in which the limitations of one era give rise to a process that continues to be used long after those limitations are gone. Often the facts become blurred and the philosophical reasoning becomes stronger, so no one questions whether the process is still valid. A paradigm is created that is not easily broken. <br />
<br />
For example, it is still common today to perform a sensitivity study prior to numerical optimization. This process has existed for several decades and has been passed down through generations of engineers. The goal of a sensitivity study is to identify the variables that have the most influence on the performance of the system being optimized. The standard reasoning is that these variables are the ones that should be used during the optimization study, while the others can be neglected. Let’s explore the soundness of this process.<br />
<br />
Twenty years ago, computing resources were somewhat limited, so often only a small number of design evaluations could be performed in the time available for a design study. Simulation models were not very robust, so only minor changes to a model could be considered without significant manual intervention. Consistent with these restrictions, most design optimization algorithms relied on sensitivity derivatives and were capable of finding incremental improvements within a small neighborhood around the initial (baseline) design. This small neighborhood represented the design space to be searched.<br />
<br />
For many problems, the sensitivity derivatives are not expected to change very much within this small neighborhood, so the gradients associated with the baseline design are a reasonable approximation of the gradients within the entire design space. In this scenario, selecting important variables based on sensitivity gradients is a clever, and mathematically consistent, thing to do. <br />
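As a sketch of that screening step, the snippet below estimates sensitivity derivatives at a baseline design by forward finite differences and ranks the variables by derivative magnitude. The response function, baseline values, and step size are all made up for illustration.

```python
def response(x):
    # Hypothetical system response in three design variables.
    return 4.0 * x[0] + 0.5 * x[1] ** 2 + 0.1 * x[2]

def sensitivities(f, baseline, h=1e-6):
    """Forward-difference estimate of df/dx_i at the baseline design."""
    f0 = f(baseline)
    grads = []
    for i in range(len(baseline)):
        bumped = list(baseline)
        bumped[i] += h
        grads.append((f(bumped) - f0) / h)
    return grads

baseline = [1.0, 1.0, 1.0]
grads = sensitivities(response, baseline)
# Rank variables from most to least influential at this one design point.
ranking = sorted(range(len(grads)), key=lambda i: -abs(grads[i]))
```

At this baseline the third variable looks negligible, which is exactly the information a screening study uses to drop it.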
<br />
Today, however, our computing technology and optimization software have improved dramatically, allowing us to significantly broaden the scope of design optimization studies. The designs we seek may not necessarily be near the baseline solution. In fact, we often hope to find designs that are innovative, with properties that can be quite different from those of our baseline design. To accomplish this, we use global optimization methods that explore larger design spaces than before, with more variables and larger variable ranges. <br />
<br />
In a large design space that is highly nonlinear and perhaps even multi-modal (i.e., with many peaks and valleys), the derivatives at any single point, such as the baseline design, have no relevance to the rest of the design space. In other words, the values and ranking of the sensitivity derivatives might change dramatically from one design point to another. <br />
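This reversal is easy to demonstrate numerically. In the contrived two-variable response below, the first variable dominates the sensitivities at one design point while the second dominates at another, so a ranking computed at the baseline says nothing about the rest of the space.

```python
import math

def g(x, y):
    # Contrived multi-modal response; which variable matters depends on location.
    return math.sin(5.0 * x) + 10.0 * math.cos(y)

def grad(f, x, y, h=1e-6):
    """Forward-difference partial derivatives of f at (x, y)."""
    f0 = f(x, y)
    return (f(x + h, y) - f0) / h, (f(x, y + h) - f0) / h

gx_a, gy_a = grad(g, 0.0, 0.0)                       # here x dominates
gx_b, gy_b = grad(g, math.pi / 10.0, math.pi / 2.0)  # here y dominates
```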
<br />
In this case, it is mathematically inconsistent to eliminate variables based on the sensitivity derivatives of the baseline design. That’s a nice way of saying “it’s wrong, so don’t do it.” <br />
<br />
Some of the eliminated variables might be needed to get to the region where the truly optimal design lies, so if you screen out variables based on sensitivity derivatives, you’ll often get sub-optimal designs. You have essentially forbidden the search algorithm from looking in some of the most fruitful regions of the design space. <br />
<br />
In many cases, calculating sensitivity derivatives up front is a whole lot of wasted work that leads to inferior designs. Worst of all, it is often done for reasons that no longer exist. </span></span><br />http://www.redcedaru.com/blog/forgotten-reasons-09-27-2011#commentsTue, 27 Sep 2011 18:24:40 +0000Ron Averill62 at http://www.redcedaru.comhttp://www.redcedaru.com/blog/forgotten-reasons-09-27-2011A Brief History of Optimization
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/Cfs7V5ruxWY/a-brief-history-of-optimization-09-14-2011
<span style="font-size: larger;"><span style="font-family: Verdana;"><img width="285" height="200" align="right" alt="Light Bulbs" src="http://www.redcedartech.com/images/lightbulbs.png" />When Thomas Edison developed the first long-lasting, high-quality light bulb in 1879, his successful design was the result of a lengthy and laborious <em>trial-and-error </em>search for the best filament material, a process we now call the <em>Edisonian </em>approach. <br />
<br />
While Edison had no fundamental knowledge of how various materials resist electrical current, today’s engineers are often armed with greater technical knowledge and experience about their domain. This allows them to create initial designs based on intuition before testing the designs to failure. Designs can then be incrementally improved to correct the flaws observed during testing, through what we call the <em>make-it-and-break-it</em> method. <br />
<br />
Advances in computing power and in computer-aided engineering (CAE) software now make it possible to create <em>virtual prototypes</em> of potential designs prior to building and testing expensive physical prototypes. This reduces the cost and time required to perform each design iteration and provides greater understanding of how a design performs.<br />
<br />
However, the potential of the virtual prototyping approach is still limited by two main factors. First, each iteration requires an engineer to <em>manually </em>create or modify a computer model of the system. Despite ever improving software, this is still a cumbersome, error-prone and time-consuming process. <br />
<br />
Second, the success of this approach still relies heavily on the limitations of human <em>intuition and experience</em>. No matter how brilliant the design team is, the human mind often cannot predict or comprehend the effects of changing multiple variables at the same time in a very complex system. This profound barrier, coupled with time constraints, severely limits the number and types of design iterations that get performed. The result is too often a sub-optimal solution that is an artifact of the team’s collective experience.<br />
<br />
The desire to increase productivity led naturally to automation of the design iteration process. Process automation software was introduced that could capture and automatically execute the typical manual process used to build and test a virtual prototype. Thus, each new design iteration could be performed more quickly, and without the usual drudgery and fear of manual errors. <br />
<br />
It soon became obvious that exploration of the design space could also be automated by adding a smart “do-loop” around a design evaluation process, and an instant market was created for all of the classical methods of optimization and design of experiments. <br />
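A minimal sketch of that “do-loop” idea: a driver that repeatedly perturbs a design, calls an automated evaluation step, and keeps improvements. Here evaluate_design is a made-up stand-in for the build-and-simulate process.

```python
import random

def evaluate_design(params):
    # Stand-in for an automated virtual-prototype evaluation; a real version
    # would update the model, run the simulation, and extract a response.
    x, y = params
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

def search_loop(n_iters=200, seed=0):
    rng = random.Random(seed)
    best = [0.0, 0.0]  # baseline design
    best_score = evaluate_design(best)
    for _ in range(n_iters):
        # Propose a perturbed design, evaluate it, keep it only if it improves.
        candidate = [p + rng.gauss(0.0, 0.3) for p in best]
        score = evaluate_design(candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

best_design, best_score = search_loop()
```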
<br />
With the promise of reducing design time and cost while improving product quality, automated design optimization held tremendous potential. Starting with a sub-optimal design, a numerical optimization algorithm could be used to iteratively adjust a set of pre-selected design parameters in an attempt to achieve a set of design targets. <br />
<br />
But the promise of this approach was not easily realized. It was quickly discovered that classical numerical optimization algorithms have significant limitations when applied to a wide range of commercial design problems. Many algorithms are applicable only to certain types of design variables, or when the number of design variables is small. They may produce smaller improvements than could be attained, or the end result may depend on the starting design. And those methods that have broader search capability are often very inefficient. Moreover, selecting the right algorithm for a problem and setting its tuning parameters turned out to be a complex research problem in itself, usually requiring an iterative process not too dissimilar from the Edisonian approach described above. <br />
<br />
As a result, the practical application of design automation tools based on classical optimization technology has most often resulted in only incremental improvements at best, a small benefit compared to the promise of the technology. Many optimization tools in use today still fall into this category.<br />
<br />
Fortunately, with advanced hybrid and adaptive design search algorithms available, the capability now exists to efficiently explore a much larger and more complex design space. These methods take full advantage of powerful, inexpensive computers and networks to modify virtual system models, while intelligently searching for optimal values of design parameters that affect product performance and cost. <br />
<br />
This means that designers can consider a larger number of design variables, each with a wider range, providing two key benefits: First, designers don’t have to waste time simplifying the definition of a problem into a form that a traditional optimization algorithm can handle. Second, and most importantly, a wider initial definition of the design problem dramatically increases the chance of discovering a significantly better design, perhaps even a new design concept that is outside the initial intuition of the engineering team. <br />
<br />
This new class of optimization technology enables broader, more comprehensive and faster searches for innovative designs than was possible using previous generations of tools. Moreover, it requires no expertise in optimization theory, so it is easier for non-experts and experts alike to use. By leveraging an engineer’s potential to discover new design concepts, it overcomes the limits of human intuition and extends the designer’s professional capability to achieve breakthrough designs and accelerated innovation.</span></span><br />http://www.redcedaru.com/blog/a-brief-history-of-optimization-09-14-2011#commentsWed, 14 Sep 2011 20:35:30 +0000Ron Averill61 at http://www.redcedaru.comhttp://www.redcedaru.com/blog/a-brief-history-of-optimization-09-14-2011Mind versus Machine
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/3iRpMLRxywQ/mind-versus-machine-07-19-2011
<span style="font-size: larger;"><span style="font-family: Verdana;">In 1997, IBM’s Deep Blue computer won a six-game chess match against world champion Garry Kasparov. More recently, IBM’s Watson supercomputer defeated two of the greatest <em>Jeopardy </em>champions of all time during a three-day competition. <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" alt="Mind versus Machine" src="http://www.redcedartech.com/images/mindvsmachine.png" /></span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Despite these impressive exhibitions, machines cannot think like humans – yet. For example, given different sets of information, humans have a natural ability to detect patterns and perceive order. Computers, on the other hand, must be told what types of pattern structures to look for. <br />
<br />
But Deep Blue and Watson have demonstrated that when the rules of the game are well defined and the desired patterns are known, humans can program computers to solve these problems better or faster than our own minds can solve them.<br />
<br />
Another example is engineering design optimization, where mathematical computer algorithms are being used to optimize engineered systems and components. In this context, the roles of mind and machine are still being debated. Unlike in chess or <em>Jeopardy</em>, no one has proposed a suitable competition to determine whether machines can outperform humans on design optimization tasks. <br />
<br />
In my opinion, it is not a question of mind versus machine. The more important issue is how we can best leverage mind plus machine. While there is some overlap in their skill sets, the most powerful capabilities of the human mind complement those of computers. <br />
<br />
Let’s consider two different processes for designing a structural component. We’ll call them A and B. In each case, we will iterate on the geometry of the part to find a design that minimizes total mass while maintaining the stresses below a certain limit. We’ll use a finite element model to estimate the stress distribution for each design.<br />
<br />
During each iteration of Process A, we use a graphical post-processor to visualize the distribution of stresses throughout the entire part. Relying on the human mind, we make design modifications based on intuition and knowledge about how stress flows. Process A is a typical manual design process, guided throughout by human intuition and knowledge of the problem.<br />
<br />
In Process B, only the value of the maximum stress is available for each design. Relationships between shape changes and stress must be inferred solely on the basis of this single output value, since we do not know the distribution of stress throughout. Process B is a typical automated multi-disciplinary design optimization (MDO) search process, in which the mathematical MDO algorithm conducting the search has no intuition or knowledge about the physics of the problem. <br />
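Process B can be mimicked with a black-box search that sees only scalar outputs per design. In this sketch, the search minimizes mass subject to a stress limit; max_stress is a made-up closed-form stand-in for the finite element result, and every value is invented for illustration.

```python
import random

STRESS_LIMIT = 250.0  # hypothetical allowable stress

def max_stress(thickness):
    # Made-up stand-in for the FE result: thinner sections are more stressed.
    return 500.0 / thickness

def mass(thickness):
    return 7.8 * thickness  # made-up density-times-area factor

def process_b_search(lo=1.0, hi=10.0, n_designs=500, seed=3):
    """Random search that sees only the scalars (mass, max stress) per design."""
    rng = random.Random(seed)
    best_t, best_mass = None, float("inf")
    for _ in range(n_designs):
        t = rng.uniform(lo, hi)
        # No stress distribution is available, only the two scalar outputs.
        if max_stress(t) <= STRESS_LIMIT and mass(t) < best_mass:
            best_t, best_mass = t, mass(t)
    return best_t, best_mass

best_t, best_mass = process_b_search()
```

In this toy problem the analytic optimum is a thickness of exactly 2.0; a search that sees only scalar feedback still closes in on it, it simply needs more evaluations than an engineer with full-field stress plots might.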
<br />
When a problem is relatively simple, or similar to one that has been solved before, the human mind is often faster at finding a desirable solution than a computer algorithm would be. This is especially true when a full-field graphical solution is available to provide feedback during each design step. Our intuition and understanding of the problem are very valuable in these cases. <br />
<br />
On the other hand, if a human attempted to solve even a simple problem using Process B, the lack of information about each design would make it extremely difficult to suggest useful design changes that meet the goals. A computer-based MDO process can usually solve this problem very effectively, even if it does require more design iterations than might be needed for Process A. <br />
<br />
As the problem becomes more complex, the human mind has a diminishing ability to process the relationships between a large number of variables and several conflicting design criteria. In contrast, computer algorithms can search for patterns and complex relationships within very large data sets. <br />
<br />
Greater complexity also makes it more difficult to define an appropriate design problem and to interpret the results of a design study. It is here that human knowledge, experience and intuition play their most important roles.<br />
<br />
So trying to achieve an entirely automated design process is a foolish objective. Computer algorithms still only do what they are taught to do, and the human mind has way too much to offer in terms of creative insight – at any level of complexity. Rather than aiming for an entirely automated process, we should seek ways to creatively combine mathematical search and human intuition in a hybrid strategy that leverages the strongest attributes of each tool. <br />
<br />
No computer can tell us how to design such a process. </span></span><br />http://www.redcedaru.com/blog/mind-versus-machine-07-19-2011#commentsTue, 19 Jul 2011 19:46:06 +0000Ron Averill60 at http://www.redcedaru.comhttp://www.redcedaru.com/blog/mind-versus-machine-07-19-2011Basic Principles of a New Optimization Paradigm
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/tTsnMvO-KHc/basic-principles-of-a-new-optimization-paradigm-03-15-2011
<span style="font-size: larger;"><span style="font-family: Verdana;">Change is often viewed as the result of a scientific discovery or the development of a breakthrough technology. But there's usually a lot more to the story. To quote Paul Saffo: <em>technology doesn't drive change, it enables change</em>. </span></span>
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" alt="Chain Saw" src="http://www.redcedartech.com/images/chainsawsmall.jpg" />When the chainsaw was first introduced, I wonder how many lumberjacks tried dragging it back and forth against a tree, expecting it to work the same way as a hand saw. Figuring out the best use of a new technology is just as important, and sometimes as difficult, as developing the technology in the first place. </span></span>
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Too often, the real value of a new technology becomes enslaved by old notions about how things work or what is possible. To realize the full advantages of a new technology, we need to look at things differently and accept new possibilities.</span></span>
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Consider the way design optimization was performed about twenty years ago. Computing power was limited, computer-aided engineering (CAE) models were not very accurate or robust, and optimization algorithms were restricted to simple problems. For most real-world commercial applications, it was virtually impossible to search a broad or complicated design space. So, given those circumstances, a reasonable approach to optimization was established:</span></span>
<br /><br />
<ul>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">First, a "good" baseline design was developed using lots of manual iterations. Optimization was considered a finishing tool, not an exploration tool.</span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Second, a design of experiments (DOE) study was performed to find out which parameters have the greatest influence on the baseline design, a process referred to as variable screening. Even though these sensitivity derivatives are only accurate in a small neighborhood around the baseline design, they can be valid across the design space when the optimization study is limited to a very small range.</span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Finally, using only a few of the most important variables, identified during the screening study, a local optimization study was performed over a narrow range of the parameter space. Often, this study used a simple response surface or a gradient-based search algorithm because the number of variables and the variable ranges were very small.</span></span></li>
</ul>
<span style="font-size: larger;"><span style="font-family: Verdana;">This process usually resulted in only an incremental improvement in the design, which was acceptable considering the available technology.</span></span>
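The screening step of that old workflow can be sketched as a two-level full-factorial study that estimates each variable's main effect around the baseline. The response function, perturbation range, and coefficients below are invented for illustration.

```python
from itertools import product

def response(a, b, c):
    # Hypothetical performance measure in three design variables.
    return 3.0 * a + 0.2 * b + 25.0 * c

half_range = 0.5  # each variable is perturbed +/- this amount around a zero baseline
runs = {}
for levels in product((-1, 1), repeat=3):
    point = [lvl * half_range for lvl in levels]
    runs[levels] = response(*point)

def main_effect(i):
    """Average response at variable i's high level minus its low level."""
    hi = [v for k, v in runs.items() if k[i] == 1]
    lo = [v for k, v in runs.items() if k[i] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = [main_effect(i) for i in range(3)]
# Keep only the most influential variables for the local optimization study.
screened = sorted(range(3), key=lambda i: -abs(effects[i]))[:2]
```

The second variable is dropped from the subsequent local study, which is exactly where the old workflow could go wrong if the design space were later widened.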
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Now, fast-forward to the present. Computing power is virtually scalable. CAE models are accurate and reliable most of the time. And, optimization algorithms can effectively search large and complex design spaces. Yet many engineers and scientists still perform design optimization the way it was done twenty years ago, when severe limitations on computing, modeling and search algorithms ruled the day.</span></span>
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">In essence, we have substituted modern computers, models and algorithms into our old school workflow. While we have occasionally seen some improvements, the results have mostly been disappointing.</span></span>
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Emboldened by the power of new technology, we define design spaces that are broader and contain more variables. But, in doing so, we often violate some of the assumptions that made our old school workflow valid, and we introduce inefficiencies into the process.</span></span>
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Complicating the situation further, we have introduced a long and growing list of optimization algorithms to address a host of different problem types. Even optimization experts can't figure out the best method to use for a given problem.</span></span>
<br /><br />
<span style="font-size: larger; "><span style="font-family: Verdana;">It's no surprise why many engineers are convinced that optimization doesn't work. They are dragging that chainsaw back and forth against the tree, expecting it to work better than a handsaw!</span></span>
<br /><br />
<span style="font-size: larger; "><span style="font-family: Verdana;">I'm not saying that we should avoid larger design spaces with more variables. This is an important part of finding better, and even innovative, solutions. And advanced optimization strategies are certainly important for solving today's challenging problems. But we won't get consistently satisfactory results until we rethink how we use modern optimization technology.</span></span>
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Recently, a new paradigm has emerged for design optimization - one that is enabled by game-changing discoveries in optimization search technology and that leverages ongoing advances in computing power and virtual prototyping. This paradigm is free of the constraints imposed by previous technology and is based on a set of principles that allow a more natural flow of thought and effort:</span></span>
<br /><br />
<ol>
<li><strong><span style="font-size: larger;"><span style="font-family: Verdana;">Start with a good concept, not necessarily a good design.</span></span></strong><span style="font-size: larger;"><span style="font-family: Verdana;"> (Let the optimizer do the work of searching for good designs.)</span></span></li>
<li><strong><span style="font-size: larger;"><span style="font-family: Verdana;">Optimize early and often.</span></span></strong><span style="font-size: larger;"><span style="font-family: Verdana;"> (Not just at the end of the design cycle or after all other means have been exhausted.)</span></span></li>
<li><strong><span style="font-size: larger;"><span style="font-family: Verdana;">Define the design problem you need to solve.</span></span></strong><span style="font-size: larger;"><span style="font-family: Verdana;"> (Not the one that can be solved by a certain optimization strategy.)</span></span></li>
<li><strong><span style="font-size: larger;"><span style="font-family: Verdana;">Optimize the system interactions.</span></span></strong><span style="font-size: larger;"><span style="font-family: Verdana;"> (Not the components.)</span></span></li>
<li><strong><span style="font-size: larger;"><span style="font-family: Verdana;">Let the optimization algorithm figure out how to search the design space.</span></span></strong><span style="font-size: larger;"><span style="font-family: Verdana;"> (There's often no way to guess ahead of time which search method and tuning parameters will work best.)</span></span></li>
<li><strong><span style="font-size: larger;"><span style="font-family: Verdana;">Don't perform optimization using models of your models.</span></span></strong><span style="font-size: larger;"><span style="font-family: Verdana;"> (Response surface or surrogate models often increase effort and error.)</span></span></li>
<li><strong><span style="font-size: larger;"><span style="font-family: Verdana;">Be an engaged participant in the optimization search.</span></span></strong><span style="font-size: larger;"><span style="font-family: Verdana;"> (Leverage your knowledge and intuition during a collaborative search process.)</span></span></li>
<li><strong><span style="font-size: larger;"><span style="font-family: Verdana;">Care about the sensitivities of your final design.</span></span></strong><span style="font-size: larger;"><span style="font-family: Verdana;"> (Not those of your initial guess, which often have no bearing on the final design.)</span></span></li>
</ol>
<span style="font-size: larger;"><span style="font-family: Verdana;">A design optimization process based on these principles is simpler and consistently yields better solutions in less time. Implementing this process requires solid engineering skills, but you don't need to be an expert in optimization theory.</span></span>
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Design optimization software built around these principles is also simpler to use, because the workflow for defining, solving and post-processing an optimization study is cleaner, with fewer steps and tricky decisions to make.</span></span>
<br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Over the next several weeks, I will provide more detail about each of the above principles. In the meantime, be careful with that chainsaw!</span></span>http://www.redcedaru.com/blog/basic-principles-of-a-new-optimization-paradigm-03-15-2011#commentsTue, 15 Mar 2011 19:24:19 +0000Ron Averill59 at http://www.redcedaru.comhttp://www.redcedaru.com/blog/basic-principles-of-a-new-optimization-paradigm-03-15-2011Be Water
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/rh_cBC61aGw/be-water-02-08-2011
<blockquote>
<div><span style="font-size: larger;"><span style="font-family: Verdana;">“Be formless, shapeless, like water. Now, you put water into a cup, it becomes the cup. Put it into a teapot, it becomes the teapot. Now, water can flow, or creep, or drip, or crash. Be water, my friend.” - Bruce Lee</span></span></div>
</blockquote><br />
<span style="font-family: Verdana;"><span style="font-size: larger;"><img hspace="12" height="216" width="319" vspace="12" align="right" alt="Falling water" src="http://www.redcedartech.com/images/fallingwatersm.png" />When martial artist Bruce Lee offered the above advice about being adaptable, I doubt that he was referring to mathematical optimization. But these words of wisdom are certainly relevant to optimization algorithms. <br />
</span></span><br />
<span style="font-family: Verdana;"><span style="font-size: larger;">Searching a design space is a lot like navigating a mountain range, and we know that no two mountain ranges are alike. </span></span><br />
<span style="font-family: Verdana;"><span style="font-size: larger;"><br />
Even while traveling within a given range, you are likely to encounter several different types of terrain – smooth and rolling in some areas, rocky and rugged in other areas. Many design spaces are like this, as well. </span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Yet most optimization search algorithms use a fixed strategy for every problem, even though it is often impossible to predict the characteristics of a newly defined design space. </span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Selecting an optimization algorithm that uses a single strategy is like packing climbing tools and supplies for a single type of terrain without knowing what type of terrain you will actually encounter or how long the climb will take. Such poor preparation on the part of a mountain climber would seem foolish, but many optimization studies are based on equally bad planning. </span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">A wise mountain climber prepares for the type of terrain he expects to encounter, but also carries a variety of tools in case he needs to adapt to other types of landscapes. </span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">How do you plan properly when you don’t know what situation you are planning for? And how can an optimization algorithm effectively search a design space when its landscape characteristics are unknown? </span></span><br />
<br />
<span style="font-family: Verdana;"><span style="font-size: larger;">The answer is very simple. “Be water, my friend.”</span></span><br />http://www.redcedaru.com/blog/be-water-02-08-2011#commentsTue, 08 Feb 2011 20:47:04 +0000Ron Averill58 at http://www.redcedaru.comhttp://www.redcedaru.com/blog/be-water-02-08-2011Using Knowledge Smarter
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/JH29TYvbvOU/using-knowledge-smarter-02-01-2011
<blockquote>
<div><span style="font-size: larger;">“The world is changing very fast. Big will not beat small anymore. It will be the fast beating the slow.” – Rupert Murdoch</span></div>
</blockquote><br /><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" alt="Runner" src="http://www.redcedartech.com/images/runner_small.png" />When computer aided engineering (CAE) analysis techniques, like the finite element method, were first introduced, their primary role was to investigate why a design failed. Surely, this understanding would help designers avoid such failures in the future. <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">But soon, manufacturing companies realized that it was smarter to use CAE tools to predict whether a design would fail, before manufacturing. This gave designers the chance to make changes to designs and avoid most failures in the first place. This pass/fail test is still in place at many companies, in the form of scheduled iterations of computer aided design (CAD) drawings followed by CAE simulations. </span><br />
</span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Often, companies decide on a fixed number of manual CAD/CAE design iterations ahead of time. I’ve long wondered how project managers know exactly how many iterations it will take to arrive at the best design. Naturally, they haven’t factored the last-minute redesign “fire drills” and the disorganized patchwork of final design changes into that preset number of design iterations.</span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Despite overwhelming evidence that a CAE-led design process usually yields <em>better solutions in less time</em>, CAD designers are still driving the iteration process at many organizations. There, CAE results are used just to verify the intuitions of the CAD designers or to suggest regions where the designers might consider making changes. <br />
</span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">In many cases, the minor role of CAE is made clear by the level of detail included in each set of CAD models. Often, even the very first CAD model contains a complete spec of every fillet, bolt and washer, including the heat treatment requirements for each material used. Sometimes, even the tolerance stacks are already computed! </span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><br />
Then, the first thing a CAE engineer must do is <em>defeature </em>the CAD model, or remove all of the geometric details that should not be included in an effective CAE math model. And, when the CAE simulations suggest the need for design changes, a large portion of the detailed CAD modeling effort is suddenly made obsolete. <br />
</span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Additionally, while the CAE models are being built, it is likely that the CAD team has already embarked on the next version of the design, making even the CAE simulation results obsolete before they are completed. In organizations with a CAD-driven design process, it is not surprising that the ratio of CAD designers to CAE analysts is much higher than it needs to be. <br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
If our goal is to accelerate the invention of higher performing and more robust designs, then we must draw upon the full power of CAE technology to <em>lead </em>the design process. </span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">CAD models should be built to <em>support </em>the CAE process, not the other way around. Design directions should be determined by knowledge-creating CAE simulations, not an intuition-limited CAD process. The number of design iterations should be governed by the problem complexity, not an ad hoc decision now residing in a best practices document. </span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><br />
Moreover, today’s CAE-based automated optimization technology allows hundreds of design iterations to be studied in less time than most companies spend on two or three manual CAD/CAE iterations. These automated iterations are <span style="font-family: Verdana;">guided by intelligent, mathematical optimization algorithms that are not limited by human intuition or brain capacity. <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Better solutions faster, using knowledge smarter. This is what media magnate Rupert Murdoch had in mind when he said, “<em>The world is changing very fast. Big will not beat small anymore. It will be the fast beating the slow</em>.”</span></span><br />
<br />
</span>
http://www.redcedaru.com/blog/using-knowledge-smarter-02-01-2011#comments
Tue, 01 Feb 2011 16:25:10 +0000
Ron Averill
http://www.redcedaru.com/blog/using-knowledge-smarter-02-01-2011

By Tuesday
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/efvHA8jJpM8/by-tuesday-01-24-2011
<span style="font-family: Verdana; "><span style="font-size: larger; ">The goal of a mathematical optimization study is to find <em>the </em>optimal solution to a problem. And, when the problem at hand is simple enough to solve within the available time, we can achieve this goal consistently. <br />
</span><span style="font-size: larger; "><br />
<img align="right" alt="Tuesday" src="http://www.redcedartech.com/images/tuesday.png" />However, often we don’t have enough time or computing resources to carry out the number of design evaluations that would be needed to find <em>the </em>optimal solution. In these cases, we have no choice but to relax our goal and to seek the greatest possible design improvement within the available time. </span><span style="font-size: larger; "><span style="font-family: Verdana; "><br />
<br />
</span><span style="font-family: Verdana; "><span style="font-size: larger; ">A more direct statement of this would be, “Give me the best design you can find by Tuesday!”</span><span style="font-size: larger; "><br />
<br />
</span><span style="font-family: Verdana; "><span style="font-size: larger; ">It’s not always obvious, but achieving this new goal actually requires a different set of tools. Our standard optimization algorithms and operating procedures will not be very effective in this case. </span><span style="font-size: larger; "><br />
<br />
</span><span style="font-family: Verdana; "><span style="font-size: larger; ">Though we need an algorithm that will search very efficiently, there is clearly no time to try multiple algorithms in hope of finding one that is well suited to the current problem. Nor is there time to fiddle with the tuning parameters of a given algorithm to help it perform better on the problem at hand. </span><span style="font-size: larger; "><br />
<br />
</span><span style="font-family: Verdana; "><span style="font-size: larger; ">Moreover, most algorithms work exactly the same way regardless of how many design evaluations they are allowed to perform. They have a fixed strategy for every problem, and when the allowable number of evaluations is reached, they stop. This means that, even if an algorithm is well suited to find <em>the </em>optimal solution for our problem type, it may not be effective at finding an improved solution by our deadline. </span><span style="font-size: larger; "><br />
<br />
</span><span style="font-family: Verdana; "><span style="font-size: larger; ">In this scenario, we need an optimization algorithm that learns about a problem as it searches, one that adapts its search strategy based on both the current design space and the available design evaluations, one that delivers the greatest possible design improvement using whatever level of resources are available, and one that does all of this “by Tuesday.”</span></span><br />
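As a toy illustration only (this is a hand-rolled sketch, not any particular commercial algorithm), here is one way a search can be made budget-aware: the step size it uses depends on how much of the evaluation budget remains, so it explores broadly early and refines locally as “Tuesday” approaches. The objective function and all names below are made up for the example.

```python
import random

def budget_aware_search(f, x0, budget, step0=1.0, seed=1):
    """Hill climb whose step size shrinks as the evaluation budget runs out."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    for k in range(budget):
        # wide, exploratory moves early; fine, local moves near the deadline
        step = step0 * (1.0 - k / budget) + 1e-3
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx:          # keep only improvements
            x, fx = cand, fc
    return x, fx

# made-up objective with its single minimum at x = 3
best_x, best_f = budget_aware_search(lambda x: (x - 3.0) ** 2, x0=0.0, budget=200)
print(best_x, best_f)
```

The point of the sketch is only that the budget is an input to the strategy itself, not just a stopping criterion.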
</span></span></span></span></span></span>
http://www.redcedaru.com/blog/by-tuesday-01-24-2011#comments
Mon, 24 Jan 2011 16:37:15 +0000
Ron Averill
http://www.redcedaru.com/blog/by-tuesday-01-24-2011

One Thing at a Time
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/zJ-2-Z_BUDA/one-thing-at-a-time-01-17-2011
<span style="font-size: larger;"><span style="font-family: Verdana;">Most of us have heard the advice, “Change only one variable at a time to understand how that variable affects your system.” <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" alt="One green fish" src="http://www.redcedartech.com/images/fish_one_at_a_time_small.png" />Sometimes this advice is correct, but only in a very local sense. For example, if we want to estimate how sensitive a system is to a change in variable A, then we can hold all other variables constant and change variable A very slightly. The change in the system response divided by the change in variable A is an estimate of the sensitivity derivative at the original design point. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
</span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">But, the key word in the previous sentence is “point,” because <em>derivatives are defined at a point</em>. If we select a different starting design point, and then repeat the above exercise, we would expect to get a different value for the sensitivity derivative. </span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Let’s examine this idea further. If we hold variable B constant at value B1, changing variable A will have a certain influence on the system response. But if we hold variable B constant at value B2, the effect of variable A might be very different than before. If so, then the effect of variable A depends on the value of variable B. When this occurs, we say that there is an interaction between variables A and B. We can easily generalize this argument to many variables.</span></span><br />
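This argument can be made concrete with finite differences. In the sketch below, f is a made-up two-variable response with a deliberate A–B interaction term (nothing here comes from the post); the helpers estimate the main effect of A and the A–B interaction (mixed derivative) at a point.

```python
def f(a, b):
    # toy response with a deliberate A-B interaction term (the 2*a*b)
    return a ** 2 + 3 * b + 2 * a * b

def main_effect_a(f, a, b, h=1e-5):
    # df/da at the point (a, b): hold b fixed, perturb a very slightly
    return (f(a + h, b) - f(a - h, b)) / (2 * h)

def interaction_ab(f, a, b, h=1e-4):
    # mixed derivative d2f/(da db) via a central-difference stencil
    return (f(a + h, b + h) - f(a + h, b - h)
            - f(a - h, b + h) + f(a - h, b - h)) / (4 * h * h)

# The main effect of A depends on where B is held constant -- the
# signature of an interaction:
print(main_effect_a(f, 1.0, 0.0))   # ~2.0  when B is held at 0
print(main_effect_a(f, 1.0, 5.0))   # ~12.0 when B is held at 5
print(interaction_ab(f, 1.0, 1.0))  # ~2.0, the coefficient of the a*b term
```

Because the main effect of A changes from roughly 2 to roughly 12 as B moves, filtering out variables based on one point’s sensitivities can discard exactly the variables that matter elsewhere.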
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">This concept has several implications for an optimization study. First, consider the fairly common practice of calculating sensitivity derivatives at the baseline design point prior to performing optimization. Typically, the goal of doing this is to filter out those variables that seem to have little effect on the design, so only the most important variables are considered in the optimization study.</span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">This may seem like a good idea, but in fact it is very risky. While a certain group of variables may have a dominant influence within a small neighborhood around the baseline design, other design variables may be needed to guide the search to a truly optimal design outside of that neighborhood. Ignoring these other variables will lead to suboptimal solutions, which can be very costly in terms of unattained design improvement. Unfortunately, there is no way to know this ahead of time. <br />
</span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Unless we really are seeking only incremental changes to a design, the practice of filtering design variables prior to optimization seems both ill-advised and wasteful. After all, we really don’t care about the sensitivity derivatives of the baseline design, and the significant effort required to calculate them is probably better spent on the optimization search.<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
Second, once we have arrived at an optimized solution, we do need to recalculate the sensitivity derivatives for that new design point. The results from a previous design may be completely unrelated to those of the new optimized design. <br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
Finally, it is important to consider the interaction terms (mixed derivatives) in addition to the main effects (derivatives with respect to a single variable) when calculating sensitivity derivatives. It is possible for the main effect of a variable to be relatively small, while its interaction with another variable can be large. In this case, ignoring the interaction effect could lead to an incorrect conclusion about the robustness of a design.<br />
</span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;">So while it may not be prudent to include every imaginable design variable in every optimization study, we should also be careful not to filter out important design variables based on sensitivities at the baseline design. And, it is certainly unwise to limit the number of design variables so as to miss out on important interactions.</span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">As for the advice to change “one thing at a time,” it was probably relevant when computer resources were more limited and optimization studies were very local in nature. Today, we can add this advice to the growing list of restrictions that have been made obsolete by the power and speed of modern computers. </span></span><br />
http://www.redcedaru.com/blog/one-thing-at-a-time-01-17-2011#comments
Mon, 17 Jan 2011 21:57:41 +0000
Ron Averill
http://www.redcedaru.com/blog/one-thing-at-a-time-01-17-2011

All In
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/tjpBS1b_3Bg/all-in-01-11-2011
<span style="font-size: larger;"><span style="font-family: Verdana;">In poker, a player declares “all in” when he decides to bet all of his remaining chips on the cards in his hand. He then waits nervously while the remaining cards are dealt, knowing that he will soon either win big or lose all of his chips (“go bust”). <br />
</span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><img hspace="18" vspace="18" align="right" alt="All In" src="http://www.redcedartech.com/images/allinpokersmall.png" /></span></span><span style="font-size: larger;"><span style="font-family: Verdana;">A similar gamble occurs when you apply some optimization approaches based on Design of Experiments (DOE) concepts. In this case, the actual objective function being minimized is evaluated at a predetermined set of design points. Then, a simple approximation of the objective function is developed by fitting an analytical function to these points. This approximate function is often called a <em>response surface</em> (or a <em>surrogate function</em>). The optimization search is then performed on the response surface, because evaluations of this simpler function are usually much quicker than evaluations of the actual objective function. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
However, by defining all of your design evaluations ahead of time (going “all in”), you are risking that the corresponding response surface may not accurately represent the true objective function. If the surface fit is not accurate enough, then searching the response surface may not really give you the optimal design. In fact, it is common for an inaccurate response surface to completely mislead the optimization search, resulting in a very poor solution. So, while an accurate response surface could yield an optimized solution at lower cost than some other optimization approaches, a poorly fit surface may yield no useful results at all (you’ll “go bust”).</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">To determine the actual quality of your solution, you must evaluate the objective function at the point suggested by the response surface search. And, to find out if that point is really optimal, you may have to search more thoroughly by evaluating some additional neighboring designs.</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
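A minimal sketch of this workflow, using a made-up one-variable objective (nothing here comes from the post): evaluate a fixed set of points up front, fit a quadratic response surface by least squares, take the surrogate’s minimizer in closed form, and then check the true objective at the suggested point.

```python
import math

def true_objective(x):
    # expensive-simulation stand-in; the search never sees this formula
    return (x - 1.7) ** 2 + 0.3 * math.sin(5 * x)

def fit_quadratic(xs, ys):
    """Least-squares fit y ~ c0 + c1*x + c2*x^2 via the 3x3 normal equations."""
    S = [sum(x ** k for x in xs) for k in range(5)]
    A = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    b = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    for i in range(3):                     # forward elimination (no pivoting;
        for j in range(i + 1, 3):          # fine for this well-scaled demo)
            m = A[j][i] / A[i][i]
            A[j] = [a - m * p for a, p in zip(A[j], A[i])]
            b[j] -= m * b[i]
    c = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                    # back substitution
        c[i] = (b[i] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c

# "all in": the design points are fixed up front, spread evenly over [0, 3]
xs = [0.5 * i for i in range(7)]
ys = [true_objective(x) for x in xs]
c0, c1, c2 = fit_quadratic(xs, ys)

# the surrogate's minimizer is available in closed form...
x_star = -c1 / (2.0 * c2)
# ...but the solution's real quality must be checked on the TRUE objective
print(x_star, true_objective(x_star))
```

Here the quadratic happens to fit the underlying bowl reasonably well; on a more rippled or multimodal landscape the same fixed sample could place x_star far from any true optimum, which is exactly the “going bust” risk the post describes.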
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Here is the source of the risk. For a given objective function, you cannot predict whether fitting an assumed approximation function to a particular set of design points will produce an accurate response surface. This is because there is no reliable way to know ahead of time the shape and characteristics of the true (implicit) objective function. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"> <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Moreover, because you don’t know where the optimal solution is located in your design space, you must spread the predetermined design evaluation points somewhat evenly throughout the space. This maximizes the chances of representing each part of the space equally well in the response surface. At the same time, it practically guarantees that you are wasting a lot of design evaluations in regions of the design space that you don’t care about – where the solutions are suboptimal. The percentage of wasted evaluations is much greater in complex design spaces, for which high-order response surface functions must be used and more design evaluations are needed. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Within a sub-region of the design space, it takes many more design evaluations to fit an accurate surface than it does to figure out if that region should be explored further or ignored altogether. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">This is one reason iterative optimization methods are preferred for most problems. These methods “learn” as they go, and focus future design point evaluations in regions of the design space that have a higher chance of yielding better designs. Generally, this is a more efficient and smarter use of overall resources, especially when the design space is complex.</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">An “all in” scheme described above is often recommended when there are time and resource constraints on a design process. Yet, these are the precise conditions under which this scheme holds the highest risk. <br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
In most cases, an efficient iterative approach can significantly increase your chances of finding optimized solutions within a given time constraint, while minimizing your risk of “going bust.”<br />
</span></span><br />
http://www.redcedaru.com/blog/all-in-01-11-2011#comments
Tue, 11 Jan 2011 16:15:53 +0000
Ron Averill
http://www.redcedaru.com/blog/all-in-01-11-2011

The "Multi" in Multidisciplinary
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/v-EhXZQwlJA/the-multi-in-multidisciplinary-01-04-2011
<em><span style="font-size: larger;"><span style="font-family: Verdana;">Multi </span></span></em><span style="font-size: larger;"><span style="font-family: Verdana;">means “many” or “multiple.” Multidisciplinary design optimization (MDO) has become popular largely because it allows engineers to optimize over many different disciplines at the same time. <br />
</span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" alt="Swiss army knife" src="http://www.redcedartech.com/images/multitool.jpg" /></span></span><span style="font-size: larger;"><span style="font-family: Verdana;">For example, you can use MDO to simultaneously optimize a vehicle body for structural, aerodynamic, thermal and acoustic behaviors. In addition, you can directly include non-performance measures, such as cost and manufacturability, in the optimization statement. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">The key to accommodating so many disciplines is that MDO search methods are <em>discipline independent</em>. That is, they are not related to the physics of the problem in any way. Instead, MDO methods are either math-based or heuristic strategies that are used to search an unknown design space. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
Sometimes there is the misconception that MDO methods should only be used for problems that involve many disciplines. But, in fact, MDO methods are often the best approach for design problems that consider only a single discipline. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">This confusion can be avoided by taking a slightly different view. MDO methods can be applied to many different disciplines, whether one at a time, or several simultaneously. Because they are discipline independent, MDO methods can’t even distinguish between applications that involve a single discipline and those that are truly multidisciplinary. In this case, perhaps it’s true that ignorance is bliss. </span></span><br />
<br />
http://www.redcedaru.com/blog/the-multi-in-multidisciplinary-01-04-2011#comments
Tue, 04 Jan 2011 15:30:48 +0000
Ron Averill
http://www.redcedaru.com/blog/the-multi-in-multidisciplinary-01-04-2011

Black Box Optimization
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/lct3ZdJAtkU/black-box-optimization-12-14-2010
<span style="font-size: larger;"><span style="font-family: Verdana;">Engineers and scientists like to know how things work. They seem to be born with an inner drive to understand the fundamental nature of things. So, naturally, they may have some reservations about using an algorithm if the way it functions is not clear. <br />
<img align="right" alt="Black box" src="http://www.redcedartech.com/images/black%20box.jpg" /><br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">When we can’t see the details about how something works, we often refer to it as a <em>black box</em>. Input goes in and output comes out, without any knowledge of its internal workings. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><em><span style="font-size: larger;"><span style="font-family: Verdana;">Black box</span></span></em><span style="font-size: larger;"><span style="font-family: Verdana;"> sometimes has a negative connotation, because knowing how something works is usually a good thing. But if we evaluate the idea of a black box, we find that many common processes and tools – including the human brain – actually fall into this category. <br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
For example, most users of the finite element method have some basic knowledge of its underlying mathematical theory. But many of the element types available in commercial software packages are based on advanced formulations that few users completely understand. These advanced formulations are necessary to overcome deficiencies in the element behavior, and users can apply them accurately without knowing all the mathematical formalities. There are many similar examples in computational mechanics. <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">If we use an algorithm without knowing all the details about how it works, then we are using it like a black box, whether the details are available to us or not. We must know at least enough about an algorithm to be able to use it properly and effectively, but knowing more than this is generally unnecessary. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">To take this one step further, in mathematical optimization we often use a nested set of black box algorithms. At the center of the optimization process, the function evaluations have inputs (design variables) and outputs (objectives and constraints). We cannot easily view the complicated relationships, or the transfer functions, between these inputs and outputs, so we call these <em>implicit</em>, or black box, functions. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
Wrapped around these implicit function evaluations is an optimization algorithm, which determines the design variable values to use as inputs during the next function evaluation. Sometimes we know exactly how the algorithm works, so it is not a surprise when the optimization process behaves in a certain way. However, classical optimization algorithms are often unable to solve the types of challenging design problems that we face today, so a new generation of search algorithms has been developed to address these issues. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
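The nesting described here can be sketched in a few lines. The evaluate function below is a stand-in for an expensive black-box simulation, and the wrapper is a deliberately naive random search; both are illustrative assumptions, not any particular tool’s algorithm.

```python
import random

def evaluate(design):
    # black box: design variables in, objective value out; in practice this
    # would be a CAE simulation, not a closed-form expression
    x, y = design
    return (x - 2.0) ** 2 + (y + 1.0) ** 2

def random_search(evaluate, bounds, budget, seed=0):
    """Optimization loop wrapped around the black box: it only ever sees
    the inputs it chooses and the outputs that come back."""
    rng = random.Random(seed)
    best_design, best_value = None, float("inf")
    for _ in range(budget):
        design = [rng.uniform(lo, hi) for lo, hi in bounds]  # choose next inputs
        value = evaluate(design)                             # opaque call
        if value < best_value:
            best_design, best_value = design, value
    return best_design, best_value

best, val = random_search(evaluate, bounds=[(-5, 5), (-5, 5)], budget=500)
print(best, val)
```

Swapping in a smarter, adaptive algorithm changes only the wrapper; the black-box boundary between optimizer and function evaluation stays exactly the same.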
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">With increasing frequency, these new algorithms are hybrid and adaptive, which allows them to achieve a superior level of efficiency and robustness compared to classical methods. But the behavior of these smarter algorithms is also more difficult for us to understand. They use multiple strategies simultaneously, and they behave differently on each new problem, as they adapt to the varying conditions during each stage of the search. Operationally, these search algorithms are like a black box. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">For engineers who are uncomfortable using a black box optimization algorithm to solve challenging design problems, the classical algorithms can still occasionally be used successfully if the problem is first simplified, and then cast in the proper form. But this preparation often requires a great deal of expertise, experience and time, not to mention a lot of brain power. And, after all, using the human brain is just relying on a different kind of black box…isn’t it?</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
</span></span>
http://www.redcedaru.com/blog/black-box-optimization-12-14-2010#comments
Tue, 14 Dec 2010 19:55:06 +0000
Ron Averill
http://www.redcedaru.com/blog/black-box-optimization-12-14-2010

The Limits of Intuition
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/RwJy0_KlC1g/the-limits-of-intuition-12-08-2010
<span style="font-size: larger; font-family: Verdana;">The human brain is capable of making quick and effortless judgments about people, objects or ideas that it has not previously encountered. This sort of unreasoned insight is often called intuition. In his article, “<a target="_blank" href="http://www.scientificamerican.com/article.cfm?id=powers-and-perils-of-intuition">The Powers and Perils of Intuition</a>” (<em>Scientific American MIND</em>, June 2007, pp 24–31), David Myers describes two types of influence that shape our intuition.</span><br />
<br />
<span style="font-size: larger; font-family: Verdana;"><img align="right" alt="Fork in the road" src="http://www.redcedartech.com/images/forkintheroad.png" />The first is the development of mental shortcuts, or heuristics, which allow us to make snap judgments, often correctly. For example, our intuition tells us that blurry objects are farther away than clear ones. This is often a helpful assumption, except that on foggy mornings, a car in front of you may be much closer than intuition tells you it is.</span><br />
<br />
<span style="font-size: larger; font-family: Verdana;">The second influence on intuition is “learned associations” or life experiences that guide our actions. This explains why we may be suspicious of a stranger who resembles someone who once threatened us, even if we do not consciously make the association. Similarly, an experienced engineer can often quickly solve a problem that resembles one he worked on many years ago, even if the details of that project are mostly forgotten. <br />
<br />
</span> <span style="font-size: larger; font-family: Verdana;">We spend our entire lives developing intuitive expertise that is based on our experiences. So should we trust our intuition, or should we lean more on deliberate, rational thought?</span><br />
<br />
<span style="font-size: larger; font-family: Verdana;">When it comes to engineering design, the answer is both. We certainly want to take advantage of all the related experience we have gained during our careers. But just as our experiences shape our intuition, they can also limit our ability to be innovative.</span><br />
<br />
<span style="font-size: larger; font-family: Verdana;">Two major limitations of intuition are described by Eric Bonabeau in his article, “<a target="_blank" href="http://people.icoserver.com/users/eric/hbr_gut.pdf">Don’t Trust Your Gut</a>,” (<em>Harvard Business Review</em>, May 2003):</span><br />
<br />
<ol>
<li><span style="font-size: larger; font-family: Verdana;">Intuition is not always good at evaluating options and solutions. This may be because people tend to fixate on their first idea, or because few of us can comprehend the interaction effects between many different components of a situation.</span></li>
<br />
<li><span style="font-size: larger; font-family: Verdana;">Intuition is never good at exploring alternatives. It is not helpful when seeking out original solutions.</span></li>
</ol>
<span style="font-size: larger; font-family: Verdana;">Intuition is good for making decisions when the current situation resembles a previous one, but not when the circumstances are very different. And clearly intuition is a poor tool for generating innovative solutions to complex problems. As Eric Bonabeau notes, “Intuition is a means not of assessing complexity but of ignoring it.” <br />
<br />
<span style="font-size: larger; font-family: Verdana;">Fortunately, we now have advanced optimization search algorithms that can help us overcome the shortcomings of our intuition. By broadly searching complex design spaces without bias or fatigue, modern optimization tools provide a rational mathematical process for exploring new concepts and for solving challenging design problems.</span><br />
<br />
<span style="font-size: larger; font-family: Verdana;">When used together, intuition and mathematical optimization tools enjoy a positive symbiotic relationship, with each one enhancing the performance, and making up for the weaknesses, of the other.</span><br />
<br />
<span style="font-size: larger; font-family: Verdana;">The use of automated mathematical optimization is certainly consistent with Albert Einstein’s counsel that, “One should never impose one's views on a problem; one should rather study it, and in time a solution will reveal itself." <br />
</span></span><br />
<em>Posted by Ron Averill on Wed, 08 Dec 2010</em><br />
<br />
<strong>Race to the Bottom</strong>
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/tFiO_XIVzEI/race-to-the-bottom-11-29-2010
<span style="font-size: larger;"><span style="font-family: Verdana;">I have a great idea for a new reality adventure television series.<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
The basic premise is simple. Contestants are blindfolded and driven to a starting location on the side of a mountain. When the race begins, each contestant must find a path to the base of the mountain as quickly as possible. The blindfolds make it impossible for contestants to detect the contours and obstacles in the landscape.</span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><img hspace="6" vspace="6" align="right" alt="Race to the bottom " src="http://www.redcedartech.com/images/racetothebottom.png" /></span></span><span style="font-size: larger;"><span style="font-family: Verdana;">When contestants are working alone, the strategies they can use are limited. If the terrain is smooth, like a rolling pasture, then contestants might find a successful path by taking small steps in several different directions, and then choosing the direction that leads downward. When the contestant feels that path starting to flatten out or trend upward, she knows it’s time to stop and choose a new downward direction. Repeating this process many times should lead each contestant to the bottom of the nearest valley, which depends on the starting location. The first one to the bottom wins!</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
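In optimization terms, the contestant's small-step strategy is a simple local search. Here is a minimal Python sketch of the idea (the function name <em>local_descent</em> and all its parameters are illustrative, not any particular tool's API):

```python
import random

def local_descent(f, x, step=0.1, tries=20, max_iter=5000):
    """Blindfolded descent: probe small random steps and move only when
    a step leads downhill; when the path flattens, take shorter strides."""
    fx = f(x)
    for _ in range(max_iter):
        for _ in range(tries):
            trial = [xi + random.uniform(-step, step) for xi in x]
            ft = f(trial)
            if ft < fx:            # found a downward direction
                x, fx = trial, ft
                break
        else:
            step *= 0.5            # path flattened: shorten the stride
            if step < 1e-6:
                break              # bottom of the nearest valley
    return x, fx
```

On a smooth, single-valley landscape this routine reliably walks to the bottom; on a landscape with several valleys it stops in whichever valley the starting point happens to lie in, just as the post describes.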
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">To increase the difficulty, fences could be placed on the mountainside. Contestants could then be challenged to find a downward path that also stays within the fence line. If the fence line is really curvy, then the still-blindfolded contestants might get stuck in one of the curves with no way to move downward and no way to figure out whether they are at the lowest point in the landscape. The only hope of getting to an even lower point would be to start over from a different location. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">For contestants with strong ankles, the competition could be held on mountain ranges with very rocky terrains. On this uneven landscape, small steps wouldn’t necessarily provide useful information. So contestants would need to use a different strategy to select a downward path. We could imagine several possible strategies, with most of these involving semi-random steps of various sizes.</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Like most adventure reality series, this one could also be made into a team competition. Each blindfolded team member could be dropped off at a different location in the mountain range and provided with an altimeter and a GPS. Team members could communicate their elevations and locations to the team captain, who would then direct the movements of each player based solely on the collective information provided. Instead of finding the bottom of the nearest valley, the team’s goal would be to find the lowest point in the lowest valley within the entire mountain range, while staying within the fence line. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Countless strategies could be used to systematically explore a mountain range with a team of blindfolded explorers, but none of these is guaranteed to find the absolute lowest point within the limited time allotted for a television program. So generally the team that finds the lowest relative point is declared the winner. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
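The team version maps directly onto multi-start optimization: launch several independent local searches from different points and let a coordinator keep the best result. A small illustrative sketch (all names and the landscape function are made up for the example):

```python
def descend(f, x, step=0.01, iters=5000):
    """One blindfolded member: keep stepping whichever way goes downhill."""
    for _ in range(iters):
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            break                  # bottom of this member's valley
    return x

def team_search(f, start_points):
    """The captain drops members at different spots and keeps the lowest finish."""
    finishes = [descend(f, x) for x in start_points]
    return min(finishes, key=f)

# A landscape with two valleys; the left one (near x = -1.04) is deeper
landscape = lambda x: (x * x - 1.0) ** 2 + 0.3 * x
```

Calling <em>team_search(landscape, [-2.5, 0.5, 2.5])</em> returns a point near x ≈ -1.04, the deeper valley; a single member dropped on the right side would have settled for the shallower valley near x ≈ 0.96.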
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Numerous television executives have already rejected my “Race to the Bottom” reality series idea. Not melodramatic enough, they said. But I’ll keep trying until I’ve exhausted all the options. In the meantime, I’ll keep performing design optimization studies.</span></span><br />
<em>Posted by Ron Averill on Mon, 29 Nov 2010</em><br />
<br />
<strong>Six Stages of Optimization Maturity</strong>
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/9pv_-ZyapLw/six-stages-of-optimization-maturity-11-22-2010
<span style="font-family: Verdana;"><span style="font-size: larger;">There are common stages that most companies pass through when improving their product design process. Each new stage promotes greater efficiency and predictability of the design process, as well as higher performance and innovation of the products. It is possible to skip one or more stages to reap faster rewards, but the most important thing is to keep moving higher. Which stage represents the optimization maturity of your organization? </span></span><br />
<br />
<span style="font-family: Arial;"><strong><span style="font-size: larger;">Stage 1. </span></strong><em><strong><span style="font-size: larger;">Physical prototyping: build and test </span></strong></em></span><span style="font-size: larger;"><span style="font-family: Arial;"><br />
</span></span><em><span style="font-size: larger;"><span style="font-family: Arial;"><img hspace="6" vspace="6" align="right" alt="Optimization maturity model" src="http://www.redcedartech.com/images/OMMdiagram.png" /></span></span></em><span style="font-family: Verdana;"><span style="font-size: larger;">A trial-and-error approach to building and testing a myriad of hardware prototypes makes it too expensive to consider many design alternatives. </span></span><br />
<br />
<strong><span style="font-size: larger;"><span style="font-family: Arial;">Stage 2. </span></span></strong><em><strong><span style="font-size: larger;"><span style="font-family: Arial;">CAD-assisted design </span></span></strong></em><span style="font-size: larger;"><span style="font-family: Arial;"><br />
</span><span style="font-family: Verdana;">Computer-aided design (CAD) software tools improve design drawing communications, but do not reduce physical build and test iterations.</span></span><br />
<br />
<strong><span style="font-size: larger;"><span style="font-family: Arial;">Stage 3. </span></span></strong><em><strong><span style="font-size: larger;"><span style="font-family: Arial;">Virtual prototyping</span></span></strong></em><span style="font-size: larger;"><span style="font-family: Arial;"><br />
</span><span style="font-family: Verdana;">Math-based analysis models such as finite element analysis (FEA), computational fluid dynamics (CFD) and multibody dynamics (MBD) reduce design time and cost by replacing early-stage physical prototypes with intuition-led virtual iterations.</span></span><br />
<br />
<strong><span style="font-size: larger;"><span style="font-family: Arial;">Stage 4. </span></span></strong><em><strong><span style="font-size: larger;"><span style="font-family: Arial;">Feasible design, then local optimization</span></span></strong></em><span style="font-size: larger;"><span style="font-family: Arial;"><br />
</span></span><span style="font-family: Verdana;"><span style="font-size: larger;">Manual iterations based on virtual prototypes produce an acceptable design that is then optimized locally to achieve incremental improvements.</span><br />
</span><br />
<strong><span style="font-family: Arial;"><span style="font-size: larger;">Stage 5. </span></span></strong><em><strong><span style="font-family: Arial;"><span style="font-size: larger;">System optimization</span></span></strong></em><span style="font-family: Arial;"><span style="font-size: larger;"><br />
</span><span style="font-family: Verdana;"><span style="font-size: larger;">Throughout the design process, manual iterations are nearly eliminated in favor of broad exploration using automated math-based optimization of design concepts, yielding much better solutions faster and at lower cost. Optimization is viewed as an important step in the process. </span></span></span><br />
<span style="font-size: larger;"><span style="font-family: Arial;"><br />
<strong>Stage 6.</strong><em><strong> Continuous innovation</strong></em><br />
</span><span style="font-family: Verdana;">Automated math-based optimization is the foundation of the process, upon which other steps are built. Successive intuition-assisted optimization studies drive the discovery of new design directions and the optimization of existing ones to realize innovative and game-changing designs.</span></span><br />
<em>Posted by Bob Ryan on Mon, 22 Nov 2010</em><br />
<br />
<strong>Optimization Doesn't Work</strong>
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/QrtVezCZYwE/optimization-doesnt-work-11-15-2010
<span style="font-size: larger;"><span style="font-family: Verdana;">Yes, this <em>is </em>an odd title for a blog post that is meant to promote optimization. But this opinion is expressed more often than you might think, especially among engineers who have tried to apply classical optimization technology to their challenging design problems. <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><img align="right" alt="Square peg in a round hole" src="http://www.redcedartech.com/images/squarepeg.png" /></span></span><span style="font-size: larger;"><span style="font-family: Verdana;">So, what is causing smart people to form this opinion? I believe there are four types of experiences that cause people to lose faith in optimization:</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><strong>1. The optimized solution was not as good as expected.</strong> </span></span><span style="font-family: Verdana;"> <br />
<div class="rteindent1"><span style="font-size: larger;"> Those big improvements you hoped for were not realized. But did you include all the key design variables and allow them to vary broadly enough to really improve the design? Often we are taught to reduce the number and range of the design variables to accommodate the limitations of classical optimization algorithms. Modern search strategies don’t have these limitations; they can efficiently explore broader and more complex design spaces, with a higher chance of finding superior solutions.<br />
<br />
</span></div>
<div class="rteindent1"><span style="font-size: larger;"> Sometimes the optimized solution is not even as good as the baseline design you started with! This may happen, for example, when the search is performed using a poorly fit response surface. Response surface methods can work well in certain applications, but a lot of expertise and experimentation is often needed to use them properly. Direct search methods do not rely on response surfaces, so this type of error is eliminated. <br />
</span></div>
<div><span style="font-size: larger;"><strong>2. The optimization study could not be completed because some of the function evaluations failed. </strong> </span></div>
<div class="rteindent1"><span style="font-family: Verdana;"><span style="font-size: larger;"><br />
A failed function evaluation can be caused by an inability to generate a new math model for a particular design, non-convergence of the math model, or any other type of error during the analysis. Failures like these occur in a large percentage of real-world problems.</span></span><span style="font-size: larger;"><br />
<br />
</span></div>
<div class="rteindent1"><span style="font-size: larger;"> </span><span style="font-size: larger;"> It is true that many DOE-based studies, and some types of search algorithms, cannot overcome even a single failed function evaluation. But robust optimization algorithms are not adversely affected even when numerous evaluations fail, as long as there are enough successful evaluations to conduct a meaningful search.</span></div>
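The robust-handling idea can be sketched in a few lines of Python: wrap each evaluation so that a crash or a non-physical result merely flags that design as failed, instead of aborting the whole study (<em>safe_evaluate</em> and <em>best_feasible</em> are illustrative names, not a real product's API):

```python
import math

def safe_evaluate(simulate, design):
    """Run one function evaluation; if the analysis crashes or returns a
    non-finite value, mark the design as failed instead of killing the study."""
    try:
        value = simulate(design)
        if not math.isfinite(value):
            raise ValueError("non-finite response")
        return value, True
    except Exception:
        return math.inf, False      # worst possible score, flagged as failed

def best_feasible(simulate, candidates):
    """A robust search simply skips failed points when ranking candidates."""
    scored = [(safe_evaluate(simulate, d), d) for d in candidates]
    ok = [(value, d) for (value, succeeded), d in scored if succeeded]
    return min(ok) if ok else None   # (best value, best design), or None
```

As long as enough evaluations succeed to conduct a meaningful search, the failures simply drop out of the ranking.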
<div><strong><br />
</strong><span style="font-size: larger;"><strong>3. A different solution was obtained in every run, depending on the starting design.</strong> </span></div>
<div class="rteindent1"><span style="font-family: Verdana;"><span style="font-size: larger;"><br />
Like a ball rolling down a hill, local optimization algorithms converge to the nearest local minimum (the lowest point in the valley). If two separate local searches start with designs that are in different valleys, then the final solutions from these two studies will be different. Unfortunately, it is often impossible to know how many valleys there are in a design space, and which valley a design lies in, prior to exploring the entire space. </span></span><span style="font-size: larger;"><br />
</span></div>
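A ten-line experiment makes this easy to reproduce. The function below has two equally deep valleys; a simple downhill walk (a stand-in for any local algorithm, with made-up names) returns a different answer depending on where it starts:

```python
def roll_downhill(f, x, step=0.01, iters=20000):
    """Like a ball rolling down a hill: move whichever way decreases f,
    and stop at the bottom of the nearest valley (a local minimum)."""
    for _ in range(iters):
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            break
    return round(x, 2)

two_valleys = lambda x: (x * x - 1.0) ** 2   # minima at x = -1 and x = +1
```

Starting at x = -2 the search settles at -1.0; starting at x = +2 it settles at +1.0. Same problem, same algorithm, two different "optimal" designs.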
<div> </div>
<div class="rteindent1"><span style="font-size: larger;"> Another possible explanation is that the multiple optimization runs had not yet converged. The solutions were not the same, because a different path was taken in each run. An optimization algorithm that performs broad exploration, but does not fine-tune the solutions locally, may not make any noticeable progress over many evaluations. So, it may just appear to have converged when it has not. <br />
</span></div>
<div> </div>
<div class="rteindent1"><span style="font-size: larger;"> Using an algorithm that performs global exploration and local optimization at the same time dramatically increases both the solution efficiency and the chances of finding the global optimal solution in each run.</span></div>
<div><span style="font-size: larger;"><strong><br />
4. The total time required to perform a sufficient number of evaluations was too large.</strong></span></div>
<div class="rteindent1"><span style="font-size: larger;"><br />
When each evaluation takes hours or days to complete, the total amount of CPU time for an optimization study can be quite large. In this case, the two most important features of an optimization algorithm are its efficiency and its ability to perform parallel evaluations. <br />
</span></div>
<div> </div>
<div class="rteindent1"><span style="font-size: larger;"> An algorithm’s efficiency is measured in terms of the total number of evaluations required to converge or to locate a solution of a certain quality. By this measure, the efficiency of various algorithms can typically vary by factors of 5 or 10, and sometimes by as much as a factor of 100, on a given problem. Aside from its ability to consistently find a good solution, an algorithm’s efficiency is its most important characteristic. <br />
</span></div>
<div> </div>
<div class="rteindent1"><span style="font-size: larger;"> When multiple CPUs and analysis software licenses are available, the speed of an optimization study can be increased by a factor approaching the number of CPUs available. For example, if ten machines and licenses are used, an optimization study can run nearly ten times faster. Of course, the selected algorithm and the optimization software infrastructure must be capable of managing, and taking advantage of, multiple simultaneous evaluations. While some algorithms are still not capable of doing this, most modern optimization algorithms are developed to handle this requirement.</span></div>
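In Python, dispatching a batch of independent evaluations to a pool of workers takes only a few lines. This is a sketch; in a real study each "evaluation" would typically launch an external solver job (an FEA or CFD run) rather than call a Python function:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_batch(simulate, designs, n_workers=10):
    """Evaluate designs concurrently, one per worker; with ten workers
    (and ten solver licenses), ten designs run in roughly the time of one."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(simulate, designs))
```

Note that <em>pool.map</em> preserves order, so results line up with the submitted designs, which matters to the algorithm managing the search.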
<div><span style="font-size: larger;"><strong><br />
Conclusion: The search algorithm is the key</strong><br />
<br />
When well-defined optimization studies based on reasonable math models are not successful, the root cause is almost always the search algorithm. </span><span style="font-size: larger;"><span style="font-family: Verdana;">Selecting an algorithm that is inefficient, not robust, or inappropriate for your problem can lead to disappointing results. While this is no reason to broadly condemn optimization, the frustration caused by applying optimization technology unsuccessfully is certainly understandable. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Does optimization really work? Of course it does, provided that you use a suitable search algorithm on a well-defined problem. Industry-leading companies around the world demonstrate this every day on a myriad of challenging problems.<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
With today’s hybrid and adaptive search technology, unsuccessful optimization studies should soon be a distant memory.</span></span></div>
</span><br />
<em>Posted by Ron Averill on Mon, 15 Nov 2010</em><br />
<br />
<strong>Churn, Baby, Churn</strong>
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/Y7wrRlCfHkM/churn-baby-churn-11-08-2010
<strong><span style="font-size: medium;"><span style="font-family: Verdana;">C</span></span></strong><span style="font-size: larger;"><span style="font-family: Verdana;">onducting more design iterations can lead to higher-quality designs and increased innovation. So, when faced with a tight design schedule, the goal of many organizations is to iterate faster. But in most cases, performing faster manual </span></span><span style="font-size: larger;"><span style="font-family: Verdana;">design iterations doesn’t make the design process more productive. <br />
</span></span><br />
<span style="font-size: larger;"><span style="font-family: Verdana;"><img hspace="3" vspace="3" align="right" src="http://www.redcedartech.com/images/gears.png" alt="Gears turning" /></span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Consider the consequences of maximizing iteration throughput for a typical manual design process. Let’s assume a s</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">imple, but familiar, scenario in which each iteration involves the following steps: </span></span><br />
<br />
<ol type="1">
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Create a CAD model of the geometry, </span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Build a math model to predict performance, </span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Execute the math model, and </span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Interpret its results. </span></span></li>
</ol>
<span style="font-size: larger;"><span style="font-family: Verdana;"> We may have been taught that the best way to maximize throughput is to break down each task and maximize its efficiency. We assign the right people and the right tools for every activity and maximize the productive use of all resources by eliminating downtime. With this plan in mind, we optimize each of the steps in our process:<br />
<br />
</span></span>
<ol>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">First, we refine the CAD modeling process. We organize teams of experienced design engineers to rapidly generate models of the system geometry for each design variation. As soon as one model is complete, the next iteration begins.</span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Each set of CAD models is sent to our computer aided engineering (CAE) team, which builds math models (virtual prototypes) for each design variation. When possible, we automate the generation of meshed analysis models and parcel out model-building tasks to specialists to further increase efficiency.</span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">We configure simulation hardware and software to match the complexity of our typical math models and to achieve the desired solution time. While the math model of one design is running, the next ones are being built.</span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Periodically, the leadership team huddles to evaluate the results and determine how well the performance targets are being met. Then, new design directions are chosen, based largely on intuition, and the churning continues.</span></span></li>
</ol>
<span style="font-size: larger;"><span style="font-family: Verdana;"> In many organizations, each step in the manual iteration process has been refined to eliminate perceived inefficiencies. The CAD and CAE engineers perform their specialized and repetitive tasks with the efficiency of factory workers, and new design iterations move through the process quickly, like products moving down an assembly line. <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">But even though we may have increased the efficiency of each step in the process, and maximized the productivity of each team member, the result is just a sizable gain in iteration throughput, not better designs. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Why? Because the individual or local components of the iteration process have been optimized, but the overall global process has not been improved. In fact, too often, the resulting global process does not deliver better designs at all. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">When faced with a challenging problem, the tendency is to break it up into manageable parts that we can solve separately. This often leads to a locally optimized solution or process that ignores the intricate and essential interactions among the parts. A better strategy is to focus on the interactions, and embrace the complexity of the situation, to uncover even greater advantages.</span></span><span style="font-family: Verdana;"><br />
<br />
</span><span style="font-size: larger;"><span style="font-family: Verdana;">In our present example, the separate steps are locally optimized based on a false definition of productivity. Instead of waiting for results of the current iteration to shape the direction of future designs, intuition-assisted ad-hoc decisions are often made for the sake of keeping team members busy. The independent steps remain out of sync, and the process never gains the traction it needs to propel the design toward its target. <br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
Of even greater concern is our inability to interpret the growing volume of analysis data as the number of iterations increases. It should be clear that quickly performing many random design iterations is not a good idea, and this is certainly not the intention. Yet we use up most of our intuition during the first few iterations, so the later stages of our process often rely more heavily on trial-and-error than we would like to admit. <br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">So, more iterations won’t automatically lead to better designs. In order to benefit from these increased efforts, we must make better use of the knowledge created by previous iterations to intelligently navigate our search for an optimized solution. </span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">The point is not that higher iteration throughput is bad, but that maximized manual churning should not be our primary goal. Instead of more designs in less time, we should be aiming for better designs, faster. There is a profound difference between these two goals.</span></span><br />
<em>Posted by Ron Averill on Mon, 08 Nov 2010</em><br />
<br />
<strong>No Soup for You!</strong>
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/CtG0OzqFrlA/no-soup-for-you-10-18-2010
<strong><span style="font-size: medium;">M</span></strong><span style="font-size: larger;">ade popular by the <em>Seinfeld </em>television series, the Soup Man restaurant in New York City demands that customers know what kind of soup they want before arriving at the counter. Signs are prominently displayed, stating the rules in several languages: </span><span><span>
<div class="rtecenter"><span><strong><img align="right" alt="Soup man" src="http://www.redcedartech.com/images/soupmansmall.jpg" />FOR THE MOST EFFICIENT AND FASTEST SERVICE <br />
</strong></span></div>
<div class="rtecenter"><span><strong> THE LINE MUST BE KEPT MOVING</strong></span></div>
<div class="rtecenter"><span><strong>Pick the soup you want! </strong></span><br />
<span><strong> Have your money ready!</strong></span><br />
<span><strong>Move to the <u>extreme</u> left after ordering!<br />
<br />
</strong></span></div>
<span style="font-size: larger;">Failure to follow these rules may result in the harshest of penalties — no soup for you!</span><span><br />
<br />
<span style="font-size: larger;">Like the Soup Man restaurant, the optimization process is not well suited to those who don't have a clear set of goals in mind. </span><br />
<br />
<span style="font-size: larger;">If we were to post a sign that stated the rules for preparing to do an optimization study, it might look like this:</span><br />
</span></span><span>
<div class="rtecenter"><strong>FOR THE MOST EFFICIENT AND EFFECTIVE SEARCH <br />
THE GOALS OF THE STUDY MUST BE DEFINED.</strong></div>
<div class="rtecenter"><strong>Pick your objective(s)!</strong><br />
<strong>Define the necessary constraints!</strong><br />
<strong>Select your design variables!</strong><br />
<u><strong>Then</strong></u><strong> build the needed math models!</strong></div>
<div class="rtecenter">Key ingredients are defined below*</div>
</span> <span><br />
<br />
</span><span style="font-size: larger;">This initial step of defining goals, variables and math models seems straightforward, but it often requires serious thought and domain expertise. Lively debate among members of the design team is common during this process.</span><br />
<br />
<span style="font-size: larger;">Often the biggest hurdle to completing this planning step is our eagerness to start building the first set of math models to test our latest brainstorm. Why bother arguing about specific goals and properly defining the design space when we already have a great solution in mind that just might work? Then, after several failed manual iterations, we usually return to the planning table, short on new ideas and even shorter on time. No soup for you!</span><br />
<br />
<span style="font-size: larger;">Ideally, we shouldn’t begin building math models until we understand our goals and the parameters we will vary to meet those goals. Only then can we build suitable parametric models that fully capture the design intent and predict the system responses that correspond to our objectives and constraints. </span><br />
<span style="font-size: larger;"><br />
Another common hurdle is deciding on the real goals. It is not usually possible to minimize or maximize everything in a system without violating the laws of physics. Some goals are bound to conflict with one another. You will often need to compromise by selecting certain goals as objectives while defining others as constraints. For example, instead of minimizing cost and maximizing performance, an optimization study might aim to find the lowest cost solution (the objective) that meets a minimum allowable level of performance (the constraint).</span><br />
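That split between objective and constraint can be made concrete with a toy example. The candidate designs and every number below are invented for illustration; cost is the objective and performance is a constraint floor:

```python
# Hypothetical designs: name -> (cost, predicted performance)
candidates = {
    "A": (120, 0.92),
    "B": (95, 0.88),
    "C": (80, 0.79),   # cheapest overall, but misses the performance floor
    "D": (105, 0.95),
}

PERF_MIN = 0.85        # the constraint: performance must be at least this

# The objective: lowest cost among designs that satisfy the constraint
feasible = {name: v for name, v in candidates.items() if v[1] >= PERF_MIN}
best = min(feasible, key=lambda name: feasible[name][0])   # -> "B"
```

Note that design C is the cheapest of all, yet it is the constraint, not the objective, that rules it out. That is exactly the compromise the paragraph above describes.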
<br />
<span style="font-size: larger;">When you can easily define the required level of a system property or response, then it is natural to categorize this condition as a constraint. If you cannot easily determine the desired maximum or minimum value for multiple responses, you now have the option to perform a multi-objective optimization study that predicts the trade-offs among two or more objectives. </span><br />
<br />
<span style="font-size: larger;">Instead of converging toward a single design based on a single objective, a multi-objective study yields a set of optimal designs that lie on a curve (for two objectives) or a surface (for three or more objectives), called a Pareto front. You can then select the design on the Pareto front that best meets your overall goals, with confidence that the most appropriate trade-offs have been considered for the current situation. </span><br />
<br />
<span style="font-size: larger;">The additional information provided by a multi-objective optimization study can be extremely helpful in understanding your system and in making design decisions. But a multi-objective study requires additional computational resources, so it should only be used when the trade-offs are desired or when you really cannot decide ahead of time on a single objective.</span> <br />
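For two minimization objectives, the Pareto front is simply the set of non-dominated designs. A brute-force sketch makes the definition concrete (fine for small design sets; production tools use faster non-dominated sorting, and the design data here is invented):

```python
def pareto_front(points):
    """Keep the points not dominated by any other: a point is dominated when
    another is at least as good in both objectives and strictly better in one."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return sorted(front)

# (cost, mass) for six hypothetical designs: lighter tends to cost more
designs = [(3, 9), (4, 7), (5, 5), (6, 6), (7, 4), (8, 8)]
```

Here <em>pareto_front(designs)</em> keeps (3, 9), (4, 7), (5, 5) and (7, 4), while (6, 6) and (8, 8) are dominated by (5, 5) and drop out. Each surviving point is a defensible trade-off; choosing among them is the engineering judgment discussed above.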
<br />
<span style="font-size: larger;">Clearly this approach should never be used when visiting the Soup Man. </span><hr />
<div><br />
* Key ingredients are defined here: <br />
<ul>
<li><strong>Objectives </strong>are the properties or responses of your system that need to be minimized or maximized. For example, you may want to minimize cost or maximize performance.</li>
<li><strong>Constraints </strong>are the properties or responses of your system that must comply with specified limits. For example, the durability of the system may need to be at least as good as last year’s model. An even greater durability is allowed, but not required.</li>
<li><strong>Variables </strong>are the parameters in your system that can be changed in order to satisfy the objective(s) and constraints. For example, you may allow the material and gage thickness to change in four different parts, resulting in eight system design variables.</li>
<li><strong>Math Models</strong> are the predictive analysis models that are used to measure how good each design is. For example, for each set of design variable values you might use a finite element model to predict the thermal stresses of a structure, and a spreadsheet model to predict its cost. Developing the math models is not part of this step, but agreeing upon what types of models are needed is.</li>
</ul>
</div>
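The four ingredients above can be recorded as a simple problem definition before any solving begins. The sketch below is a hypothetical structure (all names are illustrative, not any particular tool's API), mirroring the examples in the list: two objectives, one durability constraint, and material plus gage thickness in four parts giving eight design variables.

```python
# Hypothetical sketch of recording the four key ingredients of an
# optimization study; field names and entries are illustrative only.
from dataclasses import dataclass

@dataclass
class OptimizationStudy:
    objectives: list    # responses to minimize or maximize
    constraints: list   # responses that must satisfy limits
    variables: list     # parameters the search is allowed to change
    math_models: list   # predictive models that score each design

study = OptimizationStudy(
    objectives=[("minimize", "cost"), ("maximize", "performance")],
    constraints=[("durability", ">=", "last_year_durability")],
    variables=["material_part%d" % i for i in range(1, 5)]
             + ["gage_thickness_part%d" % i for i in range(1, 5)],
    math_models=["finite_element_thermal_stress", "cost_spreadsheet"],
)
print(len(study.variables))  # eight design variables, as in the example
```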
</span><br />
Posted Mon, 18 Oct 2010 14:47:31 +0000 by Ron Averill<br />
http://www.redcedaru.com/blog/no-soup-for-you-10-18-2010<hr />
Parsimonious Optimization
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/9kZJ2l3h_KY/parsimonious-optimization-10-11-2010
<div><span style="font-size: medium;"><strong>T</strong></span><span style="font-family: Verdana;"><span style="font-size: larger;">he word <em>parsimonious </em>means thrifty, economical, frugal, and sometimes even stingy. It is an unusual word to use when describing optimization, but it is meaningful here in two ways. <br />
</span><span style="font-size: larger;"><br />
First, the purpose of optimization is to minimize a function. The intent is not a meager reduction, but absolute minimization. This seems pretty stingy, but in a useful way. <br />
</span><br />
<img height="256" width="208" align="right" src="http://www.redcedartech.com/images/timeismoney.png" alt="Time is money" /><span style="font-size: larger;">Second, and of greater interest here, the process of optimization must be efficient, economical, and thrifty. That is, finding an optimized solution to a problem should take as little of your time and resources as possible. Unfortunately, in many real-world applications, time is the largest barrier to realizing the true value of automated optimization. </span><br />
<br />
<span style="font-size: larger;">In theory, you should be able to find an optimized solution whenever you have a good system analysis model and an appropriate search algorithm. But if the model requires hours or even days of CPU time for each design evaluation, and the algorithm requires a large number of evaluations, then the total time required to reach that optimized solution may turn out to be completely impractical. </span><br />
<br />
<span style="font-size: larger;">Let’s consider the cost factors involved. The total CPU time needed to find an optimized solution is defined this way:</span><br />
<img src="http://www.redcedartech.com/images/equationparsimonious.png" alt="" /><br />
<br />
<span style="font-size: larger;">where NSOL is the number of optimization solutions performed, and the expression inside the brackets represents the total search time per iteration of the optimization solution. <br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;">Based on this formula, there are only four ways you can reduce the solution time for an optimization study. You can<br />
<br />
</span></span></div>
<ol>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Perform multiple evaluations at the same time (in parallel),</span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Perform shorter evaluations,</span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Perform fewer evaluations, or</span></span></li>
<li><span style="font-size: larger;"><span style="font-family: Verdana;">Perform fewer solution iterations.</span></span></li>
</ol>
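The four levers map directly onto the terms of the cost formula. The exact bracketed expression appears only as an image in the post, so the form below is an assumption consistent with the four levers listed: total time scales with the number of solution iterations, the evaluations per solution, and the time per evaluation, divided by how many evaluations run concurrently.

```python
# Hedged sketch of the cost model behind the four levers above.
# The exact bracketed term is shown only as an image in the post;
# this form is an assumption consistent with the four listed levers.

def total_cpu_time(n_sol, n_eval, t_eval, n_parallel=1):
    """Total search time for an optimization study.

    n_sol      -- number of optimization solutions (lever 4)
    n_eval     -- design evaluations per solution (lever 3)
    t_eval     -- time per design evaluation (lever 2)
    n_parallel -- evaluations run concurrently (lever 1)
    """
    return n_sol * (n_eval * t_eval / n_parallel)

# 3 re-solves x 200 evaluations x 2 CPU-hours each, run serially:
print(total_cpu_time(3, 200, 2.0))      # 1200.0 hours
# The same study with 8 evaluations running in parallel:
print(total_cpu_time(3, 200, 2.0, 8))   # 150.0 hours
```

Because the terms multiply, halving any one of them halves the total; an algorithm needing 10x fewer evaluations saves as much as a 10x faster model.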
<span style="font-size: larger;"><span style="font-family: Verdana;">It is easy to parallelize the search or the evaluations if you have extra CPUs and analysis software licenses. We recommend this when possible. But adding extra resources is not considered parsimonious (which is the topic of this post!), so we’ll discuss parallel evaluations another time.</span></span><span style="font-size: larger; font-family: Verdana;"><br />
<br />
</span><span style="font-size: larger; font-family: Verdana;">Next, you could use simplified models to shorten the time per evaluation, but this can be risky. An inaccurate model can mislead the optimization search and give you disappointing results. So, we will assume for now that it is not possible to reduce model complexity or evaluation time.<br />
<br />
</span><span style="font-size: larger; font-family: Verdana;">The two remaining parameters – the number of evaluations and the number of solution iterations – will have the greatest impact on your solution efficiency. Yet you have the least control over these, because they are determined mainly by the available optimization technology. </span><span style="font-size: larger; font-family: Verdana;"><br />
<br />
</span><span style="font-size: larger; font-family: Verdana;">The number of evaluations needed to find an optimal design, or a design of a specified performance level, is entirely determined by the efficiency of the search algorithm. To solve a given problem, the number of evaluations required by different algorithms can vary by a factor of 2, 10 and even 100. So the time to solution can differ by days, weeks or months of CPU time. This can greatly impact your ability to meet deadlines.</span><span style="font-size: larger; font-family: Verdana;"><br />
<br />
</span><span style="font-size: larger; font-family: Verdana;">Clearly, it is important to choose the most efficient algorithm for a problem, but doing so is one of the biggest sources of inefficiency in the entire process. It is very difficult, and sometimes impossible, to know what type of algorithm is best for a particular problem. Moreover, most algorithms have a set of tuning parameters that you must define to control the performance of the algorithm. Sure, you could use the default settings, but these are seldom optimal. </span><span style="font-size: larger; font-family: Verdana;"><br />
</span><span style="font-size: larger; font-family: Verdana;"><br />
So, it often becomes necessary to solve the problem multiple times using a variety of algorithms, tuning parameters, and starting conditions. The number of solution iterations might range from one to five or more, depending on the complexity of the problem. For problems with expensive models, this approach is intractable. </span><br />
<span style="font-size: larger; font-family: Verdana;"><br />
The irony is that optimization is meant to remove inefficiencies from engineering designs and manufacturing processes, and yet many of the existing optimization tools promote a process that is too inefficient to be used for some of the most important applications.<br />
<br />
</span><span style="font-size: larger; font-family: Verdana;">In order to realize the true potential of automated optimization, engineers and scientists must embrace a new generation of optimization technology that is inherently more efficient…and perhaps even parsimonious.</span><br />
Posted Mon, 11 Oct 2010 16:16:16 +0000 by Ron Averill<br />
http://www.redcedaru.com/blog/parsimonious-optimization-10-11-2010<hr />
Upside
http://feedproxy.google.com/~r/OptimizationViewpoints/~3/LdHg1b8NTz8/upside-10-04-2010
<span style="font-size: medium;"><span style="font-family: Verdana;"><strong>H</strong></span></span><span style="font-size: larger;"><span style="font-family: Verdana;">igh school tryouts. College recruiting. Pro draft day. At every level of athletics, coaches face a tough pre-season decision. Select player A, a better-than-average athlete who has worked diligently for many years to maximize his potential under the tutelage of top coaches. Or choose player B, whose present skill level is not quite as high but who has greater raw athletic ability, is coachable and has real potential to be a superstar – a trait that coaches call “upside.” <br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
<img align="right" alt="baseball player" src="http://www.redcedartech.com/images/baseball.png" />A similar situation occurs in product development when selecting between two competing design concepts. A more optimized version of concept A may appear better than a version of concept B that has not been optimized. But concept B may have a lot more potential for improvement, a bigger upside.<br />
</span></span><span style="font-size: larger;"><span style="font-family: Verdana;"><br />
The performance of a single example of a concept is not usually a good measure of the concept itself.<br />
</span></span><span style="font-size: larger;"><br />
How optimized is concept A? What level of performance could be attained by concept B? We seldom know the answers to these questions prior to performing an optimization study. <br />
</span><span style="font-size: larger;"><br />
In general, it is only possible to accurately evaluate two different design concepts when both have been optimized. Not necessarily fully optimized, but at least enough to project the performance of a near-optimal version of each.</span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">For example, material selection studies are often performed while holding the geometry fixed. The geometry was most likely developed for a particular material based on its density, strength and stiffness properties. If a new material with different properties is substituted within that same geometry, the performance of the design should not be expected to be as good as it might be. A modified geometry that takes full advantage of the new material’s attributes is needed to demonstrate the full potential of the material in that application. </span></span><br />
<br />
<span style="font-size: larger;"><span style="font-family: Verdana;">Materials companies understand very well that they need optimized designs to show their products in the best light. The same is true for most other ideas, concepts and athletes.</span></span><br />
Posted Mon, 04 Oct 2010 15:44:26 +0000 by Ron Averill<br />
http://www.redcedaru.com/blog/upside-10-04-2010