<?xml version='1.0' encoding='UTF-8'?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:blogger='http://schemas.google.com/blogger/2008' xmlns:georss='http://www.georss.org/georss' xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr='http://purl.org/syndication/thread/1.0'><id>tag:blogger.com,1999:blog-319304714858376628</id><updated>2024-11-01T08:44:06.136+01:00</updated><category term="RMMExample"/><category term="compressedsensing"/><category term="UncertaintyQuantification"/><category term="eph"/><category term="grouptesting"/><category term="site instructions"/><category term="VV"/><category term="dimensionalityreduction"/><category term="space"/><category term="what is RMM?"/><title type='text'>The Robust Mathematical Modeling Blog</title><subtitle type='html'>...When modeling Reality is not an option.</subtitle><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/posts/default'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default?redirect=false'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/'/><link rel='hub' href='http://pubsubhubbub.appspot.com/'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><generator version='7.00' 
uri='http://www.blogger.com'>Blogger</generator><openSearch:totalResults>23</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-8489429001728073167</id><published>2012-04-08T21:12:00.001+02:00</published><updated>2012-04-08T21:12:41.417+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="compressedsensing"/><category scheme="http://www.blogger.com/atom/ns#" term="UncertaintyQuantification"/><category scheme="http://www.blogger.com/atom/ns#" term="VV"/><title type='text'>Mathematical Foundations of V&amp;V Pre-publication NAS Report</title><content type='html'>&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj1BJibSNXZNNYuF-VnT-hnDE60rmck1WlyuFqkC8zKS_t84Zt434v_l5cYWdz6ce_oXU0gBm_XiqpgRyhvyJmxBUbUDyb9bS2g9qGEGUCI6jLQ2BCD9gry0aENMd6svNXxpl1VLup4WA/s1600/examplePCCS-heat-transfer.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;291&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj1BJibSNXZNNYuF-VnT-hnDE60rmck1WlyuFqkC8zKS_t84Zt434v_l5cYWdz6ce_oXU0gBm_XiqpgRyhvyJmxBUbUDyb9bS2g9qGEGUCI6jLQ2BCD9gry0aENMd6svNXxpl1VLup4WA/s400/examplePCCS-heat-transfer.JPG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
From the &lt;a href=&quot;http://www.variousconsequences.com/2012/03/mathematical-foundations-of-v-pre-pub.html&quot;&gt;Various Consequences Blog&lt;/a&gt;, I found that the National Academies Press is about to release a report on the &lt;a href=&quot;http://www.variousconsequences.com/2011/01/mathematical-science-foundations-of.html&quot;&gt;Mathematical Foundations of Validation, Verification and Uncertainty Quantification&lt;/a&gt;. The &lt;a href=&quot;https://download.nap.edu/catalog.php?record_id=13395&quot;&gt;pre-publication version&lt;/a&gt; is available on the National Academies Press site.&lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
While reading the pre-publication version, I noted that the study did not seem to reference the (admittedly very recent) connection between &lt;a href=&quot;http://nuit-blanche.blogspot.com/2011/11/compressive-sensing-and-uncertainty.html&quot;&gt;Compressive Sensing and Uncertainty Quantification&lt;/a&gt; pointed out by &lt;a href=&quot;http://www.colorado.edu/aerospace/doostan_alireza.html&quot;&gt;Alireza Doostan&lt;/a&gt;. If you recall, his most recent presentations on the subject include:&lt;/div&gt;
&lt;ul&gt;
&lt;li style=&quot;text-align: justify;&quot;&gt;&lt;a href=&quot;http://www.csm.ornl.gov/workshops/applmath11/documents/talks/Doostan_plenary.pdf&quot;&gt;Stochastic PDEs, Sparse Approximations, and Compressive Sampling&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.samsi.info/sites/default/files/Doostan_november2011.pdf&quot;&gt;A Compressive Sampling Approach to Sparse Polynomial Chaos Expansions&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;a href=&quot;http://www.colorado.edu/aerospace/doostan_alireza.html&quot;&gt;Alireza&lt;/a&gt; applies this technique to speed up the identification of the largest coefficients of a polynomial chaos expansion.&lt;/div&gt;
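To make the idea concrete, here is a minimal sketch of what "compressive sampling for chaos coefficients" means: a model output that is a sparse combination of orthonormal basis polynomials is recovered from far fewer model runs than basis terms, via an l1-penalized least-squares (LASSO) solve. Everything here is illustrative, not Alireza Doostan's actual setup: a 1-D Legendre basis, the basis size, sample count, sparsity pattern, and the plain proximal-gradient (ISTA) solver are all assumptions made for the sketch.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(1)

P, m = 30, 15                        # basis size, number of model runs (m << P)
xi = rng.uniform(-1.0, 1.0, size=m)  # random samples of the uncertain input

# Rows = model runs, columns = orthonormal Legendre polynomials at those samples
A = np.stack([legendre.legval(xi, np.eye(P)[k]) * np.sqrt(2 * k + 1)
              for k in range(P)], axis=1)

# A sparse "true" chaos expansion: only 3 active modes out of 30 (hypothetical)
c_true = np.zeros(P)
c_true[[0, 3, 7]] = [1.0, 0.5, -0.25]
u = A @ c_true                       # simulated model outputs

# ISTA (proximal gradient) for min_c 0.5*||A c - u||^2 + lam*||c||_1
lam = 1e-3
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
c = np.zeros(P)
for _ in range(20000):
    z = c - A.T @ (A @ c - u) / L                          # gradient step
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("dominant recovered modes:", np.flatnonzero(np.abs(c) > 0.05))
```

The point of the sparsity assumption is exactly what makes this feasible: with only a handful of active modes, 15 model evaluations can pin down a 30-term expansion that ordinary least squares could not.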
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgVNBpMlsK4MhhLtUt2b6qZ7nqoc7OClqsmiPOa8vWQKPrkp7Yv1lklNISkNQUiH4bXusEBkcRH4-otsgbcFBVVHfhyphenhyphenlLaNQP6SthZLSHJWAYoUngm3ZFg5CtDJeJ4dOMmfPVhKWI5Ilg/s1600/chaos-coefficients-temperature.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;278&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgVNBpMlsK4MhhLtUt2b6qZ7nqoc7OClqsmiPOa8vWQKPrkp7Yv1lklNISkNQUiH4bXusEBkcRH4-otsgbcFBVVHfhyphenhyphenlLaNQP6SthZLSHJWAYoUngm3ZFg5CtDJeJ4dOMmfPVhKWI5Ilg/s400/chaos-coefficients-temperature.JPG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
Also from the&amp;nbsp;&lt;a href=&quot;http://www.variousconsequences.com/2012/03/mathematical-foundations-of-v-pre-pub.html&quot;&gt;Various Consequences Blog&lt;/a&gt;&amp;nbsp;there was this&amp;nbsp;&lt;a href=&quot;http://vv.nd.edu/&quot;&gt;V&amp;amp;V Workshop at Notre Dame&lt;/a&gt; late last year. Abstracts are &lt;a href=&quot;http://www.nd.edu/~powers/vv.abstracts/program.pdf&quot;&gt;here&lt;/a&gt;.&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/8489429001728073167/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/8489429001728073167' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/8489429001728073167'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/8489429001728073167'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2012/04/mathematical-foundations-of-v-pre.html' title='Mathematical Foundations of V&amp;V Pre-publication NAS Report'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj1BJibSNXZNNYuF-VnT-hnDE60rmck1WlyuFqkC8zKS_t84Zt434v_l5cYWdz6ce_oXU0gBm_XiqpgRyhvyJmxBUbUDyb9bS2g9qGEGUCI6jLQ2BCD9gry0aENMd6svNXxpl1VLup4WA/s72-c/examplePCCS-heat-transfer.JPG" height="72" 
width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-5230971611389840927</id><published>2012-02-06T15:37:00.002+01:00</published><updated>2012-02-06T15:37:34.703+01:00</updated><title type='text'>Statistically Discernable ?</title><content type='html'>&lt;div style=&quot;text-align: justify;&quot;&gt;
Andrew Gelman started a good discussion on his blog in&amp;nbsp;&lt;a href=&quot;http://andrewgelman.com/2012/02/the-inevitable-problems-with-statistical-significance-and-95-intervals/&quot;&gt;The inevitable problems with statistical significance and 95% intervals&lt;/a&gt;. The comments are, as usual, right on the money.&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/5230971611389840927/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/5230971611389840927' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/5230971611389840927'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/5230971611389840927'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2012/02/statistically-discernable.html' title='Statistically Discernable ?'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-7936150973950656045</id><published>2011-12-28T22:09:00.000+01:00</published><updated>2011-12-28T22:11:29.286+01:00</updated><title type='text'>Why Economics Needs Data Mining</title><content type='html'>&lt;center&gt;&lt;iframe src=&quot;http://ineteconomics.org/ivideo?v=oYFjDt4-hFw&amp;size=medium&quot; width=&quot;400&quot; height=&quot;225&quot; border=&quot;0&quot;&gt;&lt;/iframe&gt;&lt;/center&gt;

From Mathbabe&#39;s &lt;a href=&quot;http://mathbabe.org/2011/12/28/economist-versus-quant/&quot;&gt;Economist versus quant&lt;/a&gt; (Video is featured in &lt;a href=&quot;http://ineteconomics.org/video/30-ways-be-economist/cosma-shalizi-why-economics-needs-data-mining&quot;&gt;INET&#39;s website&lt;/a&gt;)</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/7936150973950656045/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/7936150973950656045' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/7936150973950656045'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/7936150973950656045'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2011/12/why-economics-needs-data-mining.html' title='Why Economics Needs Data Mining'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-6911182201836025744</id><published>2011-11-27T00:34:00.000+01:00</published><updated>2011-11-27T00:34:16.378+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="eph"/><title type='text'>How biased are maximum entropy models?</title><content type='html'>&lt;div style=&quot;text-align: justify;&quot;&gt;From &lt;a href=&quot;http://yaroslavvb.blogspot.com/2011/11/interesting-papers-coming-up-at-nips11.html&quot;&gt;Yaroslav&#39;s blog&lt;/a&gt;, this is of interest to the&amp;nbsp;&lt;a 
href=&quot;http://scmsa.pagesperso-orange.fr/RMM/RMM_EPH.htm&quot;&gt;Experimental Probabilistic Hypersurface&lt;/a&gt;&amp;nbsp;approach, which computes the &lt;a href=&quot;http://www.mtm.ufsc.br/~taneja/book/node14.html&quot;&gt;probability distribution that maximizes entropy&lt;/a&gt;&amp;nbsp;for difficult-to-compute models (read: too long to run on a computer). Here is the paper:&amp;nbsp;&lt;a href=&quot;http://www.gatsby.ucl.ac.uk/~pel/papers/MackeEtalNIPS2011MaxEnt.pdf&quot; style=&quot;text-align: -webkit-auto;&quot;&gt;How biased are maximum entropy models?&lt;/a&gt; by&amp;nbsp;&lt;span class=&quot;Apple-style-span&quot; style=&quot;text-align: -webkit-auto;&quot;&gt;Jakob H. Macke,&amp;nbsp;&lt;/span&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;text-align: -webkit-auto;&quot;&gt;Iain Murray,&amp;nbsp;&lt;/span&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;text-align: -webkit-auto;&quot;&gt;Peter E. Latham. The abstract reads:&lt;/span&gt;&lt;/div&gt;&lt;blockquote class=&quot;tr_bq&quot; style=&quot;text-align: justify;&quot;&gt;Maximum entropy models have become popular statistical models in neuroscience&amp;nbsp;and other areas in biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to&amp;nbsp;small data sets can be subject to sampling bias; i.e. the true entropy of the data can&amp;nbsp;be severely underestimated. Here we study the sampling properties of estimates&amp;nbsp;of the entropy obtained from maximum entropy models. We show that if the data&amp;nbsp;is generated by a distribution that lies in the model class, the bias is equal to the&amp;nbsp;number of parameters divided by twice the number of observations. However, in&amp;nbsp;practice, the true distribution is usually outside the model class, and we show here&amp;nbsp;that this misspecification can lead to much larger bias. 
We provide a perturbative approximation of the maximally expected bias when the true model is out of&amp;nbsp;model class, and we illustrate our results using numerical simulations of an Ising&amp;nbsp;model; i.e. the second-order maximum entropy distribution on binary data.&lt;/blockquote&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/6911182201836025744/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/6911182201836025744' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/6911182201836025744'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/6911182201836025744'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2011/11/how-biased-are-maximum-entropy-models.html' title='How biased are maximum entropy models?'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-6724131775588092005</id><published>2011-11-25T11:37:00.001+01:00</published><updated>2011-11-25T17:22:53.249+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="UncertaintyQuantification"/><title type='text'>Uncertainty Quantification at the Statistical and Applied Mathematical Sciences Institute</title><content type='html'>&lt;div style=&quot;text-align: justify;&quot;&gt;Here are different&lt;a href=&quot;http://www.samsi.info/communications/videos&quot;&gt;&amp;nbsp;videos available from the&amp;nbsp;&lt;span 
class=&quot;Apple-style-span&quot; style=&quot;background-color: white; color: #333333; font-family: Arial, sans-serif; font-size: 13px; line-height: 18px;&quot;&gt;Statistical and Applied Mathematical Sciences Institute&lt;/span&gt;&amp;nbsp;at Duke University&lt;/a&gt;&amp;nbsp;that feature several workshops on &lt;a href=&quot;http://www.samsi.info/programs/2011-12-program-uncertainty-quantification&quot;&gt;Uncertainty Quantification&lt;/a&gt;&amp;nbsp;(see their&amp;nbsp;&lt;a href=&quot;http://www.samsi.info/programs/2011-12-program-uncertainty-quantification&quot;&gt;2011-12 Program on Uncertainty Quantification&lt;/a&gt;), enjoy:&lt;/div&gt;(there is an entry on Nuit Blanche pointing to the connection with &lt;a href=&quot;http://nuit-blanche.blogspot.com/2011/11/compressive-sensing-and-uncertainty.html&quot;&gt;compressive sensing and uncertainty quantification&lt;/a&gt;)&lt;br /&gt;
&lt;div&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;br /&gt;
&lt;div class=&quot;views-row views-row-1 views-row-odd views-row-first&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/adrian-sandu&quot; style=&quot;color: #003366;&quot;&gt;Adrian Sandu&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;October 14, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Dr. 
Adrian Sandu - tutorial lecture on Data Assimilation for Uncertainty Quantification&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-row views-row-2 views-row-even&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/habib-najm&quot; style=&quot;color: #003366;&quot;&gt;Habib Najm&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;October 14, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Habib Najm&#39;s tutorial lecture on Foundations for Uncertainty Quantification&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-row views-row-3 
views-row-odd&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/peter-kitanidis&quot; style=&quot;color: #003366;&quot;&gt;Peter Kitanidis&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;October 14, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Peter Kitanidis Inverse Problem and Calibration Uncertainty Quantification tutorial lecture&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-row views-row-4 views-row-even&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; 
font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/susie-bayarri&quot; style=&quot;color: #003366;&quot;&gt;Susie Bayarri&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;October 14, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Susie Bayarri&#39;s tutorial lecture on Representation and Propagation of Uncertainty&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-row views-row-5 views-row-odd&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span 
class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/dan-cooley-statistics-extremes&quot; style=&quot;color: #003366;&quot;&gt;Dan Cooley: Statistics of Extremes&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;September 9, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Dan Cooley: Statistics of Extremes (Tutorial talk)&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-row views-row-6 views-row-even&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span 
class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/dr-adrian-sandu-variational-data-assimilation-part-123-and-4&quot; style=&quot;color: #003366;&quot;&gt;Dr. Adrian Sandu: Variational Data Assimilation - Part 1,2,3 and 4&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;June 20, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Uncertainty Quantification Summer School presentation by Dr. 
Adrian Sandu: Variational Data Assimilation&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-row views-row-7 views-row-odd&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/dr-dan-cooley-statistical-analysis-rare-events-parts-123-and-4&quot; style=&quot;color: #003366;&quot;&gt;Dr. Dan Cooley: Statistical Analysis of Rare Events - Parts 1,2,3 and 4&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;June 20, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Uncertainty Quantification Summer School presentation by Dr. 
Dan Cooley: Statistical Analysis of Rare Events&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-row views-row-8 views-row-even&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/dr-dongbin-xiu-sensitivity-analysis-and-polynomial-chaos-differential-equations-parts&quot; style=&quot;color: #003366;&quot;&gt;Dr. Dongbin Xiu: Sensitivity Analysis and Polynomial Chaos for Differential Equations - Parts 1,2,3 and 4&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;June 20, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Uncertainty Quantification Summer School presentation by Dr. 
Dongbin Xiu: Sensitivity Analysis and Polynomial Chaos for Differential Equations&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-row views-row-9 views-row-odd&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/dr-doug-nychka-data-assimilation-and-applications-climate-modeling-parts-12-and-3&quot; style=&quot;color: #003366;&quot;&gt;Dr. Doug Nychka: Data Assimilation and Applications in Climate Modeling - Parts 1,2 and 3&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;June 20, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Uncertainty Quantification Summer School 
presentation by Dr. Doug Nychka: Data Assimilation and Applications in Climate Modeling&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-row views-row-10 views-row-even views-row-last&quot; style=&quot;background-color: white; border-top-color: rgb(204, 226, 234); border-top-style: solid; border-top-width: 1px; clear: both; color: #333333; font-family: Arial, Helvetica, sans-serif; font-size: 13px; line-height: 18px; margin-top: 1em; padding-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;views-field-title&quot; style=&quot;font-size: 14px; font-weight: bold; font: normal normal bold 13px/18px Arial, sans-serif;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;a href=&quot;http://www.samsi.info/communications/nychka-public-lecture&quot; style=&quot;color: #003366;&quot;&gt;Nychka Public Lecture&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-post-date-value&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span class=&quot;field-content&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: url(http://www.samsi.info/sites/all/themes/samsi/images/global/calendar-icon.png); background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; font-size: 11px; font-weight: bold; padding-left: 14px;&quot;&gt;March 2, 2011&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;&lt;div class=&quot;views-field-field-summary-value&quot;&gt;&lt;div class=&quot;field-content&quot;&gt;&lt;div style=&quot;font: normal normal normal 13px/18px Arial, sans-serif; margin-bottom: 1em; margin-top: 1em;&quot;&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Dr. 
Douglas Nychka, Director of the Institute of Mathematics Applied to Geosciences for the National Center for Atmospheric Research (NCAR), spoke to an audience on February 15 about climate change.&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;/div&gt;&lt;h3 style=&quot;background-color: white; background-image: none; color: #0d5512; font-size: 1.3em; font: normal normal bold 15px/19px &#39;Trebuchet MS&#39;, Arial, sans-serif; line-height: 1.3em; margin-bottom: 15px; margin-left: 0px; margin-right: 0px; margin-top: 2.5em; padding-left: 0px;&quot;&gt;&lt;span class=&quot;date-display-single&quot; style=&quot;background-attachment: initial; background-clip: initial; background-color: transparent; background-image: none; background-origin: initial; background-position: 0px 1px; background-repeat: no-repeat no-repeat; display: block; font-size: 11px; font: normal normal bold 15px/19px &#39;Trebuchet MS&#39;, Arial, sans-serif; margin-top: 2.5em; padding-left: 0px; text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/span&gt;&lt;/h3&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/6724131775588092005/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/6724131775588092005' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/6724131775588092005'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/6724131775588092005'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2011/11/uncertainty-quantification-at.html' title='Uncertainty Quantification at the Statistical and Applied Mathematical Sciences Institute'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-3483954260941936954</id><published>2011-10-14T21:49:00.000+02:00</published><updated>2011-10-14T21:49:37.664+02:00</updated><title type='text'>IFIP Working Conference on Uncertainty Quantification in Scientific Computing</title><content type='html'>I just came across the following presentations at the &lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides.html&quot;&gt;IFIP Working Conference on&amp;nbsp;Uncertainty Quantification in Scientific Computing&lt;/a&gt; held at the&amp;nbsp;Millennium Harvest House in&amp;nbsp;Boulder, on&amp;nbsp;August 1-4, 2011. Here are the talks and some abstracts:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 24pt;&quot;&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 14pt;&quot;&gt;Part&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 14pt;&quot;&gt;&amp;nbsp;I: Uncertainty Quantification Need: Risk, Policy, and Decision Making&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 12pt;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Keynote Address&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Pascual.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;Uncertainties in Using Genomic Information to Make Regulatory Decisions&lt;/a&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Pasky Pascual, Environmental Protection Agency, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; 
margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;In 2007, the U.S. National Academy of Sciences issued a report, &quot;Toxicity Testing in the 21st Century: a Vision and a Strategy,&quot; which proposed a vision and a roadmap for toxicology by advocating the use of systems-oriented, data-driven predictive models to explain how toxic chemicals impact human health and the environment. The report noted the limitations of whole animal tests that have become the standard basis for risk assessments at the U.S. Environmental Protection Agency. That same year, in response to the recall of the pain-killing drug Vioxx, Congress passed the Food and Drug Administration Act (FDAA). Vioxx had been approved for release by the U.S. government, and only belatedly was it discovered that the drug increased the risk of heart disease. This presentation suggests that these two events anticipate the need to build on developments in genomics, cellular biology, bioinformatics and other fields to craft predictive models that provide the rationale for regulating risks to public health and the environment. 
It suggests that both are a step in the right direction, but that long-standing issues of uncertainty in scientific inference must be more widely appreciated and understood, particularly within the regulatory system, if society hopes to capitalize on these scientific advances.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 12pt;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Cunningham.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Considerations of Uncertainties in Regulatory Decision Making&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Mark Cunningham, Nuclear Regulatory Commission, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;In early 2011, a task force was established within the Nuclear Regulatory Commission (NRC) to develop proposals for a long-term vision on using risk information in its regulatory processes. This task force, established by NRC&#39;s Chairman Jaczko, is being led by Commissioner Apostolakis, and has a charter to &quot;develop a strategic vision and options for adopting a more comprehensive and holistic risk-informed, performance-based regulatory approach for reactors, materials, waste, fuel cycle, and transportation that would continue to ensure the safe and secure use of nuclear material.&quot; This presentation will discuss some of the issues being considered by the task force in the context of how to manage the uncertainties associated with unlikely but potentially high consequence accident scenarios.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Pasanini.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;An Industrial Viewpoint on Uncertainty Quantification in Simulation: Stakes, Methods, Tools, Examples&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span 
style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Alberto Pasanisi, Electricité de France, France&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Simulation is nowadays a major tool in industrial R&amp;amp;D and engineering studies. In industrial practice, in both design and operating stages, the behavior of a complex system is described and forecasted by a computer model, which is, most of the time, deterministic. Yet, engineers coping with quantitative predictions using deterministic models actually deal with several sources of uncertainty affecting the inputs (and eventually the model itself) which are transferred to the outputs, i.e. the outcomes of the study. Uncertainty quantification in simulation has gained increasing importance in recent years and has now become a common practice in several industrial contexts. In this talk we will give industrial feedback and a viewpoint on this question. After a reminder of the main stakes related to uncertainty quantification and probabilistic computing, we will particularly insist on the specific methodology and software tools which have been developed for dealing with this problem. 
Several examples, concerning different physical frameworks, different initial questions, and different mathematical tools, will complete this talk.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Living with Uncertainty&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Patrick Gaffney, Bergen Software Services International, Norway&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in; text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;This talk describes 12 years of experience in developing simulation software for automotive companies. By building software from scratch, using boundary integral methods and other techniques, it has been possible to tailor the software to address specific issues that arise in painting processes applied to vehicles and to provide engineers with results for real-time optimization and manufacturing analysis. The talk will focus on one particular simulator for predicting electrocoat deposition on a vehicle and will address the topics of verification, validation, and uncertainty quantification as they relate to the development and use of the simulator in operational situations. The general theme throughout the talk is the author&#39;s belief in an almost total disconnection between engineers and the requirements of computational scientists. This belief is quite scary, and was certainly unexpected when starting the work 12 years ago. However, through several examples, the talk demonstrates the problems in attempting to extract from engineers the high-quality input required to produce accurate simulation results. 
The title provides the focus and the talk describes how living under the shadow of uncertainty has made us more innovative and more resourceful in solving problems that we never really expected to encounter when we started on this journey in 1999.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Helton.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Uncertainty and Sensitivity Analysis: From Regulatory Requirements to Conceptual Structure and Computational Implementation&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Jon Helton, Sandia National Laboratories, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;The importance of an appropriate treatment of uncertainty in an analysis of a complex system is now almost universally recognized. As a consequence, requirements for complex systems (e.g., nuclear power plants, radioactive waste disposal facilities, nuclear weapons) now typically call for some form of uncertainty analysis. However, these requirements are usually expressed at a high level and lack the detail needed to unambiguously define the intent, structure and outcomes of an analysis that provides a meaningful representation of the effects and implications of uncertainty. Consequently, it is necessary for the individuals performing an analysis to show compliance with a set of requirements to define a conceptual structure for the analysis that (i) is consistent with the intent of the requirements and (ii) also provides the basis for a meaningful uncertainty and sensitivity analysis. In many, if not most, analysis situations, a core consideration is maintaining an appropriate distinction between aleatory uncertainty (i.e., inherent randomness in possible future behaviors of the system under study) and epistemic uncertainty (i.e., lack of knowledge with respect to the appropriate values to use for quantities that have fixed but poorly known values in the context of the particular study being performed). 
Conceptually, this leads to an analysis involving three basic entities: a probability space (&lt;/span&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;A&lt;/span&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;,&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;S&lt;/span&gt;&lt;sub&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;A&lt;/span&gt;&lt;/sub&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;, p&lt;/span&gt;&lt;sub&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;A&lt;/span&gt;&lt;/sub&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;) characterizing aleatory uncertainty, a probability space (&lt;/span&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;E&lt;/span&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;,&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;S&lt;/span&gt;&lt;sub&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;E&lt;/span&gt;&lt;/sub&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;, p&lt;/span&gt;&lt;sub&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;E&lt;/span&gt;&lt;/sub&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;) characterizing epistemic uncertainty, and a model that predicts system behavior (i.e., a function&amp;nbsp;&lt;i&gt;f&lt;/i&gt;(&lt;i&gt;t&lt;/i&gt;|&lt;b&gt;a&lt;/b&gt;,&lt;b&gt;e&lt;/b&gt;) that defines system behavior at 
time&amp;nbsp;&lt;i&gt;t&lt;/i&gt;&amp;nbsp;conditional on elements&amp;nbsp;&lt;b&gt;a&lt;/b&gt;&amp;nbsp;and&amp;nbsp;&lt;b&gt;e&lt;/b&gt;&amp;nbsp;of the sample spaces&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;A&lt;/span&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;&amp;nbsp;and&amp;nbsp;&lt;/span&gt;&lt;span style=&quot;font-family: Mathematica1; font-size: 9pt; line-height: 13px;&quot;&gt;E&lt;/span&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;&amp;nbsp;for aleatory and epistemic uncertainty). In turn, this conceptual structure leads to an analysis in which (i) uncertainty analysis results are defined by integrals involving the function&amp;nbsp;&lt;i&gt;f&lt;/i&gt;(&lt;i&gt;t&lt;/i&gt;|&lt;b&gt;a&lt;/b&gt;,&lt;b&gt;e&lt;/b&gt;) and the two indicated probability spaces and (ii) sensitivity analysis results are defined by the relationships between epistemically uncertain analysis inputs (i.e., elements&amp;nbsp;&lt;b&gt;e&lt;/b&gt;&lt;i&gt;&lt;sub&gt;j&lt;/sub&gt;&lt;/i&gt;&amp;nbsp;of&amp;nbsp;&lt;b&gt;e&lt;/b&gt;) and analysis results defined by the function&amp;nbsp;&lt;i&gt;f&lt;/i&gt;(&lt;i&gt;t&lt;/i&gt;|&lt;b&gt;a&lt;/b&gt;,&lt;b&gt;e&lt;/b&gt;) and also by various integrals of this function. Computationally, this leads to an analysis in which (i) high-dimensional integrals must be evaluated to obtain uncertainty analysis results and (ii) mappings between high-dimensional spaces must be generated and explored to obtain sensitivity analysis results. 
The preceding ideas and concepts are illustrated with an analysis carried out in support of a license application for the proposed repository for high-level radioactive waste at Yucca Mountain, Nevada.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 12pt;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Interpreting Regional Climate Predictions&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Doug Nychka, National Center for Atmospheric Research, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;As attention shifts from broad global summaries of climate change to more specific regional impacts there is a need for statistics to quantify the uncertainty in regional projections. This talk will provide an overview on interpreting regional climate experiments (physically based simulations based on coupled global and regional climate models) using statistical methods to manage the discrepancies among models, their internal variability, regridding errors, model biases and other factors. The extensive simulations being produced in the North American Regional Climate Change and Assessment Program (NARCCAP) provide a context for our statistical approaches. An emerging principle is adapting analysis of variance decompositions to test for equality of mean fields, to quantify the variability due to different components in a numerical experiment and to identify the departures from observed climate fields.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Oberkampf.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Weaknesses and Failures of Risk Assessment&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 
0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;William Oberkampf, Sandia National Laboratories, US (retired)&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Within the technical community, it is instinctive to conduct uncertainty quantification analyses and risk assessments for use in risk-informed decision making. The value and reliability of some formal assessments, however, have been sharply criticized not only by well-known scientists, but also by the public after high-visibility failures such as the loss of two Space Shuttles, major damage at the Three-Mile Island nuclear power plant, and the disaster at the Fukushima plant. The realities of these failures, and many others, belie the predicted probabilities of failure for these systems and the credibility of risk assessments in general. The uncertainty quantification and risk assessment communities can attempt to defend and make excuses for notoriously poor (or misused) analyses, or we can learn how to improve technical aspects of analyses and to develop procedures to help guard against fraudulent analyses. This talk will take the latter route by first examining divergent goals of risk analyses; neglected sources of uncertainty in modeling the hazards or initiating events, external influences on the system of interest, and the system itself; and the importance of mathematical representations of uncertainties and their dependencies. We will also argue that risk analyses are not simply mathematical activities, but they are also human endeavors that are susceptible to a wide range of human weaknesses. 
As a result, we discuss how analyses can be distorted and/or biased by analysts and sponsors of the analysis, and how results of analyses can be miscommunicated and misinterpreted, either unintentionally or deliberately.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Panel Discussion:&lt;/span&gt;&lt;/b&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;&amp;nbsp;UQ and Decision Making&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Mac Hyman, Tulane University, US (Moderator)&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Sandy Landsberg, Department of Energy, U&lt;/span&gt;&lt;/i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;S [&lt;/span&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Landsberg.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Opening Remarks&lt;/span&gt;&lt;/a&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;]&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; 
style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Larry Winter, University of Arizona, US&amp;nbsp;&lt;/span&gt;&lt;/i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;[&lt;/span&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Winter.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Opening Remarks&lt;/span&gt;&lt;/a&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;]&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Charles Romine, NIST,&amp;nbsp;&lt;/span&gt;&lt;/i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;US [&lt;/span&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Romine.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Opening Remarks&lt;/span&gt;&lt;/a&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;]&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div align=&quot;center&quot; class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in; text-align: center;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;________________________________________________________________________________________________&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 24pt;&quot;&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 14pt;&quot;&gt;Part&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 14pt;&quot;&gt;&amp;nbsp;II: Uncertainty Quantification Theory&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 12pt;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Keynote Address&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Goldstein.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Bayesian Analysis for Complex Physical Systems Modeled by Computer Simulators: Current Status and Future Challenges&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; 
style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Michael Goldstein, Durham University, UK&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Most large and complex physical systems are studied by mathematical models, implemented as high dimensional computer simulators. While all such cases differ in physical description, each analysis of a physical system based on a computer simulator involves the same underlying sources of uncertainty. These are: condition uncertainty (unknown initial conditions, boundary conditions and forcing functions), parametric uncertainty (as the appropriate choices for the model parameters are not known), functional uncertainty (as models are typically expensive to evaluate for any choice of parameters), structural uncertainty (as the model is different from the physical system), measurement uncertainty (in the data used to calibrate the model), stochastic uncertainty (arising from intrinsic randomness in the system equations), solution uncertainty (as solutions to the system equations can only be assessed approximately) and multi-model uncertainty (as there often is a family of models, at different levels of resolution, possibly with different representations of the underlying physics). There is a growing field of study which aims to quantify and synthesize all of the uncertainties involved in relating models to physical systems, within the framework of Bayesian statistics, and to use the resultant uncertainty specification to address problems of forecasting and decision making based on the application of these methods. Examples of areas in which such methodology is being applied include asset management for oil reservoirs, galaxy modeling, and rapid climate change. 
In this talk, we shall give an overview of the current status and future challenges in this emerging methodology, illustrating with examples of current areas of application.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Hatton.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Scientific Computation and the Scientific Method: A Tentative Road Map for Convergence&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Les Hatton, Kingston University, UK&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;For the last couple of centuries, the scientific method whereby we have followed Karl Popper&#39;s model of endlessly seeking to refute new and existing discoveries, forcing them to submit to repeatability and detailed peer review of both the theory and the experimental methods employed to flush out insecure conclusions, has served us extremely well. Much progress has been made. For the last 40 years or so however, there has been an increasing reliance on computation in the pursuit of scientific discovery. Computation is an entirely different animal. Its repeatability has proved unreliable, we have been unable to eliminate defect or even to quantify its effects, and there has been a rash of unconstrained creativity making it very difficult to make systematic progress to align it with the philosophy of the scientific method. At the same time, computation has become the dominant partner in many scientific areas. This paper will address a number of issues. Through a series of very large experiments involving millions of lines of code in several languages along with an underpinning theory, it will put forward the viewpoint that defect is both inevitable and essentially a statistical phenomenon. In other words looking for purely technical computational solutions is unlikely to help much - there very likely is no silver bullet. Instead we must urgently promote the viewpoint that for any results which depend on computation, the computational method employed must be subject to the same scrutiny as has served us well in the years preceding computation. 
Baldly, that means that if the program source, the method of making and running the system, the test results and the data are not openly available, then it is not science. Even then we face an enormous challenge when digitally lubricated media can distort evidence to undermine the strongest of scientific cases.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Eldred.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Overview of Uncertainty Quantification Algorithm R&amp;amp;D in the DAKOTA Project&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Michael Eldred, Sandia National Laboratories, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
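As a toy aside on one of the sampling methods this DAKOTA overview surveys, here is a minimal Latin hypercube sampler in NumPy (the function name and structure are mine, not DAKOTA's):

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """One LHS draw on the unit hypercube: each dimension's range is
    split into n_samples equal strata, exactly one point falls in each
    stratum, and the strata are paired randomly across dimensions."""
    rng = np.random.default_rng(rng)
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        # jittered stratum positions, then an independent shuffle per dimension
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        samples[:, d] = rng.permutation(strata)
    return samples

points = latin_hypercube(10, 3, rng=0)
# every stratum [k/10, (k+1)/10) contains exactly one point per dimension
```

Each of the n strata in every dimension receives exactly one point, which typically covers the input space far more evenly than plain random sampling.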
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Uncertainty quantification (UQ) is a key enabling technology for assessing the predictive accuracy of computational models and for enabling risk-informed decision making. This presentation will provide an overview of algorithms for UQ, including sampling methods such as Latin Hypercube sampling, local and global reliability methods such as AMV2+ and EGRA, stochastic expansion methods including polynomial chaos and stochastic collocation, and epistemic methods such as interval-valued probability, second-order probability, and evidence theory. Strengths and weaknesses of these different algorithms will be summarized and example applications will be described. Time permitting, I will also provide a short overview of DAKOTA, an open source software toolkit that provides a delivery vehicle for much of the UQ research at the DOE defense laboratories.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;A Compressive Sampling Approach to Uncertainty Propagation&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Alireza Doostan, University of Colorado&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; 
style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
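To give a flavor of the sparse recovery behind this compressive sampling approach, here is a hedged toy in NumPy: it recovers a 3-sparse coefficient vector from 32 random samples of a 64-dimensional problem, using orthogonal matching pursuit as a greedy stand-in for the l1 minimization of the talk (the PDE setting is not reproduced; all numbers are invented):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x
    from noiseless measurements y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                      # 64 unknowns from only 32 samples
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, k)
```

The point of the exercise: far fewer samples than unknowns suffice when the solution is sparse, which is exactly the structure the talk exploits in the stochastic PDE setting.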
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Uncertainty quantification (UQ) is an inevitable part of any predictive modeling practice. Intrinsic variabilities and lack of knowledge about system parameters or governing physical models often considerably affect quantities of interest and decision-making processes. Efficient representation and propagation of such uncertainties through complex PDE systems are subjects of growing interest, especially for situations where a large number of uncertain sources are present. One major difficulty in UQ of such systems is the development of non-intrusive approaches in which deterministic codes are used in a black box fashion, and at the same time, solution structures are exploited to reduce the number of deterministic runs. Here we extend ideas from compressive sampling techniques to approximate solutions of PDEs with stochastic inputs using direct, i.e., non-adapted, sampling of solutions. This sampling can be done by using any legacy code for the deterministic problem as a black box. The method converges in probability (with probabilistic error bounds) as a consequence of sparsity of solutions and a concentration of measure phenomenon on the empirical correlation between samples. We show that the method is well suited for PDEs with high-dimensional stochastic inputs. This is joint work with Prof. 
Houman Owhadi from California Institute of Technology.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 12pt;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Keynote Address&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Ferson.ppt&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Verified Computation with Probability Distributions and Uncertain Numbers&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Scott Ferson, Applied Biomathematics, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;br /&gt;
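As a two-line illustration of why naive interval propagation "explodes to triviality", here is the classic dependence problem (a toy of mine, not taken from the talk):

```python
# Naive interval arithmetic, just enough to show the dependence problem:
# treating the two occurrences of x in x - x as independent inflates the
# result from the exact answer {0} to [-1, 1].

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __sub__(self, other):
        # worst case over the endpoints of each operand
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

x = Interval(0.0, 1.0)
d = x - x   # mathematically 0, but the arithmetic cannot know that
            # both operands are the same quantity
# d.lo == -1.0 and d.hi == 1.0
```

Every repeated variable inflates the bounds this way, which is why long chains of naive interval operations tend toward uninformative results.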
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Interval analysis is often offered as the method for verified computation, but the pessimism in the old saw that &quot;interval analysis is the mathematics of the future, and always will be&quot; is perhaps justified by the impracticality of interval bounding as an approach to projecting uncertainty in real-world problems. Intervals cannot account for dependence among variables, so propagations commonly explode to triviality. Likewise, the dream of a workable &#39;probabilistic arithmetic&#39;, which has been imagined by many people, seems similarly unachievable. Even in sophisticated applications such as nuclear power plant risk analyses, whenever probability theory has been used to make calculations, analysts have routinely assumed (i) probabilities and probability distributions can be precisely specified, (ii) most or all variables are independent of one another, and (iii) model structure is known without error. For the most part, these assumptions have been made for the sake of mathematical convenience, rather than with any empirical justification. And, until recently, these or essentially similar assumptions were pretty much necessary in order to get any answer at all. New methods now allow us to compute bounds on estimates of probabilities and probability distributions that are guaranteed to be correct even when one or more of the assumptions is relaxed or removed. In many cases, the results obtained are the best possible bounds, which means that tightening them would require additional empirical information. 
This talk will present an overview of probability bounds analysis, as a computationally practical implementation of imprecise probabilities that combines ideas from both interval analysis and probability theory to sidestep the limitations of each.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Matthies.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Parametric Uncertainty Computations with Tensor Product Representations&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Hermann Matthies, Technische Universität Braunschweig, Germany&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
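A hedged toy of the low-rank idea in this talk: a truncated SVD, the discrete analogue of a Karhunen-Loève expansion, compresses a smooth parametric field with very little loss (the example field and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# a parametric field: solution values on 100 grid points for 50 parameter draws
params = rng.uniform(0.5, 2.0, 50)
grid = np.linspace(0.0, 1.0, 100)
U_field = np.array([np.sin(np.pi * grid * p) for p in params]).T  # 100 x 50

# truncated SVD = discrete Karhunen-Loeve expansion of the field
U, s, Vt = np.linalg.svd(U_field, full_matrices=False)
r = 5   # keep only 5 modes: 100*5 + 5 + 5*50 numbers instead of 5000
approx = U[:, :r] * s[:r] @ Vt[:r, :]
rel_err = np.linalg.norm(U_field - approx) / np.linalg.norm(U_field)
```

The relative error of the rank-5 representation is tiny precisely because the field depends smoothly on its parameter, which is the regularity these tensor methods bank on.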
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Parametric versions of state equations of some complex system - the uncertainty quantification problem with the parameter as a random quantity is a special case of this general class - lead via association to a linear operator to analogues of covariance, its spectral decomposition, and the associated Karhunen-Loève expansion. This results in a generalized tensor representation. The parameter in question may be a number, a tuple of numbers - a finite dimensional vector or function, a stochastic process, or a random tensor field. Examples of stochastic problems, dynamic problems, and similar will be given to explain the concept. If possible, the tensor factorization may be cascaded, leading to tensors of higher degree. In numerical approximations this cascading tensor decomposition may be repeated on the discrete level, leading to very sparse representations of the high dimensional quantities involved in such parametric problems. This is achieved by choosing low-rank approximations, in effect an information compression. These representations also allow for very efficient computation. Updating of uncertainty for new information is an important part of uncertainty quantification. Formulated in terms of random variables instead of measures, the Bayesian update is a projection and allows the use of the tensor factorizations also in this case. 
This will be demonstrated on some examples.&lt;/span&gt;&lt;/div&gt;&lt;div align=&quot;center&quot; class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in; text-align: center;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;________________________________________________________________________________________________&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 24pt;&quot;&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 14pt;&quot;&gt;Part III: Uncertainty Quantification Tools&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 12pt;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Keynote Address&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Kahan.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Desperately Needed Remedies for the Undebugability of Large-scale Floating-point Computations in Science and Engineering&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; 
margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;William Kahan, University of California at Berkeley, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
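In the spirit of this keynote, a small self-contained example of how roundoff silently corrupts a naive accumulation, and how a compensated summation (the Neumaier variant of Kahan's algorithm) repairs it:

```python
def neumaier_sum(xs):
    """Compensated summation: carry the roundoff of each addition in a
    separate correction term instead of discarding it."""
    total, comp = 0.0, 0.0
    for x in xs:
        t = total + x
        if abs(total) >= abs(x):
            comp += (total - t) + x   # low-order bits of x were lost
        else:
            comp += (x - t) + total   # low-order bits of total were lost
        total = t
    return total + comp

vals = [1e16, 1.0, -1e16] * 1000   # true sum is exactly 1000.0
naive = sum(vals)                  # the 1.0s vanish into roundoff: 0.0
compensated = neumaier_sum(vals)   # recovers 1000.0 exactly
```

The naive result is not merely inaccurate, it is silently and totally wrong, which is exactly the kind of anomaly the diagnostic tools discussed here are meant to flag.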
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;If suspicions about the accuracy of a computed result arise, how long does it take to either allay or justify them? Often diagnosis has taken longer than the computing platform&#39;s service life. Software tools to speed up diagnosis by at least an order of magnitude could be provided but almost no scientists and engineers know to ask for them, though almost all these tools have existed, albeit not all together in the same place at the same time. These tools would cope with vulnerabilities peculiar to Floating-Point, namely roundoff and arithmetic exceptions. But who would pay to develop the suite of these tools? Nobody, unless he suspects that the incidence of misleadingly anomalous floating-point results rather exceeds what is generally believed. 
And there is ample evidence to suspect that.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; page-break-after: avoid; text-indent: -0.5in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Accurate Prediction of Complex Computer Codes via Adaptive Designs&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;William Welch, University of British Columbia, Canada&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;There are many useful classes of design for an initial computer experiment: Latin hypercubes, orthogonal array Latin hypercubes, maximin-distance designs, etc. Often, the initial experiment has about n = 10d runs, where d is the dimensionality of the input space (Loeppky, Sacks, and Welch, &quot;Choosing the Sample Size of a Computer Experiment,&quot; Technometrics 2009). Once the computer model has been run according to the design, a first step is usually to build a computationally inexpensive statistical surrogate for the computer model, often via a Gaussian Process / Random Function statistical model. But what if the analysis of the data from this initial design provides poor prediction accuracy? Poor accuracy implies the underlying input-output function is complex in some sense. If the complexity is restricted to a few of the inputs or to local subregions of the parameter space, there may be an opportunity to use the initial analysis to guide further experimentation. Subsequent runs of the code should take account of what has been learned. Similarly, analysis should be adaptive. This talk will demonstrate strategies for experimenting sequentially. Difficult functions, including real computer codes, will be used to illustrate. 
The advantages will be assessed in terms of empirical prediction accuracy and theoretical measures.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; page-break-after: avoid; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Challenor.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Using Emulators to Estimate Uncertainty in Complex Models&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Peter Challenor, National Oceanography Centre, UK&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
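For readers new to emulators, here is a bare-bones Gaussian-process emulator in NumPy, a sketch of mine that falls far short of the MUCM toolkit's machinery: it conditions an RBF-kernel GP on a handful of "simulator runs" and interpolates them:

```python
import numpy as np

def gp_emulator(X, y, Xnew, length=0.2, nugget=1e-10):
    """Minimal GP emulator: condition an RBF-kernel Gaussian process on
    simulator runs (X, y), then predict the posterior mean at Xnew."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + nugget * np.eye(len(X))   # tiny nugget for stability
    alpha = np.linalg.solve(K, y)
    return k(Xnew, X) @ alpha

# pretend these 8 points are expensive simulator runs of f(x) = sin(2*pi*x)
X = np.linspace(0.0, 1.0, 8)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
at_design = gp_emulator(X, y, X)   # the emulator interpolates its training runs
```

Once fitted, the emulator is essentially free to evaluate, so Monte Carlo uncertainty and sensitivity calculations that would need thousands of simulator runs can be done on the surrogate instead.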
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;The Managing Uncertainty in Complex Models project has been developing methods for estimating uncertainty in complex models using emulators. Emulators are statistical descriptions of our beliefs about the models (or simulators). They can also be thought of as interpolators of simulator outputs between previous runs. Because they are quick to run, emulators can be used to carry out calculations that would otherwise require large numbers of simulator runs, for example Monte Carlo uncertainty calculations. Both Gaussian and Bayes Linear emulators will be explained and examples given. One of the outputs of the MUCM project is the MUCM toolkit, an on-line &quot;recipe book&quot; for emulator based methods. Using the toolkit as our basis we will illustrate the breadth of applications that can be addressed by emulator methodology and detail some of the methodology. 
We will cover sensitivity and uncertainty analysis and describe in less detail other aspects such as how emulators can also be used to calibrate complex computer simulators and how they can be modified for use with stochastic simulators.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 49.7pt; margin-right: 0in; margin-top: 12pt; text-indent: -49.7pt;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Smith.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Measuring Uncertainty in Scientific Computations Using the Test Harness&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Brian Smith, Numerica 21 Inc., US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;The test harness TH is a tool developed by Numerica 21 to facilitate the testing and evaluation of scientific software during the development and maintenance phases of such software. This paper describes how the tool can be used to measure uncertainty in scientific computations. It confirms that the actual behavior of the code when subjected to changes, typically small, in the code input data reflects formal analysis of the problem&#39;s sensitivity to its input. Although motivated by studying small changes in the input data, the test harness can measure the impact of any changes, including those that go beyond the formal analysis.&lt;/span&gt;&lt;/div&gt;&lt;div align=&quot;center&quot; class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in; text-align: center;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;________________________________________________________________________________________________&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 24pt;&quot;&gt;&lt;b&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 14pt;&quot;&gt;Part IV: Uncertainty Quantification Practice&lt;/span&gt;&lt;/i&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; 
margin-right: 0in; margin-top: 12pt;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Keynote Address&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Cox.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Numerical Aspects of Evaluating Uncertainty in Measurement&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Maurice Cox, National Physical Laboratory, UK&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
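A minimal sketch of the Monte Carlo propagation of distributions this keynote examines (GUM-Supplement-1 style; the measurement model V/I and every number below are invented for illustration):

```python
import numpy as np

# push input PDFs through the measurement model, then read the estimate,
# standard uncertainty, and a coverage interval off the empirical output PDF
rng = np.random.default_rng(42)
M = 100_000
V = rng.normal(5.0, 0.02, M)       # volts, standard uncertainty 0.02 V
I = rng.normal(0.1, 0.001, M)      # amperes, standard uncertainty 1 mA
R = V / I                           # measurand: resistance, nominally 50 ohm

estimate = R.mean()
u = R.std(ddof=1)                   # standard uncertainty of the result
lo, hi = np.percentile(R, [2.5, 97.5])   # 95 % coverage interval
```

Unlike the first-order law of propagation of uncertainty, this approach makes no linearity or normality assumption about the output, at the numerical cost (random-number quality, convergence rate, adaptive stopping) that the abstract below enumerates.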
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;We examine aspects of quantifying the numerical accuracy in results from a measurement uncertainty computation in terms of the inputs to that computation. The primary output from such a computation is often an approximation to the PDF (probability density function) for the measurand (the quantity intended to be measured), which may be a scalar or vector quantity. From this PDF all results of interest can be derived. The following aspects are considered:&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormalCxSpMiddle&quot; style=&quot;margin-left: 1in; text-indent: -0.25in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;1.&lt;span style=&quot;font: normal normal normal 7pt/normal &#39;Times New Roman&#39;;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;The numerical quality of the PDF obtained by using Monte Carlo or Markov chain Monte Carlo methods in terms of (a) the random number generators used, (b) the (stochastic) convergence rate, and its possible acceleration, and (c) adaptive schemes to achieve a (nominal) prescribed accuracy.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormalCxSpMiddle&quot; style=&quot;margin-left: 1in; text-indent: -0.25in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;2.&lt;span style=&quot;font: normal normal normal 7pt/normal &#39;Times New Roman&#39;;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span 
style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;The production of a smooth and possibly compact representation of the approximate PDF so obtained, for purposes such as when the PDF is used as input to a further uncertainty evaluation, or when visualization is required.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormalCxSpMiddle&quot; style=&quot;margin-left: 1in; text-indent: -0.25in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;3.&lt;span style=&quot;font: normal normal normal 7pt/normal &#39;Times New Roman&#39;;&quot;&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;The sensitivities of the numerical results with respect to the inputs to the computation.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt;&quot;&gt;We speculate as to future requirements in the area and how they might be addressed.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Possolo.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Model-based Interpolation, Approximation, and Prediction&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 
0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Antonio Possolo, National Institute of Standards and Technology, US&lt;/span&gt;&lt;/i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 1in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Model-based interpolation, approximation, and prediction are contingent on the choice of model: since multiple alternative models typically can reasonably be entertained for each of these tasks, and the results are correspondingly varied, this often is a major source of uncertainty. Several statistical methods are illustrated that can be used to assess this uncertainty component: when interpolating concentrations of greenhouse gases over Indianapolis, predicting the viral load in a patient infected with influenza A, and approximating the solution of the kinetic equations that model the progression of the infection.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Glimm.pptx&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Uncertainty Quantification for Turbulent Reacting Flows&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;James Glimm, State University of New York at Stony Brook, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 
normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Uncertainty Quantification (UQ) for fluid mixing depends on the length scales of observation: macro, meso and micro, each with its own UQ requirements. New results are presented for each. For the micro observables, recent theories argue that convergence of numerical simulations in the Large Eddy Simulation (LES) regime should be governed by probability density functions (pdfs, or in the present context, Young measures) which satisfy the Euler equation. From a single deterministic simulation in the LES, or inertial regime, we extract a pdf by binning results from a space-time neighborhood of the convergence point. The binned state values constitute a discrete set of solution values which define an approximate pdf. Such a step coarsens the resolution, but not more than standard LES simulation methods, which typically employ an extended spatial filter in the definition of the filtered equations and associated subgrid scale (SGS) terms. The convergence of the resulting pdfs is assessed by standard function space metrics applied to the associated probability distribution function, i.e. the indefinite integral of the pdf. Such a metric is needed to reduce noise inherent in the pdf itself. 
V&amp;amp;V/UQ results for mixing and reacting flows are presented to support this point of view.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 12pt;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Visualization of Error and Uncertainty&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Chris Johnson, University of Utah, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;As the British statesman and Nobel Laureate in Literature Winston Churchill said, &quot;True genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information.&quot; Churchill is echoed by Nobel Prize-winning physicist Richard Feynman: &quot;What is not surrounded by uncertainty cannot be the truth.&quot; Yet, with few exceptions, visualization research has ignored the visual representation of errors and uncertainty for three-dimensional (and higher) visualizations. In this presentation, I will give an overview of the current state-of-the-art in uncertainty visualization and discuss future challenges.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Sandu.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Uncertainty Reduction in Atmospheric Composition Models by Chemical Data Assimilation&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Adrian&lt;/span&gt;&lt;/i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;&amp;nbsp;&lt;i&gt;Sandu, 
Virginia Tech, US&lt;/i&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Data assimilation reduces the uncertainty with which the state of a physical system is known by combining imperfect model results with sparse and noisy observations of reality. Chemical data assimilation refers to the use of measurements of trace gases and particulates to improve our understanding of the atmospheric composition. Two families of methods are widely used in data assimilation: the four dimensional variational (4D-Var) approach, and the ensemble Kalman filter (EnKF) approach. In the four dimensional variational (4D-Var) framework data assimilation is formulated as an optimization problem, which is solved using gradient based methods to obtain maximum likelihood estimates of the uncertain state and parameters. A central issue in 4D-Var data assimilation is the construction of the adjoint model. Kalman filters are rooted in statistical estimation theory, and seek to obtain moments of the posterior distribution that quantifies the reduced uncertainty after measurements have been considered. A central issue in Kalman filter data assimilation is to manage the size of covariance matrices by employing various computationally feasible approximations. In this talk we review computational aspects and tools that are important for chemical data assimilation. They include the construction, analysis, and efficient implementation of discrete adjoint models in 4D-Var assimilation, optimization aspects, and the construction of background covariance matrices. State-of-the-art solvers for large scale PDEs adaptively refine the time step and the mesh in order to control the numerical errors. 
We discuss newly developed algorithms for variational data assimilation with adaptive models. Particular aspects of the use of ensemble Kalman filters in chemical data assimilation are highlighted. New hybrid data assimilation ideas that combine the relative strengths the variational and ensemble approaches are reviewed. Examples of chemical data assimilation studies with real data and widely used chemical transport models are given.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; page-break-after: avoid; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Heroux.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Emerging Architectures and UQ: Implications and Opportunities&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; page-break-after: avoid; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Michael Heroux, Sandia National Laboratory, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; page-break-after: avoid; text-indent: -0.5in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Computer architecture is changing dramatically. Most noticeable is the introduction of multicore and GPU (collectively, manycore) processors. These manycore architectures promise the availability of a terascale laptop, petascale deskside and exascale compute center in the next few years. At the same time, manycore nodes will force a universal refactoring of code in order to realize this performance potential. Furthermore, the sheer number of components in very high-end systems increases the chance that user applications will experience frequent system faults in the form of soft errors. In this presentation we give an overview of architecture trends and their potential impact on scientific computing in general and uncertainty quantification (UQ) computations specifically. 
We will also discuss growing opportunities for UQ that are enabled by increasing computing capabilities, and new opportunities to help address the anticipated increase in soft errors that must be addressed at the application level.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Muhanna.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Interval Based Finite Elements for Uncertainty Quantification in Engineering Mechanics&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Rafi Muhanna, Georgia Tech Savannah, US&lt;/span&gt;&lt;/i&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;Recent scientific and engineering advances have started to recognize the need for defining multiple types of uncertainty. The behavior of a mathematical model of any system is determined by the values of the model&#39;s parameters. These parameters, in turn, are determined by available information which may range from scarce to comprehensive. When data is scarce, analysts fall back to deterministic analysis. On the other hand, when more data is available but insufficient to distinguish between candidate probability functions, analysts supplement the available statistical data by judgmental information. In such a case, we find ourselves in the extreme either/or situation: a deterministic setting which does not reflect parameter variability, or a full probabilistic analysis conditional on the validity of the probability models describing the uncertainties. The above discussion illustrates the challenge engineering analysis and design faces: how to avoid situations that do not reflect the actual state of knowledge of the considered systems and are based on unjustified assumptions. Probability Bounds (PB) methods offer a resolution to this problem as they are sufficiently flexible to quantify uncertainty without assuming specific probability density functions (PDFs) for system parameters, yet they can incorporate this structure into the analysis when it is available. Such an approach ensures that the actual state of knowledge on the system parameters is correctly reflected in the analysis and design; hence, design reliability and robustness are achieved. Probability bounds analysis is built on interval analysis as its foundation. 
This talk will address the problem of overestimation of enclosures for target and derived quantities, a critical challenge in the formulation of Interval Finite Element Methods (IFEM). A new formulation for Interval Finite Element Methods will be introduced where both primary and derived quantities of interest are included in the original uncertain system as primary variables.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 12pt; page-break-after: avoid; text-indent: -0.5in;&quot;&gt;&lt;a href=&quot;http://math.nist.gov/IFIP-UQSC-2011/slides/Enright.pdf&quot; style=&quot;color: blue; text-decoration: underline;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Reducing the Uncertainty When Approximating the Solution of ODEs&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;Wayne Enright, University of Toronto, Canada&lt;/span&gt;&lt;/i&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 12pt;&quot;&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;span style=&quot;font-family: Cambria, serif; font-size: 9pt; line-height: 13px;&quot;&gt;In the numerical solution of ODEs, it is now possible to develop efficient techniques that compute approximate solutions that are more convenient to interpret and understand when used by practitioners who are interested in accurate and reliable simulations of their mathematical models. We have developed a class of ODE methods and associated software tools that will deliver a piecewise polynomial as the approximate solution and facilitate the investigation of various aspects of the problem that are often of as much interest as the approximate solution itself. These methods are designed so that the resulting piecewise polynomial will satisfy a perturbed ODE with an associated defect (or residual) that is&amp;nbsp;&lt;i&gt;reliably&lt;/i&gt;&amp;nbsp;controlled. We will introduce measures that can be used to quantify the reliability of an approximate solution and how one can implement methods that, at some extra cost, can produce very reliable approximate solutions. We show how the ODE methods we have developed can be the basis for implementing effective tools for visualizing an approximate solution, and for performing key tasks such as sensitivity analysis, global error estimation and investigation of problems which are parameter-dependent. Software implementing this approach will be described for systems of IVPs, BVPs, DDEs, and VIEs. 
Some numerical results will be presented for mathematical models arising in application areas such as computational medicine or the modeling of predator-prey systems in ecology.&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: normal; margin-bottom: 0.0001pt; margin-left: 0.5in; margin-right: 0in; margin-top: 0in; text-indent: -0.5in;&quot;&gt;&lt;br /&gt;
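As a concrete (and entirely illustrative) rendering of the defect-control idea in Wayne Enright's abstract: a continuous approximate solution can be checked a posteriori by evaluating its residual against the ODE. The sketch below does not use Enright's purpose-built methods; it merely estimates the defect of the generic piecewise-polynomial interpolant returned by scipy's solve_ivp, on a logistic test problem of my own choosing:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative model ODE: logistic growth u' = u * (1 - u), u(0) = 0.1.
def f(t, u):
    return u * (1.0 - u)

# Integrate with dense output: solve_ivp then returns a piecewise-polynomial
# interpolant U(t) valid over the whole interval, not just at mesh points.
sol = solve_ivp(f, (0.0, 10.0), [0.1], dense_output=True, rtol=1e-8, atol=1e-10)

# Defect (residual) of the interpolant: d(t) = U'(t) - f(t, U(t)),
# with U'(t) estimated by a central difference applied to the interpolant.
ts = np.linspace(0.01, 9.99, 500)
h = 1e-5
U = sol.sol(ts)[0]
dU = (sol.sol(ts + h)[0] - sol.sol(ts - h)[0]) / (2.0 * h)
defect = np.abs(dU - f(ts, U))

# A small maximum defect means U(t) exactly solves a nearby (perturbed) ODE.
print("max defect over [0, 10]:", defect.max())
```

The point of the abstract is that purpose-built methods can control this defect reliably by construction; here we can only measure it after the fact.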
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot; style=&quot;font-family: Calibri, sans-serif; font-size: 11pt; line-height: 17px; margin-bottom: 10pt; margin-left: 0in; margin-right: 0in; margin-top: 0in;&quot;&gt;&lt;br /&gt;
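Maurice Cox's keynote above concerns the numerical quality of exactly this kind of computation: propagating input PDFs through a measurement model by Monte Carlo and summarizing the resulting PDF of the measurand. A minimal sketch, with an invented measurement model R = V/I and invented input uncertainties, and no adaptive accuracy scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical measurement model: resistance from voltage and current,
# R = V / I, with Gaussian input PDFs (all values invented for illustration).
M = 200_000
V = rng.normal(10.0, 0.05, M)   # volts
I = rng.normal(2.0, 0.01, M)    # amperes
R = V / I                       # Monte Carlo sample approximating the PDF of the measurand

# Summaries derived from the approximate PDF:
mean = R.mean()
u = R.std(ddof=1)                         # standard uncertainty
lo, hi = np.percentile(R, [2.5, 97.5])    # 95 % coverage interval

print(f"R = {mean:.4f} ohm, u(R) = {u:.4f}, 95% interval = [{lo:.4f}, {hi:.4f}]")
```

The numerical questions the keynote raises live precisely in the choices made here: the generator behind `default_rng`, the O(M^-1/2) stochastic convergence of the summaries, and how one would adapt M to a prescribed accuracy.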
&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/3483954260941936954/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/3483954260941936954' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/3483954260941936954'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/3483954260941936954'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2011/10/ifip-working-conference-on-uncertainty.html' title='IFIP Working Conference on Uncertainty Quantification in Scientific Computing'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-2144130310634721185</id><published>2010-12-15T10:00:00.001+01:00</published><updated>2010-12-15T10:01:04.225+01:00</updated><title type='text'>On the Difficulty of Price Modeling</title><content type='html'>I was recently looking for a clean example of a service or an item that could clearly show the difficulty of the pricing of said service or item. I just found one on &lt;a href=&quot;http://danariely.com/&quot;&gt;Dan Ariely&lt;/a&gt;&#39;s blog: &lt;a href=&quot;http://danariely.com/2010/12/15/locksmiths/&quot;&gt;Locksmiths&lt;/a&gt;. Here&lt;a href=&quot;http://www.youtube.com/watch?v=x8baBvOk0ng&amp;amp;feature=player_embedded&quot;&gt; is the video&lt;/a&gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;center&gt;&lt;br /&gt;
&lt;object height=&quot;385&quot; width=&quot;440&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://www.youtube.com/v/x8baBvOk0ng?fs=1&amp;amp;hl=en_US&quot;&gt;&lt;/param&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot;&gt;&lt;/param&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;/param&gt;&lt;embed src=&quot;http://www.youtube.com/v/x8baBvOk0ng?fs=1&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot; allowscriptaccess=&quot;always&quot; allowfullscreen=&quot;true&quot; width=&quot;440&quot; height=&quot;385&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;/center&gt;&lt;br /&gt;
&lt;br /&gt;
Do you have other examples?</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/2144130310634721185/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/2144130310634721185' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/2144130310634721185'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/2144130310634721185'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/12/on-difficulty-of-price-modeling.html' title='On the Difficulty of Price Modeling'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-8334220683558011113</id><published>2010-12-07T00:01:00.002+01:00</published><updated>2010-12-07T10:51:52.424+01:00</updated><title type='text'>Human Behavior Modeling Failures</title><content type='html'>&lt;div style=&quot;text-align: justify;&quot;&gt;This blog entry entitled &lt;a href=&quot;http://freakonometrics.blog.free.fr/index.php?post/2010/11/29/Millenium-bridge%2C-and-risk-management&quot;&gt;Millenium bridge, endogeneity and risk management&lt;/a&gt; features two examples of faulty modeling in bridges and value-at-risk models (VaR) whose roots lie in a failure to take human behavior into account. 
One cannot help but wonder whether people were dealing with &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/2010/11/modeler-known-unknowns-and-unknown.html&quot;&gt;unknown unknowns&lt;/a&gt; when the initial modeling was performed. In a different direction, when one deals with models and human behavior, there is always the possibility for subgroups to game the system and make the intended modeling worthless. Here is an example related to &lt;a href=&quot;http://simplificationadministrative.org/2010/11/28/la-bibliometrie-deja-cassee/#english&quot;&gt;ranking academics and researchers.&lt;/a&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/8334220683558011113/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/8334220683558011113' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/8334220683558011113'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/8334220683558011113'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/12/human-behavior-modeling-failures.html' title='Human Behavior Modeling Failures'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>4</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-7259994828770781150</id><published>2010-12-06T14:52:00.000+01:00</published><updated>2010-12-06T14:52:21.019+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="RMMExample"/><title type='text'>RMM Example #3: Nuclear Proliferation 
and Terrorism Risk Assessment.</title><content type='html'>&lt;div style=&quot;text-align: justify;&quot;&gt;In the U.S., the nuclear fuel cycle is termed &#39;once-through&#39; as nuclear fuel passes through a nuclear power plant only once before being discarded. The debate over whether the country should reprocess some of these materials has been ongoing since the 1950s. The problem is extremely complex and has many stakeholders at the table. A subset of the discussions includes the fact that if the U.S. were ever to perform some reprocessing in its civilian fuel cycle, it would provide grounds for other countries to do the same. For technical reasons, reprocessing is considered an enabler of proliferation because, as soon as you allow your nuclear fuel to be reprocessed, you also open the door to the ability to extract material for &quot;other purposes&quot;. Most countries are signatories of the Non-Proliferation Treaty (NPT) and, as you all know, the &lt;a href=&quot;http://www.iaea.org/&quot;&gt;IAEA&lt;/a&gt; is in charge of &lt;a href=&quot;http://www.iaea.org/Publications/Magazines/Bulletin/Bull104/10403500308.pdf&quot;&gt;verifying compliance&lt;/a&gt; (all countries that are signatories are subject to yearly visits by IAEA staff). And so, a major technical effort in any type of Research and Development at the U.S. Department of Energy (and in other countries that comply with the NPT) revolves around bringing technical solutions to some proliferation issues (as other proliferation issues are political in nature). As part of the recent &lt;a href=&quot;http://events.energetics.com/NEETWorkshop2010/&quot;&gt;DOE Nuclear Energy Enabling Technologies Program Workshop&lt;/a&gt;, there was an interesting subsection of the meeting dedicated to Proliferation and Terrorism Risk Assessment. 
I am not a specialist in this particular area, so I will simply feature the material presented there for illustration purposes, as it represents issues commonly found in &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/search/label/RMMExample&quot;&gt;difficult-to-solve RMM examples&lt;/a&gt;. From the &lt;a href=&quot;http://events.energetics.com/NEETWorkshop2010/pdfs/Proliferation.pdf&quot;&gt;presentation&lt;/a&gt; introducing the assessment, here were the Goals and Objectives of this sub-meeting:&lt;/div&gt;&lt;blockquote&gt;Solicit views of a broad cross‐section of stakeholders on the following questions:&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Would you favor an expanded R&amp;amp;D effort on proliferation and terrorism risk assessment? Why or why not?&lt;/li&gt;
&lt;li&gt;In what ways have current methodologies been useful, how might R&amp;amp;D make them more effective?&lt;/li&gt;
&lt;li&gt;If an expanded R&amp;amp;D program was initiated, what are promising areas for R&amp;amp;D, areas less worthwhile, and what mix of topics would best balance an expanded R&amp;amp;D portfolio?&lt;/li&gt;
&lt;li&gt;If an expanded R&amp;amp;D program was initiated, what cautions and recommendations should DOE‐NE consider as the program is planned and implemented?&lt;/li&gt;
&lt;/ul&gt;Panel presentations to stimulate the discussion will address:&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;Existing state‐of‐the‐art tools and methodologies for proliferation and terrorism risk assessment.&lt;/li&gt;
&lt;li&gt;The potential impact of improved tools and methodologies as well as factors that should be carefully considered in their use and any further development efforts.&lt;/li&gt;
&lt;li&gt;Identification of the challenges, areas for improvement, and gaps associated with broader utilization and acceptance of proliferation and terrorism risk assessment tools and methodologies.&lt;/li&gt;
&lt;li&gt;Identification of promising opportunities for R&amp;amp;D. Broad discussion/input is essential, active participation of all session attendees will: Provide important perspectives on proliferation and terrorism risk assessment R&amp;amp;D and ultimately strengthen capabilities for supporting NE’s development of new reactor and fuel cycle technologies/concepts while minimizing proliferation and terrorism risks.&lt;/li&gt;
&lt;/ul&gt;&lt;/blockquote&gt;&lt;br /&gt;
Why talk about this subject on RMM? Well, I was struck by the type of questions being asked of technical people, as they looked like typical questions one would ask in the context of an RMM example.&lt;br /&gt;
&lt;br /&gt;
In Robert Bari&#39;s presentation one can read: &lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOKbifj-4bqc9i3ITkhmlN49VIWTPP3x93nshOqegNKOlpY8lT7cvFb_2X8510u4bbwVSmxZ37Ua0dvSVfS1U3fO8t5a1-nSWcTdCC2IcTdkK3p1-6YhW0L367O_kPIjAjKc3PZVMuW60/s1600/proliferation1.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;298&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOKbifj-4bqc9i3ITkhmlN49VIWTPP3x93nshOqegNKOlpY8lT7cvFb_2X8510u4bbwVSmxZ37Ua0dvSVfS1U3fO8t5a1-nSWcTdCC2IcTdkK3p1-6YhW0L367O_kPIjAjKc3PZVMuW60/s400/proliferation1.JPG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
&quot;Conveying Results: In particular, what we know about what we do not know&quot; sounded a little too much like the categories discussed in &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/2010/11/modeler-known-unknowns-and-unknown.html&quot;&gt;The Modeler&#39;s Known Unknowns and Unknown Knowns&lt;/a&gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;In Proliferation Resistance and Proliferation Risk Analysis: Thoughts on a Path Forward by William S. Charlton, one can read:&lt;/div&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7FIa_49v6RqqbbwgOw2BiPRMt1AOSZogPXNZhsU6rB51WfWs0k_vp-M7O-Y4DdPXU7N7lfAF9_u1b0L-0e1tWHHIqA57ZjiQ7FA_shr8n5wKKsUB9TAG6SubQgztz7z4kKyjaeLCIS0Q/s1600/proliferation3.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;275&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj7FIa_49v6RqqbbwgOw2BiPRMt1AOSZogPXNZhsU6rB51WfWs0k_vp-M7O-Y4DdPXU7N7lfAF9_u1b0L-0e1tWHHIqA57ZjiQ7FA_shr8n5wKKsUB9TAG6SubQgztz7z4kKyjaeLCIS0Q/s400/proliferation3.JPG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;I wonder about the type of modeling that goes into estimating uncertainties. Finally, in Bill Burchill&#39;s slides, one can read on the proliferation pathways the following:&lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGFhhwgTUtAnahEfdMjYEZjpU6-ducVyVA35Mh81VM6Q9ErWBKL3nFqn0zJfgJHEgvR_IarAeu8KBsvAa0eE3i9KRgp5uCOc-qQMTszgPxLCpO99IheMAeupXyF-CaMeqoP52hVd4l5G0/s1600/proliferation2.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;261&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGFhhwgTUtAnahEfdMjYEZjpU6-ducVyVA35Mh81VM6Q9ErWBKL3nFqn0zJfgJHEgvR_IarAeu8KBsvAa0eE3i9KRgp5uCOc-qQMTszgPxLCpO99IheMAeupXyF-CaMeqoP52hVd4l5G0/s400/proliferation2.JPG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/7259994828770781150/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/7259994828770781150' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/7259994828770781150'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/7259994828770781150'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/12/rmm-example-3-nuclear-proliferation-and.html' title='RMM Example #3: Nuclear Proliferation and Terrorism Risk Assessment.'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" 
url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOKbifj-4bqc9i3ITkhmlN49VIWTPP3x93nshOqegNKOlpY8lT7cvFb_2X8510u4bbwVSmxZ37Ua0dvSVfS1U3fO8t5a1-nSWcTdCC2IcTdkK3p1-6YhW0L367O_kPIjAjKc3PZVMuW60/s72-c/proliferation1.JPG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-4932897812355790400</id><published>2010-12-04T13:17:00.003+01:00</published><updated>2010-12-04T22:38:46.722+01:00</updated><title type='text'>Solution to the &quot;Selling from Novosibirsk&quot; business model riddle</title><content type='html'>&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;a href=&quot;http://scmsa.pagesperso-orange.fr/&quot;&gt;Bernard Beauzamy&lt;/a&gt; (the owner of &lt;a href=&quot;http://scmsa.pagesperso-orange.fr/&quot;&gt;SCM SA&lt;/a&gt;) had set up a 500-euro prize for whoever could solve the &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/2010/11/prize-for-modeling-business-with.html&quot;&gt;business model riddle featured on Tuesday&lt;/a&gt;.&amp;nbsp;&lt;a href=&quot;http://scmsa.pagesperso-orange.fr/&quot;&gt;Bernard&lt;/a&gt; tells me that nobody won the prize; here is the answer:&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;blockquote&gt;....The answers we received fall into two categories:&lt;br /&gt;
&lt;br /&gt;
Those who want to send 10 000 left-hand gloves, and then 10 000 right-hand gloves, and declare them of value zero.&lt;br /&gt;
&lt;br /&gt;
This answer does not make sense! Do you think that the customs will be stupid enough not to notice that they see only left-hand gloves, and only later right-hand gloves? They would confiscate both, and the sender would go to jail for attempting to cheat the customs. Many years of jail!&lt;br /&gt;
&lt;br /&gt;
Those who want to create a branch of the company in the destination country, and claim that they would evade the customs this way.&lt;br /&gt;
&lt;br /&gt;
This answer does not make sense either! It only reduces the profits, since one has to pay all the people in the local structure. And it changes nothing about the fact that customs tax the final selling price. If you have 10 intermediaries, you have to pay a salary to all ten, and since the customer pays the same price, the producer gets less money. In all circumstances, customs or not, the fewer intermediaries you have, the better off you are.&lt;br /&gt;
&lt;br /&gt;
The answer, as we expected, was found by no one, because all our readers, by education or taste, want to build mathematical models, and this is a situation where no model is possible. It defies imagination and logic, and contradicts all existing economic models.&lt;br /&gt;
&lt;br /&gt;
First of all, the solution looks impossible. If we sell each pair at its maximum price, that is 200 rubles, the customs take 160 and we keep 40, which is exactly equal to the production cost, so we have no benefit at all. It is even worse if we sell at a lower price.&lt;br /&gt;
&lt;br /&gt;
The solution is this: we have to put fabrication defects on 5,000 pairs (both left and right gloves). After that, we export 5,000 pairs of which the left glove is normal and the right one is defective (for example, a big scar across one finger). These gloves are declared as &quot;fabrication rejects&quot;, at a very small price, for instance 20 rubles a pair. Note that selling and exporting &quot;fabrication rejects&quot; is quite ordinary and legal, and is common practice.&lt;br /&gt;
&lt;br /&gt;
Then, the next month, we do the converse: we export 5,000 pairs of which the left glove is defective and the right one is normal. We put all the gloves together and obtain 5,000 pairs of normal gloves, which we sell at the maximum price. The total cost is 400,000 rubles (fabrication) plus 160,000 rubles (customs). The sales bring 1,000,000 rubles, so we have a benefit of 440,000 rubles. We can of course also sell the defective gloves, if only to have some receipts for the customs.&lt;br /&gt;
&lt;br /&gt;
The ideal solution, though it would be a remarkable industrial achievement, is to program the fabrication machine so that it puts defects on one pair out of two.&lt;br /&gt;
&lt;br /&gt;
We observe that the solution is perfectly legal. Fabrication rejects exist and are sold worldwide for a low price, and each pair is declared at its correct value.&lt;br /&gt;
&lt;br /&gt;
We said earlier that mathematical modeling is impossible. In fact, this example shows that all Nobel prizes given to economists since 1969 (year of creation of the prize) should be withdrawn, because they have no value at all.&lt;br /&gt;
&lt;br /&gt;
Precisely, we see here that the notion of price is not mathematically well defined. We cannot talk about the price of a glove, nor even of a pair of gloves. We see that the price is not a continuous function, nor an increasing function, nor an additive function: the price of two objects together is not the sum of their individual prices. Still, the economists will build nice models, defective in all their parts, and no reassembly can bring them any value!&lt;br /&gt;
&lt;br /&gt;
Remember this : this is a true story, and the man who invented the solution did not know what a mathematical model was and did not have any degree at all…&lt;/blockquote&gt;&lt;/div&gt;&lt;br /&gt;
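The arithmetic of the quoted scheme can be checked in a few lines, using the riddle's own figures (10,000 pairs produced at 40 rubles each, rejects declared at 20 rubles a pair, customs duty at 4/5 of the declared price, 5,000 reassembled pairs sold at the 200-ruble ceiling):

```python
# Sanity check of the gloves scheme; all figures in rubles, straight from the riddle.
pairs = 10_000
production = 40 * pairs                  # cost of making every pair: 400,000
declared = 20                            # price declared per "fabrication reject" pair
duty = round(4 / 5 * declared * pairs)   # customs take 4/5 of the declared price: 160,000
revenue = 200 * 5_000                    # 5,000 reassembled normal pairs at the ceiling: 1,000,000

profit = revenue - production - duty
print(production, duty, revenue, profit)  # 400000 160000 1000000 440000
```

Selling the 5,000 leftover mismatched pairs would add to this figure; the point of the scheme is that the entire profit comes from decoupling the declared price at the border from the final selling price.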
&lt;br /&gt;
[P.S: added Dec 4 (3:40PM CST): It&#39;s a riddle. One could make the point that this type of business model is not robust. Countries around the world revise their laws to plug holes such as the one presented here, and the ever-growing sophistication and complexity of most tax systems reflects this adaptive behavior. If this scheme were robust, it would be common business practice by now. It may have worked in some countries in the past, however. The most important takeaway from this riddle is that the definition of the price of an item is indeed a difficult problem, for which modeling is going to be tricky for a long time]</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/4932897812355790400/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/4932897812355790400' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/4932897812355790400'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/4932897812355790400'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/12/solution-of-selling-from-novosibirsk.html' title='Solution to the &quot;Selling from Novosibirsk&quot; business model riddle'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-503337194654192786</id><published>2010-12-03T09:54:00.000+01:00</published><updated>2010-12-03T09:54:08.790+01:00</updated><category 
scheme="http://www.blogger.com/atom/ns#" term="site instructions"/><title type='text'>The RMM LinkedIn Group</title><content type='html'>As some of you may know, the Robust Mathematical Modeling blog has its &lt;a href=&quot;http://www.linkedin.com/groups?mostPopular=&amp;amp;gid=135426&amp;amp;trk=myg_ugrp_ovr&quot;&gt;own group on LinkedIn&lt;/a&gt;. &lt;a href=&quot;http://www.linkedin.com/profile/view?id=35251419&amp;amp;authType=name&amp;amp;authToken=l8n1&amp;amp;goback=.anp_135426_1291365364812_1&amp;amp;trk=anetppl_profile&quot;&gt;Rodrigo Carvalho&lt;/a&gt; is our latest member, which brings our count to 21.&lt;br /&gt;
&lt;br /&gt;
The &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/2010/11/prize-for-modeling-business-with.html&quot;&gt;500 euros prize for the solution to the business model riddle&lt;/a&gt; ends today in about 7 hours. Good luck!</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/503337194654192786/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/503337194654192786' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/503337194654192786'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/503337194654192786'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/12/rmm-linkedin-group.html' title='The RMM LinkedIn Group'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-3658555195342668118</id><published>2010-11-30T10:55:00.001+01:00</published><updated>2010-12-04T22:45:19.334+01:00</updated><title type='text'>A Prize for Modeling a Business with Constraints: Selling from Novosibirsk</title><content type='html'>&lt;div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Sometimes, the model that is given to you has to be tweaked radically in order to explain the situation at hand. 
With this thought in mind, &lt;a href=&quot;http://scmsa.pagesperso-orange.fr/&quot;&gt;Bernard Beauzamy&lt;/a&gt; (the owner of &lt;a href=&quot;http://scmsa.pagesperso-orange.fr/&quot;&gt;SCM SA&lt;/a&gt;) has set up a &lt;i&gt;&lt;b&gt;500 euros&lt;/b&gt;&lt;/i&gt; prize for whoever can find a way for the following business model to work (see below). Solutions should be sent before &lt;i&gt;&lt;b&gt;Friday, December 3rd, 2010, 5 pm (Paris local time, that&#39;s GMT+1)&lt;/b&gt;&lt;/i&gt; &lt;b&gt;to scm.sa@orange.fr&lt;/b&gt;.&lt;/div&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;blockquote&gt;&lt;u&gt;&lt;i&gt;Selling from Novosibirsk&lt;/i&gt;&lt;/u&gt;&lt;br /&gt;
A factory in Novosibirsk produces gloves. Each pair costs 40 rubles to produce, including everything : raw material, salary of workers, machines, transportation, and so on.&lt;br /&gt;
They produce 10,000 pairs and they want to sell them in a country where customs duties are 4/5 of the selling price. They cannot sell at a price higher than 200 rubles each pair, because of local competition and local buying power (at a higher price, nobody would buy). How do they manage to make a profit, and how much do they gain ?&lt;br /&gt;
One cannot cheat with the customs and corruption is forbidden. The price declared for the sale must be the true price. The exchange rate between currencies is assumed to be fixed and is not to be taken into account : everything is stated in the seller&#39;s currency (here the ruble). The solution should work repeatedly, any number of times, without violating any law, between any countries with normal customs. &lt;br /&gt;
Prize offered : 500 Euros, for the best answer received before Friday, December 3rd, 2010, 5 pm (Paris local time). Send answers to scm.sa@orange.fr. Answers may be written in English, French, Russian.&lt;/blockquote&gt;&lt;/div&gt;&lt;div&gt;&lt;/div&gt;&lt;br /&gt;
[Check the solution &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/2010/12/solution-of-selling-from-novosibirsk.html&quot;&gt;here&lt;/a&gt;]</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/3658555195342668118/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/3658555195342668118' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/3658555195342668118'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/3658555195342668118'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/11/prize-for-modeling-business-with.html' title='A Prize for Modeling a Business with Constraints: Selling from Novosibirsk'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-2646078674979639148</id><published>2010-11-27T14:14:00.001+01:00</published><updated>2010-11-28T15:45:08.014+01:00</updated><title type='text'>Optimal according to what ?</title><content type='html'>&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1TvVLoKKVrwp6bLuYlArZbcz3jMQSZBIRTyE1d-vywHGJN5D3dNDTySKIOnZZvdTBrcUv7df9GYTvq7nP4Lfl1HoefSz6SowJvwWOkcnnxcJjWI9j0cw9xgaa5DNEytJ-7REj5I4nvpw/s1600/pone.0013283.g004.jpg&quot; imageanchor=&quot;1&quot; style=&quot;clear: left; float: left; margin-bottom: 1em; margin-right: 1em;&quot;&gt;&lt;img 
border=&quot;0&quot; height=&quot;242&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1TvVLoKKVrwp6bLuYlArZbcz3jMQSZBIRTyE1d-vywHGJN5D3dNDTySKIOnZZvdTBrcUv7df9GYTvq7nP4Lfl1HoefSz6SowJvwWOkcnnxcJjWI9j0cw9xgaa5DNEytJ-7REj5I4nvpw/s320/pone.0013283.g004.jpg&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;a href=&quot;http://ittakes30.wordpress.com/2010/11/16/redefining-optimal/&quot;&gt;Redefining optimal&lt;/a&gt; is a blog entry by some of the folks at the Department of Systems Biology at Harvard Medical School. It is very nicely written and includes some nice comments. The entry specifically points to a paper by Fernández Slezak D, Suárez C, Cecchi GA, Marshall G, &amp;amp; Stolovitzky G (2010) entitled &lt;a href=&quot;http://www.ncbi.nlm.nih.gov/pubmed/21049094&quot;&gt;When the optimal is not the best: parameter estimation in complex biological models&lt;/a&gt; (PloS one, 5 (10) PMID: 21049094). The abstract and conclusions read:&lt;/div&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;blockquote&gt;Abstract&lt;br /&gt;
&lt;br /&gt;
BACKGROUND: The vast computational resources that became available during the past decade enabled the development and simulation of increasingly complex mathematical models of cancer growth. These models typically involve many free parameters whose determination is a substantial obstacle to model development. Direct measurement of biochemical parameters in vivo is often difficult and sometimes impracticable, while fitting them under data-poor conditions may result in biologically implausible values.&lt;br /&gt;
&lt;br /&gt;
RESULTS: We discuss different methodological approaches to estimate parameters in complex biological models. We make use of the high computational power of the Blue Gene technology to perform an extensive study of the parameter space in a model of avascular tumor growth. We explicitly show that the landscape of the cost function used to optimize the model to the data has a very rugged surface in parameter space. This cost function has many local minima with unrealistic solutions, including the global minimum corresponding to the best fit.&lt;br /&gt;
&lt;br /&gt;
CONCLUSIONS: The case studied in this paper shows one example in which model parameters that optimally fit the data are not necessarily the best ones from a biological point of view. To avoid force-fitting a model to a dataset, we propose that the best model parameters should be found by choosing, among suboptimal parameters, those that match criteria other than the ones used to fit the model. We also conclude that the model, data and optimization approach form a new complex system and point to the need of a theory that addresses this problem more generally.&lt;/blockquote&gt;&lt;/div&gt;&lt;br /&gt;
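The conclusion above — keep a set of near-optimal fits and discriminate among them with a criterion that was not used in the fitting — can be sketched in a few lines. The rugged one-dimensional cost landscape and the "plausibility" criterion below are invented purely for illustration; the paper's landscape is far higher-dimensional:

```python
import math

# Toy version of "optimal is not the best": a rugged 1-D cost landscape
# (a smooth data-fit term plus oscillations that create many local minima).
def cost(p):
    return (p - 3.0) ** 2 + 2.0 * math.sin(8 * p) ** 2

grid = [i / 1000 for i in range(10_001)]      # candidate parameters in [0, 10]
costs = [cost(p) for p in grid]

best = grid[costs.index(min(costs))]          # the parameter that best fits the data

# Near-optimal set: every parameter whose cost lies within a small band of the minimum.
near = [p for p, c in zip(grid, costs) if c <= min(costs) + 0.05]

# Secondary "plausibility" criterion (invented here): among the near-optimal
# fits, prefer the parameter closest to a prior expectation of p ~ 3.5.
chosen = min(near, key=lambda p: abs(p - 3.5))
print(best, chosen)
```

The chosen parameter fits the data almost as well as the global optimum, yet is selected by a criterion the optimizer never saw — which is the paper's suggested way to avoid force-fitting.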
&lt;br /&gt;
Evidently, the post would have some relevance to compressive sensing if the model were to be linear, which it is not in this case.</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/2646078674979639148/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/2646078674979639148' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/2646078674979639148'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/2646078674979639148'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/11/optimal-according-to-what.html' title='Optimal according to what ?'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1TvVLoKKVrwp6bLuYlArZbcz3jMQSZBIRTyE1d-vywHGJN5D3dNDTySKIOnZZvdTBrcUv7df9GYTvq7nP4Lfl1HoefSz6SowJvwWOkcnnxcJjWI9j0cw9xgaa5DNEytJ-7REj5I4nvpw/s72-c/pone.0013283.g004.jpg" height="72" width="72"/><thr:total>4</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-1955334419380370980</id><published>2010-11-19T15:13:00.004+01:00</published><updated>2015-10-01T17:59:02.675+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="space"/><title type='text'>The Modeler&#39;s Known Unknowns and Unknown Knowns</title><content type='html'>In &lt;a href=&quot;http://www.wilmott.com/blogs/satyajitdas&quot;&gt;Satyajit Das&#39;s Blog entitled &quot;Fear &amp;amp; Loathing in 
Financial Products&quot;&lt;/a&gt; one can read the following entry entitled &lt;a href=&quot;http://www.wilmott.com/blogs/satyajitdas/index.cfm/2006/5/24/Fear-and-Loathing--WMD-or-What-are-Derivatives&quot;&gt;WMD or what are derivatives&amp;nbsp; &lt;/a&gt;&lt;br /&gt;
&lt;blockquote&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;i&gt;....During the Iraqi conflict, Donald Rumsfeld, the US Defense Secretary, inadvertently stated a framework for understanding the modern world (12 February 2002 Department of Defense News Briefing). The framework perfectly fits the derivatives business. There were “known knowns” – these were things that you knew you knew. There were “known unknowns” – these were things that you knew you did not know. Then, there were “unknown knowns” – things that you did not know you knew. Finally, there were “unknown unknowns” – things that you did not know you did not know...&lt;/i&gt;&lt;/div&gt;
&lt;/blockquote&gt;
Then &lt;a href=&quot;http://www.wilmott.com/blogs/satyajitdas&quot;&gt;Satyajit&lt;/a&gt; goes on to clarify these terms a little further:&lt;br /&gt;
&lt;blockquote&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;i&gt;....In most businesses, the nature of the product is a known known. We do  not spend a lot of time debating the use of or our need for a pair of  shoes. We also understand our choices – lace up or slip-on, black or  brown. I speak, of course, of men’s shoes here. Women’s shoes, well,  they are closer to derivatives. Derivatives are more complex. You may  not know that you need the product until you saw it – an unknown known.  You probably haven’t got the faintest idea of what a double knockout  currency option with rebate is or does – a known unknown. What should  you pay for this particular item? Definitely, unknown unknown.  Derivatives are similar to a Manolo Blahnik or Jimmy Choo pair of  women’s shoes....     &lt;/i&gt;&lt;/div&gt;
&lt;/blockquote&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
There is also a word for the last one: &lt;a href=&quot;http://www.doubletongued.org/index.php/dictionary/unk_unk/&quot;&gt;Unk-Unk&amp;nbsp;  &lt;/a&gt;&lt;br /&gt;
&lt;blockquote&gt;
n. especially in  engineering, something, such as a problem, that has not been and could  not have been imagined or anticipated; an unknown unknown.&lt;/blockquote&gt;
but I am not sure it fits the previous definition. Looking it up on &lt;a href=&quot;http://en.wikipedia.org/wiki/Known_unknown&quot;&gt;Wikipedia&lt;/a&gt;, we have:&lt;br /&gt;
&lt;blockquote&gt;
&lt;i&gt;In &lt;a href=&quot;http://en.wikipedia.org/wiki/Epistemology&quot; title=&quot;Epistemology&quot;&gt;epistemology&lt;/a&gt; and &lt;a href=&quot;http://en.wikipedia.org/wiki/Decision_theory&quot; title=&quot;Decision 
theory&quot;&gt;decision theory&lt;/a&gt;, the term &lt;b&gt;unknown unknown&lt;/b&gt; refers to  circumstances or outcomes that were not conceived of by an observer at a  given point in time. The meaning of the term becomes more clear when it  is contrasted with the &lt;b&gt;known unknown&lt;/b&gt;, which refers to  circumstances or outcomes that are known to be possible, but it is  unknown whether or not they will be realized. The term is used in  project planning and decision analysis to explain that any model of the  future can only be informed by information that is currently available  to the observer and, as such, faces substantial limitations and unknown  risk. &lt;/i&gt;&lt;/blockquote&gt;
&lt;br /&gt;
How are these notions applicable to Robust Mathematical Modeling? &lt;a href=&quot;http://www.johndcook.com/blog/2010/10/18/titanic-effect/&quot;&gt;John Cook reminded me recently of the Titanic effect&lt;/a&gt;, presented initially in Jerry Weinberg&#39;s &lt;a href=&quot;http://www.amazon.com/Secrets-Consulting-Giving-Getting-Successfully/dp/0932633013?ie=UTF8&amp;amp;tag=nuitblan-20&amp;amp;link_code=btl&amp;amp;camp=213689&amp;amp;creative=392969&quot; target=&quot;_blank&quot;&gt;Secrets of Consulting: A Guide to Giving and Getting Advice Successfully&lt;/a&gt;:&lt;/div&gt;
&lt;blockquote&gt;
&lt;i&gt;The thought that disaster is impossible often leads to an unthinkable  disaster.&lt;/i&gt;&lt;/blockquote&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
When modeling a complex reality, we always strive to make the problem simple first, and then expect to build a more complex and realistic idealization of it. But in the end, even the most complex model is still an idealization of sorts. So every time I read about a disaster rooted in some engineering mistake, I wonder which part was the known unknown, which the unknown known, and which the unknown unknown.&lt;/div&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
One of those moments that left me thinking happened back in February 2003, during the &lt;a href=&quot;http://en.wikipedia.org/wiki/STS-107&quot;&gt;last flight of the Space Shuttle Columbia&lt;/a&gt;. As you know, the spacecraft disintegrated over Texas. It did so because a piece of foam had hit one of its wings fifteen days earlier, at launch. That explanation relied on footage of the launch showing a piece of foam slowly falling from the external tank onto the leading edge of the Orbiter&#39;s wing. Fifteen days later, as the Orbiter was returning to land in Florida, that wing had a large hole that let in air at Mach 17, damaging the wing and thereby destroying the spacecraft. The remains of our experiment showed the temperature had reached well over 600&amp;deg;C for a long period of time.&lt;/div&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
I was in the room next to the &lt;a href=&quot;http://spaceflight.nasa.gov/shuttle/reference/mcc/index.html&quot;&gt;MCC&lt;/a&gt; during STS-107, as we were flying &lt;a href=&quot;http://starnav1.tamu.edu/&quot;&gt;an instrument&lt;/a&gt; on top of the Orbiter. We watched the take-off, and we listened to all the conversations on the com loop between MCC and the astronauts during the fifteen days it flew (we even asked some of the astronauts to manipulate our instrument at one point). At no time was there a hint of a possible mishap. We learned afterwards that &lt;a href=&quot;http://www.iasa.com.au/folders/Safety_Issues/RiskManagement/picciesrefused.htm&quot;&gt;even engineers in the room next door, who had doubts, had requested imagery from spy sats (but management canceled that request)&lt;/a&gt;. What was most revealing after the tragedy, however, was that I specifically recall that &lt;a href=&quot;http://starnav1.tamu.edu/&quot;&gt;nobody around me&lt;/a&gt; could conceive that the foam could have done that much damage. None of us thought the speed differential between the foam and the spacecraft at launch could be that large. According to a simple computation based on the video of the launch, a half-pound piece of foam hit the leading edge of the Orbiter&#39;s wing at an estimated speed of &lt;a href=&quot;http://caib.nasa.gov/news/documents/impact_velocity.pdf&quot;&gt;924 fps, or about 1060 km/h. It&#39;s always the square of the velocity that kills you.&lt;/a&gt;&lt;/div&gt;
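The "square of the velocity" point is easy to make concrete. Here is a back-of-the-envelope kinetic-energy estimate using the figures quoted above (half a pound of foam at 924 fps); the unit conversions are mine:

```python
# Kinetic energy of the foam strike, from the figures quoted in the post.
LB_TO_KG = 0.45359237   # pounds to kilograms
FT_TO_M = 0.3048        # feet to meters

mass_kg = 0.5 * LB_TO_KG        # ~0.23 kg of foam
speed_ms = 924 * FT_TO_M        # ~282 m/s relative speed at impact

energy_j = 0.5 * mass_kg * speed_ms ** 2
print(f"{energy_j:.0f} J")      # about 9 kJ delivered to the wing's leading edge
```

Because the energy scales with the square of the velocity, halving the relative speed would have cut the impact energy by a factor of four; that quadratic scaling is what made the large speed differential so unforgiving, and so easy to underestimate.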
&lt;br /&gt;
&lt;br /&gt;
&lt;center&gt;
&lt;br /&gt;
&lt;object height=&quot;385&quot; width=&quot;400&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://www.youtube.com/v/fcSLVwEYTaU?fs=1&amp;amp;hl=en_US&quot;&gt;&lt;/param&gt;
&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot;&gt;&lt;/param&gt;
&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;/param&gt;
&lt;embed src=&quot;http://www.youtube.com/v/fcSLVwEYTaU?fs=1&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot; allowscriptaccess=&quot;always&quot; allowfullscreen=&quot;true&quot; width=&quot;400&quot; height=&quot;385&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;br /&gt;
&lt;/center&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
More photos of the test performed in San Antonio on July  7, 2003 to recreate what happened can be found &lt;a href=&quot;http://caib.nasa.gov/photos/sub_section74e2.html?category=&amp;amp;main=materials_testing&amp;amp;sub=impact_test_20030707&amp;amp;item=&amp;amp;thumbnails=yes&quot;&gt;here&lt;/a&gt;.&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5R6DIbr7GTdGB4jLUyvjxlcvAF5tE7JV5ft2JsYX0MjV5DJLwpvdqYCSnXCScYD06sIJHIAVJLpy98Mrejx6gs7hH_NI_4CDCOi_FnULfvxpWIwi_OAfbcdvYkXhGr5SHmY87zZsApi4/s1600/caib-swri-foam-impact-test.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;228&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5R6DIbr7GTdGB4jLUyvjxlcvAF5tE7JV5ft2JsYX0MjV5DJLwpvdqYCSnXCScYD06sIJHIAVJLpy98Mrejx6gs7hH_NI_4CDCOi_FnULfvxpWIwi_OAfbcdvYkXhGr5SHmY87zZsApi4/s400/caib-swri-foam-impact-test.JPG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
As one can see from the short movie above, all previous tests had been performed with small pieces of foam. Further, attention had always been focused on the tiles underneath the Orbiter, never on the leading edge of the wings. The video of the test performed in San Antonio several months later had everyone gasping in horror when the piece of foam opened a large hole in the leading edge of the wing. The most aggravating part of the story is that Columbia flew for fifteen days with very little attention paid to this issue, even though the Shuttle program had already seen several nearly dangerous hits at take-off in the past.&lt;br /&gt;
&lt;br /&gt;
On a related note, &lt;a href=&quot;http://www.edwardtufte.com/bboard/q-and-a-fetch-msg?msg_id=0001yB&quot;&gt;Ed Tufte also pointed out very clearly how the communication style using PowerPoint was a point of failure&lt;/a&gt; in the process of deciding whether the foam strike was a problem or not [1].&lt;/div&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZYN7hbZNXNFKYG5Swv8tTtJ4Fqwblp3ppp9tP2e6ozdb6D7k1G6hyYfXTHw_tnedPV8VqKSOzNG1-mNLtcpKZhnlNWN0AWpmq2bWudwf7vjr4rgx62BuDIfzqeLoF68hvyfvbcEYSArc/s1600/tuft-columbia.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;240&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhZYN7hbZNXNFKYG5Swv8tTtJ4Fqwblp3ppp9tP2e6ozdb6D7k1G6hyYfXTHw_tnedPV8VqKSOzNG1-mNLtcpKZhnlNWN0AWpmq2bWudwf7vjr4rgx62BuDIfzqeLoF68hvyfvbcEYSArc/s400/tuft-columbia.JPG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
Of note, the communication to management did not clearly convey that there was absolutely no experience with such a large chunk of foam (experiments had been performed with 3-inch-cube &quot;bullets&quot;, about 27 cubic inches, versus an actual impactor with a volume of more than 1,920 cubic inches).&lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
What were the known unknowns, the unknown knowns and the unknown unknowns in this instance? First, let me reframe the categories of knowns/unknowns for a modeler of reality or the engineers. With the help of &lt;a href=&quot;http://en.wikipedia.org/wiki/Unknown_unknown&quot;&gt;wikipedia&lt;/a&gt;, they can be summarized as follows. To a modeler:&lt;/div&gt;
&lt;ul&gt;
&lt;li&gt;the &lt;b&gt;known known&lt;/b&gt; refers to circumstances or outcomes that are known to be possible and that are known to be realized with some probability (based on adequate experiments/data)&lt;/li&gt;
&lt;li&gt;the &lt;b&gt;known unknown&lt;/b&gt; refers to circumstances or outcomes that are known to be possible, but it is unknown whether or not they will be realized (no data or very low probability/extreme events).&lt;/li&gt;
&lt;li&gt;the &lt;b&gt;unknown unknown&lt;/b&gt; refers to circumstances or outcomes that were not conceived of by a modeler at a given point in time. &lt;/li&gt;
&lt;li&gt;the &lt;b&gt;unknown known&lt;/b&gt; refers to circumstances or outcomes that a modeler intentionally refuses to acknowledge knowing&lt;/li&gt;
&lt;/ul&gt;
Looking back at the Columbia mishap, how can we categorize the different mistakes that were made?&lt;br /&gt;
&lt;ul&gt;
&lt;li&gt;the foam hitting the RCC was an &lt;b&gt;unknown known&lt;/b&gt; to the modelers. People had done their homework and knew that:&lt;/li&gt;
&lt;ul&gt;
&lt;li&gt;falling pieces were hitting the orbiter at every launch&lt;/li&gt;
&lt;li&gt;some launches had bad tile destruction because of falling pieces&lt;/li&gt;
&lt;li&gt;The engineers went through a series of tests whose results they eventually put in a database. Most past foam hits fell among the &lt;b&gt;known knowns&lt;/b&gt;, as those pieces of foam clearly fit the dimensions covered by the database. They knew foam could fall on the RCC instead of the tiles, yet did not perform tests, or did not feel tests were necessary. At issue is really the assumption that the RCC was tougher than the tiles. It is actually more brittle, but then a lot of things are brittle when hit with a large chunk of something at 1,000 km/hr.&lt;/li&gt;&lt;/ul&gt;
&lt;li&gt;The speed of the foam was also a &lt;b&gt;known known&lt;/b&gt; to the modelers. It could be computed right after the launch and was within the range listed in the database mentioned above.&lt;/li&gt;
&lt;li&gt;A &lt;b&gt;known unknown&lt;/b&gt; to the modeler was the impact effect of a &lt;u&gt;&lt;b&gt;very large&lt;/b&gt;&lt;/u&gt; piece of foam on the leading edge of the wing. This is reflected in the size fragments used in the database. There was simply no data.&lt;/li&gt;
&lt;li&gt;An &lt;b&gt;unknown unknown&lt;/b&gt; to the managers and the rest of the world was the eventual demise of the whole spacecraft fifteen days later due to this impact. To the modeler, I believe the demise of the spacecraft was rather an &lt;b&gt;unknown known&lt;/b&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;b&gt;Unknown unknowns&lt;/b&gt; are clearly outside of most modeling, for a multitude of reasons. Robust mathematical modeling ought to provide some warning about &lt;b&gt;known unknowns&lt;/b&gt; and, most importantly, provide a framework for not allowing &lt;b&gt;unknown knowns&lt;/b&gt; to go unnoticed&lt;b&gt; by either the engineers or their management.&lt;/b&gt; &lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;
[1] &lt;a href=&quot;http://astore.amazon.com/robustmath-20/detail/0961392177&quot;&gt;Beautiful Evidence, Edward Tufte.&lt;/a&gt;&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/1955334419380370980/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/1955334419380370980' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/1955334419380370980'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/1955334419380370980'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/11/modeler-known-unknowns-and-unknown.html' title='The Modeler&#39;s Known Unknowns and Unknown Knowns'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5R6DIbr7GTdGB4jLUyvjxlcvAF5tE7JV5ft2JsYX0MjV5DJLwpvdqYCSnXCScYD06sIJHIAVJLpy98Mrejx6gs7hH_NI_4CDCOi_FnULfvxpWIwi_OAfbcdvYkXhGr5SHmY87zZsApi4/s72-c/caib-swri-foam-impact-test.JPG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-3383372576206287806</id><published>2010-11-10T15:47:00.003+01:00</published><updated>2010-11-11T11:25:15.371+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="RMMExample"/><title type='text'>RMM Example #2: Spacecraft Thermal Management</title><content type='html'>&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a 
href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWEwyW-h9ppiT6jw8Syjne98ucDrVAvCCkD-w6wfA53ENZZpv62QWslz2GSNNRmDAaH-Obrm72ZJBOxqoMC8mtsRkmbJi7LLKENf1eHIx4CLbjZVyRGEtiS1_qhIjHYCcUMu1AWifshlo/s1600/envisat.jpg&quot; imageanchor=&quot;1&quot; style=&quot;clear: left; float: left; margin-bottom: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;320&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWEwyW-h9ppiT6jw8Syjne98ucDrVAvCCkD-w6wfA53ENZZpv62QWslz2GSNNRmDAaH-Obrm72ZJBOxqoMC8mtsRkmbJi7LLKENf1eHIx4CLbjZVyRGEtiS1_qhIjHYCcUMu1AWifshlo/s320/envisat.jpg&quot; width=&quot;319&quot; /&gt;&lt;/a&gt;&lt;/div&gt;When designing spacecraft, one of the major issues, aside from designing the primary instruments, is devising the &lt;a href=&quot;http://webserver.dmt.upm.es/%7Eisidoro/tc3/Spacecraft%20Thermal%20Control.htm&quot;&gt;thermal management&lt;/a&gt; (i.e., managing the way power produced by the spacecraft is removed so that it does not overheat). The thermal management of spacecraft requires solving different sets of modeling issues. Because spacecraft generally live in low Earth or geostationary orbit, the only way to remove power generated on the spacecraft is through radiation out of its radiators. The radiator is the lowest-temperature point the spacecraft will experience: if the spacecraft is well conditioned, all other parts of the spacecraft will be at higher temperatures no matter what. The main issue of thermal modeling for spacecraft design is really making sure that all the other points of the spacecraft stay within the temperature bounds they are designed for: the thermal rating of a &lt;a href=&quot;http://en.wikipedia.org/wiki/DC-to-DC_converter&quot;&gt;DC/DC converter&lt;/a&gt; is very different from that of a simple CMOS part or the lens of a camera. 
Hence computing the radiator temperature is of paramount importance, and it can be done very quickly with a &lt;u&gt;&lt;i&gt;&lt;b&gt;one-node analysis&lt;/b&gt;&lt;/i&gt;&lt;/u&gt;. Yes, you read this right: at the beginning, there is no need for Finite Element computations in spacecraft analysis, except maybe for very specific components under very specific conditions. The most important computation is figuring out this &lt;a href=&quot;http://webserver.dmt.upm.es/%7Eisidoro/tc3/Spacecraft%20Thermal%20Modelling%20and%20Testing.htm&quot;&gt;one spacecraft-node&lt;/a&gt; analysis. In terms of modeling, it doesn&#39;t get any simpler, and it is robust. The issues tend to crop up when one gets into the detailed power consumption and thermal energy flow within the spacecraft, as more detailed constraints are added. To summarize the issues, let me follow the list of issues that makes up the definition of problems needing Robust Mathematical Modeling as a guideline:&lt;/div&gt;&lt;br /&gt;
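But first, the one-node analysis mentioned above really does fit in a few lines: at steady state all the power leaves through the radiator, so the radiator temperature follows directly from the Stefan-Boltzmann law. The power, area and emissivity below are illustrative numbers, not those of any particular spacecraft:

```python
# Minimal sketch of a one-node spacecraft thermal analysis. At steady state,
# all dissipated and absorbed power leaves through the radiator, so
#   Q_total = eps * sigma * A * T**4, hence T = (Q_total/(eps*sigma*A))**0.25
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiator_temperature(q_watts, area_m2, emissivity):
    """Equilibrium radiator temperature (K) for a single-node model."""
    return (q_watts / (emissivity * SIGMA * area_m2)) ** 0.25

# Example: 500 W dissipated through a 2 m^2 radiator with emissivity 0.85
t_rad = radiator_temperature(500.0, 2.0, 0.85)
print(f"Radiator equilibrium temperature: {t_rad:.0f} K")
```

Every other node in the spacecraft then sits above this temperature, which is exactly why this one number anchors the whole design.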
&lt;i&gt;&lt;b&gt;&lt;u&gt;1. The laws describing the phenomena are not completely known&lt;/u&gt;&lt;/b&gt;&lt;/i&gt; ;&lt;br /&gt;
&lt;br /&gt;
In fact, in this case the laws are known but there are large uncertainties at many different levels:&lt;br /&gt;
&lt;ul style=&quot;text-align: justify;&quot;&gt;&lt;li&gt;each element of the spacecraft has a thermal conductance, but since one is dealing with heterogeneous elements like a CMOS or a slab of aluminum, the designer is constrained into a lumped analysis involving a delicate weighting.&lt;/li&gt;
&lt;li&gt;the &lt;a href=&quot;http://www.linkedin.com/groupItem?view=&amp;amp;gid=149975&amp;amp;type=member&amp;amp;item=32144444&amp;amp;qid=ed1f6b92-5cee-405c-9031-cd87469f1359&amp;amp;goback=.gmp_149975&quot;&gt;thermal contact resistances / conductances of the electronics&lt;/a&gt; are generally unknowns in terms of performance especially in vacuum. Most information on the electronics is given when convection is available (for ground use). Even when environment is known, electronics components are very hard to evaluate. See this very interesting thread on &lt;a href=&quot;http://www.linkedin.com/groupItem?view=&amp;amp;gid=149975&amp;amp;type=member&amp;amp;item=32144444&amp;amp;qid=ed1f6b92-5cee-405c-9031-cd87469f1359&amp;amp;goback=.gmp_149975&quot;&gt;LinkedIn&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;the thermal contact conductance of two pieces of metals connected to each other through nuts and bolts is by no means a trivial subject. The contact conductance certainly depends on&amp;nbsp; how much torque was put on the washer/nuts/bolts and the level of vacuum. &lt;/li&gt;
&lt;li&gt;the space environment produces different heating and cooling conditions that are inherently different based on the positioning of the spacecraft, its orbit, etc...&lt;/li&gt;
&lt;li&gt;in order to regulate temperature efficiently, cloth and paints are covering the spacecraft for the duration of its life. There are uncertainties with regards to how these decay over time and most computations include &lt;a href=&quot;http://webserver.dmt.upm.es/%7Eisidoro/dat1/Thermooptiical.htm&quot;&gt;Beginning Of Life (BOL) and End Of Life (EOL) estimates&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;One element of confusion is mathematical as well. Since most of the thermal power is managed through conduction, radiative transport (a nonlinear term in T^4) is generally modeled as a linear node. When temperatures get high, the equivalent conductance is made to vary with temperature so as to follow the nonlinear T^4 term. &lt;/li&gt;
&lt;/ul&gt;&lt;br /&gt;
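The last bullet can be made concrete. Since T1^4 - T2^4 factors as (T1^2 + T2^2)(T1 + T2)(T1 - T2), a radiative link can be recast exactly as a temperature-dependent conductance multiplying (T1 - T2). A minimal sketch, with illustrative emissivity and area:

```python
# A radiative link between two nodes at T1 and T2 exchanges
#   Q = eps * sigma * A * (T1**4 - T2**4)
# and since T1**4 - T2**4 = (T1**2 + T2**2) * (T1 + T2) * (T1 - T2),
# it can be written exactly as a temperature-dependent linear conductance.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiative_conductance(t1, t2, area_m2, emissivity):
    """Equivalent linear conductance (W/K) of a radiative link."""
    return emissivity * SIGMA * area_m2 * (t1**2 + t2**2) * (t1 + t2)

t1, t2 = 320.0, 280.0
g_rad = radiative_conductance(t1, t2, 1.0, 0.8)
q_linear = g_rad * (t1 - t2)                   # conductance form
q_exact = 0.8 * SIGMA * 1.0 * (t1**4 - t2**4)  # direct T**4 form
```

The factorization is exact, so both forms agree; the "confusion" in practice is that the conductance must be updated as the node temperatures change.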
&lt;b&gt;&lt;i&gt;&lt;u&gt;2. The data are missing or corrupted &lt;/u&gt;&lt;/i&gt;&lt;/b&gt;;&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;Spacecraft are generally designed with a clear emphasis on reducing weight at the subsystem or bus level. A GEO satellite maker would rather put one more revenue-bringing transponder on its spacecraft than add instrumentation to provide data to the ground. Experimental data are rare in spacecraft design because real conditions are rarely fully instrumented. Tests are performed at every iteration of the spacecraft design, but they are not total reproductions of the actual thermal environment the future spacecraft will sustain. For instance, Sun lamps only produce a subset of the wavelengths given off by the Sun, so it is difficult to determine &lt;a href=&quot;http://webserver.dmt.upm.es/%7Eisidoro/dat1/Thermooptiical.htm&quot;&gt;the thermo-optical properties of some paints or the efficiency of some solar cells&lt;/a&gt;. And while vacuum tests get rid of the convection issue, they can do little to evaluate the performance of systems that rely on convection internally, such as &lt;a href=&quot;http://www.1-act.com/advanced-technologies/heat-pipes/index.php&quot;&gt;loop heat pipes&lt;/a&gt;.&lt;/div&gt;&lt;br /&gt;
&lt;u&gt;&lt;i&gt;&lt;b&gt;3. The objectives are multiple and contradictory&lt;/b&gt;&lt;/i&gt;&lt;/u&gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;Four words: Hot Case, Cold Case. The worst thermal environments are generally sought in order to provide acceptable bounds over the lifetime of the spacecraft. It is no small secret that these two objectives are contradictory. One of the two cases (generally the cold case) also generates additional mass to remedy it. Adding mass for a subsystem is not considered optimal, as every kilogram in Low Earth Orbit costs about $10,000; the number is obviously higher for Geostationary Orbit. Another surprising element is that sometimes the cold case is not an obvious one, so the solver really has to go through many different kinds of iterations to define what that case is.&lt;br /&gt;
The objective of having the lightest spacecraft possible also flies in the face of thermal &quot;equilibrium&quot;: the less thermal mass a spacecraft has, the less capable it is of handling environmental swings. &lt;a href=&quot;http://en.wikipedia.org/wiki/CubeSat&quot;&gt;Cubesats&lt;/a&gt;, for instance, fall into this extreme category of spacecraft, for which thermal fluctuations can be very large (and can bring about a thermal &quot;event&quot; such as an electronics board cracking).&lt;/div&gt;&lt;br /&gt;
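To illustrate the thermal-mass point, here is a toy single-node transient: a node with heat capacity C absorbs a square wave of power (eclipse, then sun, on a roughly 90-minute orbit) and radiates to deep space. All values are invented for illustration; only the qualitative contrast between a light and a heavy node matters:

```python
# Toy transient showing why low thermal mass means large temperature swings.
# A single node with heat capacity C absorbs a square wave of power
# (30 min eclipse, 60 min sun) and radiates to deep space.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def temperature_swing(heat_capacity_j_per_k, hours=20.0, dt=1.0):
    """Integrate C*dT/dt = Q_in - eps*sigma*A*T**4; return max-min (K)."""
    eps_area = 0.8 * 0.06   # emissivity times radiating area (m^2)
    t = 280.0               # initial temperature, K
    temps = []
    for i in range(int(hours * 3600 / dt)):
        in_sun = (i * dt) % 5400 >= 1800   # 30 min eclipse, 60 min sun
        q_in = 25.0 if in_sun else 2.0     # absorbed + dissipated power, W
        q_out = eps_area * SIGMA * t**4
        t += dt * (q_in - q_out) / heat_capacity_j_per_k
        temps.append(t)
    return max(temps) - min(temps)

swing_light = temperature_swing(2_000.0)   # cubesat-like thermal mass
swing_heavy = temperature_swing(50_000.0)  # much heavier spacecraft node
```

The same orbital environment produces a swing many times larger for the low-thermal-mass node, which is the cubesat predicament in a nutshell.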
As the design progresses, the modeling is iterated against testing (from components to whole spacecraft), but as &lt;a href=&quot;http://webserver.dmt.upm.es/%7Eisidoro/&quot;&gt;Isidoro Martinez&lt;/a&gt; points out, this really is just the beginning of a long process of fitting models to the few experimental data points gathered through the lengthy Thermal Balance and Thermal Vacuum tests (TB/TV tests). From &lt;a href=&quot;http://webserver.dmt.upm.es/%7Eisidoro/tc3/Spacecraft%20Thermal%20Modelling%20and%20Testing.htm#_Toc273522149&quot;&gt;here&lt;/a&gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;blockquote&gt;&lt;h3&gt;&lt;a href=&quot;http://www.blogger.com/post-edit.g?blogID=319304714858376628&amp;amp;postID=3383372576206287806&quot; name=&quot;_Toc273522149&quot;&gt;&lt;span lang=&quot;EN-GB&quot;&gt;Spacecraft thermal testing&lt;/span&gt;&lt;/a&gt;&lt;span lang=&quot;EN-GB&quot;&gt;&lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/h3&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span lang=&quot;EN-GB&quot;&gt;Measurement is the ultimate validation of real behaviour of a physical system. But  tests are expensive, not only on the financial budget but on time demanded and  other precious resources as qualified personnel. As a trade-off, mathematical  models are developed to provide multi-parametric behaviour, with the hope that,  if a few predictions are checked against physical tests, the model is  validated to be reliable to predict the many other situations not actually tested.&lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;span lang=&quot;EN-GB&quot;&gt;Final testing of a large spacecraft for acceptance is at the present limit of technology, since very large vacuum chambers, with a powerful collimated solar-like beam, and walls kept at cryogenic temperatures, must be  provided (and the spacecraft able to be deployed, and rotated in all directions,  while measuring).&amp;nbsp; &lt;o:p&gt;&lt;/o:p&gt;&lt;/span&gt;&lt;/div&gt;&lt;div class=&quot;MsoNormal&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;span lang=&quot;EN-GB&quot;&gt;Typical temperature discrepancy between the most advanced numerical simulation  and the most expensive experimental tests may be some 2 K for most delicate  components in integrated spacecraft (much lower when components are separately  tested).&lt;/span&gt;&lt;/blockquote&gt;&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUofBVm35YIInTM5DBRg0DzuLXBixT6ytsP-X-rkRp8h7yl0w-_kvmQEqQU6mqB89YKs3ATZzlihP-hUxqtoJbUcEZ5chICNSE3VM7HIzPuYyALutrEsB3SPDiOnOaYv4VMeCoPRwNBHI/s1600/jsc2008e126009Trains.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;400&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgUofBVm35YIInTM5DBRg0DzuLXBixT6ytsP-X-rkRp8h7yl0w-_kvmQEqQU6mqB89YKs3ATZzlihP-hUxqtoJbUcEZ5chICNSE3VM7HIzPuYyALutrEsB3SPDiOnOaYv4VMeCoPRwNBHI/s400/jsc2008e126009Trains.jpg&quot; width=&quot;300&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
Picture of the large vacuum chamber at &lt;a href=&quot;http://www.jsc.nasa.gov/jscfeatures/articles/000000770.html&quot;&gt;NASA JSC where the Apollo LEM was tested&lt;/a&gt;. &lt;br /&gt;
&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEid0cIhsjCOoWEi4ZSG-3lQSGTvH4sZ0lb_6Dlha_8kSDl3AUDo_XcoVVGjtcN0VTwTJuAcYHFkfnoG4Ogcqb_EcnA5TrNnkVXVRFaHJnmKXm2aC_XQ4f8UcJE9pTm_ew2k3pWNloqxR_k/s1600/image049.gif&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;361&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEid0cIhsjCOoWEi4ZSG-3lQSGTvH4sZ0lb_6Dlha_8kSDl3AUDo_XcoVVGjtcN0VTwTJuAcYHFkfnoG4Ogcqb_EcnA5TrNnkVXVRFaHJnmKXm2aC_XQ4f8UcJE9pTm_ew2k3pWNloqxR_k/s400/image049.gif&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;An item of considerable interest to the thermal designer is to substantially reduce the time it takes to fit the few experimental results to the models. This part is by no means trivial given the nonlinearities of radiation heat transfer, and it is critical because tests are expensive and lengthy. From &lt;a href=&quot;http://articles.adsabs.harvard.edu//full/2004ESASP.558..113B/0000113.000.html&quot;&gt;here&lt;/a&gt;, the timeline for the TB/TV test of the Rosetta spacecraft, which took thirty days:&lt;/div&gt;&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrOdBHyLlMwhxVtRyHVdEHyXHbPBVwOvbCX5QI7a5XHvROIwWGOw98rxXrIIOaf7QWjM1EgqE7njZcQmug3aKkCMnRDyy1QdE78_pOstiSoACp2JAbwoyLFD2EQFZTnmoYQwgZ3x22hd4/s1600/tbtv-tests.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;270&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrOdBHyLlMwhxVtRyHVdEHyXHbPBVwOvbCX5QI7a5XHvROIwWGOw98rxXrIIOaf7QWjM1EgqE7njZcQmug3aKkCMnRDyy1QdE78_pOstiSoACp2JAbwoyLFD2EQFZTnmoYQwgZ3x22hd4/s400/tbtv-tests.JPG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
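The correlation step can be illustrated with a deliberately tiny example: a two-node model in which a dissipating unit is tied to a radiator held at a known temperature through a single unknown conductance G, fitted to a handful of steady-state test points by least squares. Real model correlation involves hundreds of nodes and nonlinear radiative links; the data below are synthetic:

```python
# Hedged sketch of the model-correlation step: fit one unknown conductance
# G (W/K) of a two-node model to synthetic steady-state test data.
t_rad = 260.0  # radiator temperature held during the test, K

# (dissipated power in W, measured unit temperature in K)
measurements = [(10.0, 264.9), (20.0, 270.2), (40.0, 280.1), (60.0, 289.8)]

# Steady state gives T = t_rad + Q / G, i.e. (T - t_rad) = (1 / G) * Q.
# Least squares through the origin: 1/G = sum(Q*(T - t_rad)) / sum(Q*Q)
num = sum(q * (t - t_rad) for q, t in measurements)
den = sum(q * q for q, t in measurements)
g_fit = den / num
print(f"Fitted conductance: {g_fit:.2f} W/K")
```

In a real TB/TV correlation, hundreds of such parameters must be adjusted simultaneously against a few dozen thermocouple readings, which is why the process is so lengthy.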
Additional information can be found in:&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;http://www.amazon.com/gp/product/188498911X?ie=UTF8&amp;amp;tag=nuitblan-20&amp;amp;linkCode=xm2&amp;amp;creativeASIN=188498911X&quot;&gt;Spacecraft Thermal Control Handbook: Fundamental Technologies&amp;nbsp;&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://astore.amazon.com/nuitblan-20?_encoding=UTF8&amp;amp;node=8&quot;&gt;Space Engineering Bookstore&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.crtech.com/index.html&quot;&gt;CRTech&lt;/a&gt;, a producer of solvers dedicated to the lumped-parameter approach to thermal engineering (includes space modules)&lt;/li&gt;
&lt;li&gt; &lt;a href=&quot;http://www.crtech.com/docs/papers/2002/Optimizing.pdf&quot;&gt;Nonlinear Programming Applied  to Thermal and Fluid Design Optimization (ITherm 2002)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.crtech.com/docs/papers/00ICES-266.pdf&quot;&gt;Parametric Thermal  Analysis and Optimization Using Thermal Desktop (ICES 2000)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://articles.adsabs.harvard.edu//full/2004ESASP.558..113B/0000113.000.html&quot;&gt;The Rosetta Spacecraft: Test and Verification &lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/3383372576206287806/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/3383372576206287806' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/3383372576206287806'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/3383372576206287806'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/11/rmm-example-2-spacecraft-thermal.html' title='RMM Example #2: Spacecraft Thermal Management'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjWEwyW-h9ppiT6jw8Syjne98ucDrVAvCCkD-w6wfA53ENZZpv62QWslz2GSNNRmDAaH-Obrm72ZJBOxqoMC8mtsRkmbJi7LLKENf1eHIx4CLbjZVyRGEtiS1_qhIjHYCcUMu1AWifshlo/s72-c/envisat.jpg" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-5287803823367227612</id><published>2010-11-03T10:49:00.001+01:00</published><updated>2010-11-03T12:40:06.138+01:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="RMMExample"/><title type='text'>RMM Example #1: Ground Motion Models for Probabilistic Seismic Hazard Analysis</title><content type='html'>&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a 
href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixEIEqOAwXWyTcwRP_bQFw-l_HZKaawitNBZ7kCqjMTp_MvbfMoRY2TNE0bCprJUd9afoJh9yh8k5YaGuWVJb2qxikxS56b4ESuCEW4MBH_jbwSzr-iQ-DAEZeSbmFHaV9wAjln54yYuM/s1600/world-seismic.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;180&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixEIEqOAwXWyTcwRP_bQFw-l_HZKaawitNBZ7kCqjMTp_MvbfMoRY2TNE0bCprJUd9afoJh9yh8k5YaGuWVJb2qxikxS56b4ESuCEW4MBH_jbwSzr-iQ-DAEZeSbmFHaV9wAjln54yYuM/s400/world-seismic.JPG&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;Following up on my &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/2010/10/call-for-help-bleg-seeking-technical.html&quot;&gt;bleg&lt;/a&gt;, &lt;a href=&quot;http://www.geo.uni-potsdam.de/mitarbeiter/Kuehn/kuehn.html&quot;&gt;&lt;span class=&quot;gI&quot;&gt;&lt;span class=&quot;go&quot;&gt;Nicolas Kuehn&lt;/span&gt;&lt;/span&gt;&lt;/a&gt;&lt;span class=&quot;gI&quot;&gt;&lt;span class=&quot;go&quot;&gt; kindly responded to my query with the following e-mail:&lt;/span&gt;&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;blockquote&gt;Dear Igor,&lt;br /&gt;
&lt;br /&gt;
I am writing to you because I saw your blogpost with a call for help/bleg. I am not sure if this is what you had in mind, but I might have something to contribute.&lt;br /&gt;
&lt;br /&gt;
I am a seismologist working on ground motion models for probabilistic seismic hazard analysis (PSHA). In PSHA, we try to estimate the exceedance rate of a destructive ground motion level for a particular site (e.g. a nuclear power plant). It basically comes down to the following:&lt;br /&gt;
&lt;ol&gt;&lt;li&gt;What is the probability that an earthquake with magnitude M occurs in a distance R from the site in a particular time period.&lt;/li&gt;
&lt;li&gt;Given M and R, what is the probability that a certain ground motion level will be exceeded.&lt;/li&gt;
&lt;li&gt;Integrate over all magnitude and distance combinations.&lt;/li&gt;
&lt;/ol&gt;&lt;br /&gt;
I am working on part 2. Here, we want to estimate the conditional probability of the ground motion parameter Y given magnitude and distance. Y can be something like peak ground acceleration. Usually, a lognormal distribution is assumed, and we get something like this:&lt;br /&gt;
&lt;br /&gt;
log Y = f(M,R)+epsilon&lt;br /&gt;
&lt;br /&gt;
The parameters of f are estimated from large strong motion datasets, which consist of recordings of ground motion from large earthquakes at seismic stations.&lt;br /&gt;
Now comes the part where you are probably interested in. The estimation of f is not easy. There is physical knowledge about the relationships between Y and M and R, but there are still unexplained effects. For example, physical models make the assumption that an earthquake is a point source, but large earthquakes can have fault dimensions of up 10s or 100s of kilometers. This has to be taken into account. There are also other variables (site effects, rupture directivity effects and others) which influence ground motion. Not for all of them exist physical knowledge about the exact relation.&lt;br /&gt;
Another problem is missing data. Site effects are usually quantified by the shear wave profile under the station, but this is not always available. Similar, there is missing data for other variables as well.&lt;br /&gt;
&lt;br /&gt;
There is also the problem that the data is not independent. One normally has many recordings from one earthquake at different stations, which are not independent. Similarly, you can have on station recording several earthquakes.&lt;br /&gt;
&lt;br /&gt;
As I said, I am not sure if this is what you had in mind as problems, but if you are interested, I can provide you with more details....&lt;/blockquote&gt;&lt;/div&gt;For illustration purposes, here is a graph showing the &lt;a href=&quot;http://demonstrations.wolfram.com/SeismicityOfGermany/&quot;&gt;seismicity of Germany&lt;/a&gt; (a fact I was largely unaware of):&lt;br /&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr9jOqF2T5KqPBZz2hWOfUJ1ujZ4x3j4h3glDrisyBXu6MwMq_N3QYZ-tIooef2Lbcnh9dcvQygBHvf-W5vbHGjByEnz3WP_puQprfoVHK26HHophi-949P5MbbiXeTS9XJ5PKn-O0RAU/s1600/germany-earthquakes.gif&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;400&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgr9jOqF2T5KqPBZz2hWOfUJ1ujZ4x3j4h3glDrisyBXu6MwMq_N3QYZ-tIooef2Lbcnh9dcvQygBHvf-W5vbHGjByEnz3WP_puQprfoVHK26HHophi-949P5MbbiXeTS9XJ5PKn-O0RAU/s400/germany-earthquakes.gif&quot; width=&quot;354&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
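Before returning to the exchange, note that step 2 of the hazard computation described above, with log Y = f(M,R) + epsilon and a lognormal assumption, reduces to a normal tail probability. A minimal sketch, where the functional form and coefficients of f are purely illustrative and NOT any published ground-motion model:

```python
# Hedged sketch of step 2 of the PSHA procedure: with log Y = f(M, R) + eps
# and eps normally distributed with standard deviation sigma, the chance of
# exceeding a ground-motion level y given (M, R) is a normal tail probability.
import math

def f(magnitude, distance_km):
    """Toy median model: log10 of peak ground acceleration in g (invented)."""
    return -1.0 + 0.3 * magnitude - 1.2 * math.log10(distance_km + 10.0)

def prob_exceed(y_level_g, magnitude, distance_km, sigma=0.3):
    """P(Y above y_level_g given M and R) under the lognormal assumption."""
    z = (math.log10(y_level_g) - f(magnitude, distance_km)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Example: chance that PGA exceeds 0.2 g for a magnitude 6.5 event at 20 km
p = prob_exceed(0.2, 6.5, 20.0)
```

The fat-tail concern Nicolas raises below is visible here: the normal tail assigns a small but nonzero probability to arbitrarily (even physically impossibly) high ground motions.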
Back to Nicolas&#39; problem, which is not centered on just Germany; I then responded with the following:&lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Thanks for the e-mail. I like your problem. ... I am curious about the following:&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;how was the lognormality of Y assumed?&lt;/li&gt;
&lt;li&gt;is there a predetermined shape for f?&lt;/li&gt;
&lt;li&gt;how was this shape found?&lt;/li&gt;
&lt;li&gt;how do you account for missing data?&lt;/li&gt;
&lt;/ul&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;blockquote&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
Any other detail is interesting as well.&lt;/div&gt;&lt;/blockquote&gt;&lt;a href=&quot;http://www.geo.uni-potsdam.de/webpage/member-details/show/62.html&quot;&gt;Nicolas&lt;/a&gt; then responded with:&lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Hi Igor,&lt;br /&gt;
&lt;br /&gt;
I&#39;m glad that you like the problem. I am also interested to hear about problems from other fields, and how these fields cope. ... Below, I have provided some more details.&lt;br /&gt;
&lt;br /&gt;
- Lognormality: There are two reasons why this is assumed. One is data. I have attached a normal probability plot of ground-motion data from a strong motion dataset (about 3000 datapoints). The plot is taken from some lecture, and you can see that the points follow a straight line, but there are some deviations in the tails.&lt;br /&gt;
The second one is more theoretical: the Fourier spectrum of seismic ground motion can be written as F(f) = E(f) x P(f) x S(f), where f is frequency, E(f) is the source component, P(f) is the path part and S(f) is the site part (in the time domain, the recorded signal is the convolution of these parts). This is a multiplication of random variables, which leads to a lognormal distribution.&lt;br /&gt;
There is one problem with the assumption of lognormality which is widely recognized, but for which no satisfying solution has been proposed: it assigns a nonzero probability to very high, physically impossible ground motions (the tails). This matters especially at very low exceedance rates, where the tails dominate. Critical facilities such as nuclear power plants or waste deposits must be designed to withstand ground motions with very low exceedance rates.&lt;br /&gt;
&lt;br /&gt;
* &lt;u&gt;shape for f&lt;/u&gt;: This is also based on the Fourier spectrum. The source part is proportional to e^M and the path part is proportional to (1/R) e^(-R), so the simplest model becomes:&lt;br /&gt;
f(M,R) = a + bM - c Log(R) - dR&lt;br /&gt;
This is based on a model that treats an earthquake as a point. For extended faults, where each part of the fault can emit seismic waves, there is interference and so on. Two observations can be made:&lt;br /&gt;
1. Magnitude saturation: the larger the magnitude, the smaller the difference in ground motion. This is usually modeled either by an M^2 term in f or by a piecewise linear function of the magnitude.&lt;br /&gt;
2. An interaction between M and R: the larger the magnitude, the slower the decrease of Y with distance. This is modeled either as (c + c1 M) Log(R) or as c Log(R + d e^(gM)).&lt;br /&gt;
&lt;br /&gt;
Site effects are usually modeled as discrete variables (the near-surface underground is classified as ROCK, STIFF SOIL or SOFT SOIL), each with an individual coefficient. People classify the site conditions in different ways, though. In newer studies, one also finds the use of Vs30, the average shear wave velocity in the upper 30 m, as a predictor variable.&lt;br /&gt;
&lt;br /&gt;
Then there is the style-of-faulting, which describes how the fault ruptures (horizontally or vertically). It is a discrete, three-valued variable.&lt;br /&gt;
&lt;br /&gt;
This leads to this form, which forms the basis of most ground-motion models:&lt;br /&gt;
f = a_1 + a_2 M + (a_3 + a_4 M) Log(R) + a_5 R + a_6 SS + a_7 SA + a_8 FN + a_9 FR,&lt;br /&gt;
where SS is 1 if the site class is STIFF SOIL and 0 otherwise, SA is 1 if the site class is SOFT SOIL, FN is 1 if the style-of-faulting is normal, and FR is 1 if the style-of-faulting is reverse.&lt;br /&gt;
&lt;br /&gt;
Newer models take into account more variables and effects (fault dip, nonlinear site amplification, whether the earthquake ruptures the surface or not).&lt;br /&gt;
&lt;br /&gt;
* &lt;u&gt;missing data&lt;/u&gt;: This is treated in different ways.&lt;br /&gt;
Sometimes it is possible to estimate one variable from a related one. E.g., there exist different magnitude measures (moment magnitude, local magnitude, body wave magnitude), and there exist conversion rules between them. There also exist different distance measures, which can be converted.&lt;br /&gt;
If a station has no shear wave velocity profile, one can look at the geology. All of this introduces uncertainty, though.&lt;br /&gt;
&lt;br /&gt;
What is also sometimes done is to first determine the coefficients of magnitude and distance (for which information is usually complete), and to determine the remaining coefficients later using the data that is available.&lt;br /&gt;
&lt;br /&gt;
I have tried to determine the coefficients of a model using Bayesian inference via &lt;a href=&quot;http://goo.gl/TbBQu&quot;&gt;OpenBUGS&lt;/a&gt;, where I treated the missing data as parameters for which a posterior distribution was determined.&lt;br /&gt;
&lt;br /&gt;
Cheers,&lt;br /&gt;
Nico&lt;/div&gt;&lt;/blockquote&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbvxYYgBpLXnWsm4IBCf3wXKY9E-gbvPdDCC2decKwcxeNd_tIQiWNNzEq1mvDmdvZtQuEo08cBp5wj1iBkM6e92Uqp3bLJ5K3n83VzY7gjYKasGovM_y5H5fi6h7Q0E8-CzKP3FobTTg/s1600/log-proba-theoretical-vs-data.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;185&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhbvxYYgBpLXnWsm4IBCf3wXKY9E-gbvPdDCC2decKwcxeNd_tIQiWNNzEq1mvDmdvZtQuEo08cBp5wj1iBkM6e92Uqp3bLJ5K3n83VzY7gjYKasGovM_y5H5fi6h7Q0E8-CzKP3FobTTg/s320/log-proba-theoretical-vs-data.JPG&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;br /&gt;
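Nico's multiplicative argument for lognormality is easy to check numerically: a product of independent positive random variables is approximately lognormal, since taking logs turns the product into a sum that the central limit theorem makes roughly Gaussian. A minimal sketch; the three factors below are arbitrary positive stand-ins for the source, path and site terms, not physical models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Arbitrary positive "source", "path" and "site" factors (illustrative only).
E = rng.gamma(shape=2.0, scale=1.0, size=n)
P = rng.uniform(0.5, 2.0, size=n)
S = rng.lognormal(mean=0.0, sigma=0.3, size=n)

Y = E * P * S        # product of positive random variables
logY = np.log(Y)     # should look roughly Gaussian

def skewness(x):
    """Sample skewness; near zero for a symmetric (e.g. normal) sample."""
    c = x - x.mean()
    return (c**3).mean() / (c**2).mean() ** 1.5

# The raw product is strongly right-skewed; its log is far more symmetric.
print(skewness(Y), skewness(logY))
```

The raw product shows the heavy right tail Nico worries about; the log-transformed sample is nearly symmetric, which is exactly why the models work with log(Y).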
&lt;br /&gt;
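To make the functional form Nico describes concrete, here is a small sketch that evaluates f for a hypothetical earthquake. The coefficient values are invented for illustration only and are not from any published ground-motion model:

```python
import math

def ground_motion(M, R, site="ROCK", faulting="STRIKE_SLIP", coeffs=None):
    """Evaluate f = a1 + a2*M + (a3 + a4*M)*log10(R) + a5*R
                 + a6*SS + a7*SA + a8*FN + a9*FR.

    Returns log(Y); site class and style-of-faulting enter as 0/1
    dummy variables.  The default coefficients are placeholders.
    """
    a1, a2, a3, a4, a5, a6, a7, a8, a9 = coeffs or (
        -1.0, 0.9, -2.5, 0.15, -0.003, 0.10, 0.25, -0.05, 0.08)
    SS = 1 if site == "STIFF SOIL" else 0
    SA = 1 if site == "SOFT SOIL" else 0
    FN = 1 if faulting == "NORMAL" else 0
    FR = 1 if faulting == "REVERSE" else 0
    return (a1 + a2 * M + (a3 + a4 * M) * math.log10(R)
            + a5 * R + a6 * SS + a7 * SA + a8 * FN + a9 * FR)

# The interaction term (a3 + a4*M) makes ground motion decay more slowly
# with distance for larger magnitudes:
decay_small = ground_motion(5.0, 10.0) - ground_motion(5.0, 100.0)
decay_large = ground_motion(7.5, 10.0) - ground_motion(7.5, 100.0)
print(decay_small, decay_large)
```

With these placeholder values, the drop in log(Y) between 10 km and 100 km is smaller for the M 7.5 event than for the M 5.0 event, which is the magnitude-distance interaction described in the e-mail.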
&lt;div style=&quot;text-align: justify;&quot;&gt;I am leaving the problem as is for the time being. In many respects, the issues raised in Nicolas&#39; problem are very reminiscent of a whole slew of problems found in many different science and engineering fields and subfields. One more thing: &lt;a href=&quot;http://www.geo.uni-potsdam.de/mitarbeiter/Kuehn/kuehn.html&quot;&gt;&lt;span class=&quot;gI&quot;&gt;&lt;span class=&quot;go&quot;&gt;Nicolas&lt;/span&gt;&lt;/span&gt;&lt;/a&gt; also has a presentation and a poster that describe part of his implementation, in &lt;a href=&quot;http://goo.gl/PxiEk&quot;&gt;A Bayesian Ground Motion Model for Estimating the Covariance Structure of Ground Motion Intensity Parameters&lt;/a&gt; and &lt;a href=&quot;http://goo.gl/9qOrK&quot;&gt;A hierarchical Global Ground Motion Model to Take Into Account Regional Differences.&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Here is a &lt;a href=&quot;http://goo.gl/iDOCT&quot;&gt;video of the presentation&lt;/a&gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;object width=&quot;400&quot; height=&quot;385&quot;&gt;&lt;param name=&quot;movie&quot; value=&quot;http://www.youtube.com/v/KaR04EqXid0?fs=1&amp;amp;hl=en_US&quot;&gt;&lt;/param&gt;&lt;param name=&quot;allowFullScreen&quot; value=&quot;true&quot;&gt;&lt;/param&gt;&lt;param name=&quot;allowscriptaccess&quot; value=&quot;always&quot;&gt;&lt;/param&gt;&lt;embed src=&quot;http://www.youtube.com/v/KaR04EqXid0?fs=1&amp;amp;hl=en_US&quot; type=&quot;application/x-shockwave-flash&quot; allowscriptaccess=&quot;always&quot; allowfullscreen=&quot;true&quot; width=&quot;400&quot; height=&quot;385&quot;&gt;&lt;/embed&gt;&lt;/object&gt;&lt;br /&gt;
&lt;br /&gt;
This is a very nice example where methodologies for Robust Mathematical Modeling are indeed needed; we can see here that:&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;The data are missing or corrupted;&lt;/li&gt;
&lt;li&gt;The laws describing the phenomena are not completely known.&lt;/li&gt;
&lt;/ul&gt;&lt;br /&gt;
&lt;br /&gt;
Thanks &lt;a href=&quot;http://www.geo.uni-potsdam.de/mitarbeiter/Kuehn/kuehn.html&quot;&gt;&lt;span class=&quot;gI&quot;&gt;&lt;span class=&quot;go&quot;&gt;Nicolas&lt;/span&gt;&lt;/span&gt;&lt;/a&gt; !&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/5287803823367227612/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/5287803823367227612' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/5287803823367227612'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/5287803823367227612'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/11/rmm-example-1-ground-motion-models-for.html' title='RMM Example #1: Ground Motion Models for Probabilistic Seismic Hazard Analysis'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEixEIEqOAwXWyTcwRP_bQFw-l_HZKaawitNBZ7kCqjMTp_MvbfMoRY2TNE0bCprJUd9afoJh9yh8k5YaGuWVJb2qxikxS56b4ESuCEW4MBH_jbwSzr-iQ-DAEZeSbmFHaV9wAjln54yYuM/s72-c/world-seismic.JPG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-5974633014225793191</id><published>2010-10-29T12:25:00.000+02:00</published><updated>2010-10-29T12:25:48.441+02:00</updated><title type='text'>Robust Optimization and the Donoho-Tanner Phase Transition</title><content type='html'>&lt;div style=&quot;text-align: 
justify;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjItwmfieltOPLPf9CyZ7ApeHrPsMVJUMPplIFDrttpPp0pCWCIv6AXImx7GFAJ5qq8Orr5SSsNLcVfcAy3TznXQixDmrFe3OcZFk7hWFOtee9xaINta6oNwawfaDuTPcVbCC4dnWX8JSU/s1600/1N341428720EFFAUJ3P0683R0M1-BR.JPG&quot; imageanchor=&quot;1&quot; style=&quot;clear: right; float: right; margin-bottom: 1em; margin-left: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;320&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjItwmfieltOPLPf9CyZ7ApeHrPsMVJUMPplIFDrttpPp0pCWCIv6AXImx7GFAJ5qq8Orr5SSsNLcVfcAy3TznXQixDmrFe3OcZFk7hWFOtee9xaINta6oNwawfaDuTPcVbCC4dnWX8JSU/s320/1N341428720EFFAUJ3P0683R0M1-BR.JPG&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;Of interest to this blog on Robust modeling here is an excerpt from &lt;a href=&quot;http://nuit-blanche.blogspot.com/2010/10/cs-cs-ecg-parallelcamp-une-competition.html&quot;&gt;Nuit Blanche&lt;/a&gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;Similarly, &lt;a href=&quot;http://twitter.com/sergecell/status/28865450036&quot;&gt;Sergey points me&lt;/a&gt; to this arxiv preprint which made a passing reference to CS:&lt;a href=&quot;http://arxiv.org/pdf/1010.5445v1&quot;&gt; Theory and Applications of Robust Optimization&lt;/a&gt; by &lt;a href=&quot;http://www.mit.edu/%7Edbertsim/&quot;&gt;Dimitris Bertsimas&lt;/a&gt;, &lt;a href=&quot;http://faculty.fuqua.duke.edu/%7Edbbrown/bio/index.html&quot;&gt;David B. Brown&lt;/a&gt;, &lt;a href=&quot;http://users.ece.utexas.edu/%7Ecmcaram/constantine_caramanis/Home.html&quot;&gt;Constantine Caramanis&lt;/a&gt;. The abstract reads:&lt;/blockquote&gt;&lt;/div&gt;&lt;blockquote&gt;&lt;blockquote&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;In this paper we survey the primary research, both theoretical and applied, in the area of Robust Optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering. &lt;/div&gt;&lt;/blockquote&gt;Reading the paper leads to this other paper I had mentioned back in April:&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;http://arxiv.org/abs/0811.1790v1&quot;&gt;Robust Regression and  Lasso&lt;/a&gt;  by &lt;a href=&quot;http://arxiv.org/find/cs,math/1/au:+Xu_H/0/1/0/all/0/1&quot; style=&quot;text-decoration: none;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;color: black;&quot;&gt;Huan Xu&lt;/span&gt;&lt;/a&gt;,&amp;nbsp;&lt;a href=&quot;http://arxiv.org/find/cs,math/1/au:+Caramanis_C/0/1/0/all/0/1&quot; style=&quot;text-decoration: none;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;color: black;&quot;&gt;Constantine Caramanis&lt;/span&gt;&lt;/a&gt;,&amp;nbsp;&lt;a href=&quot;http://arxiv.org/find/cs,math/1/au:+Mannor_S/0/1/0/all/0/1&quot; style=&quot;text-decoration: none;&quot;&gt;&lt;span class=&quot;Apple-style-span&quot; style=&quot;color: black;&quot;&gt;Shie Mannor&lt;/span&gt;&lt;/a&gt; &lt;/li&gt;
&lt;/ul&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;which makes a statement about &lt;i&gt;Robust Linear Regression&lt;/i&gt;, which in our world translates into &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/12/cs-wishlist-for-photography-robust-pca.html&quot;&gt;&lt;i&gt;multiplicative noise&lt;/i&gt;&lt;/a&gt;. More Rosetta Stone moments.... In the meantime, you might also be interested in the NIPS 2010 Workshop entitled &lt;a href=&quot;http://www.cs.utexas.edu/%7Esai/robustml/index.html&quot;&gt;Robust Statistical learning (robustml)&lt;/a&gt;:&lt;/div&gt;&lt;/blockquote&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjItwmfieltOPLPf9CyZ7ApeHrPsMVJUMPplIFDrttpPp0pCWCIv6AXImx7GFAJ5qq8Orr5SSsNLcVfcAy3TznXQixDmrFe3OcZFk7hWFOtee9xaINta6oNwawfaDuTPcVbCC4dnWX8JSU/s1600/1N341428720EFFAUJ3P0683R0M1-BR.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;br /&gt;
&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Many of these approaches are based on the fact that data are used to learn or fit models. In effect, most of the literature is focused on linear modeling. Quite a few interesting results have come out of these areas including what I have called the &lt;a href=&quot;http://goo.gl/CYAs&quot;&gt;Donoho-Tanner phase transition&lt;/a&gt;. I will come back to this subject in another blog entry.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Credit: &lt;a href=&quot;http://marsrover.nasa.gov/gallery/all/1/n/2402/1N341428720EFFAUJ3P0683R0M1.HTML&quot;&gt;NASA&lt;/a&gt;.&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/5974633014225793191/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/5974633014225793191' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/5974633014225793191'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/5974633014225793191'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/10/robust-optimization-and-donoho-tanner.html' title='Robust Optimization and the Donoho-Tanner Phase Transition'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjItwmfieltOPLPf9CyZ7ApeHrPsMVJUMPplIFDrttpPp0pCWCIv6AXImx7GFAJ5qq8Orr5SSsNLcVfcAy3TznXQixDmrFe3OcZFk7hWFOtee9xaINta6oNwawfaDuTPcVbCC4dnWX8JSU/s72-c/1N341428720EFFAUJ3P0683R0M1-BR.JPG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-4723004396445992035</id><published>2010-10-28T11:56:00.002+02:00</published><updated>2010-11-02T13:34:51.820+01:00</updated><title type='text'>Call for Help / Bleg:  Seeking Technical Areas Where Modeling Is Difficult.</title><content type='html'>&lt;div class=&quot;separator&quot; 
style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqNf1sHtar0Pg5Xya0RC4_7XVbdl86lhNTQ9-BQdBvEc9Z78bNydV4sFBS3tX1gTZaaZVjM2ApSH2QcU_HpxqmVTUpFXESjMm-RMYDmYYdNqQvnDzGH0TFYUD1spIADMWSPgFsig-BfIU/s1600/soho-eit.jpg&quot; imageanchor=&quot;1&quot; style=&quot;clear: left; float: left; margin-bottom: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;320&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqNf1sHtar0Pg5Xya0RC4_7XVbdl86lhNTQ9-BQdBvEc9Z78bNydV4sFBS3tX1gTZaaZVjM2ApSH2QcU_HpxqmVTUpFXESjMm-RMYDmYYdNqQvnDzGH0TFYUD1spIADMWSPgFsig-BfIU/s320/soho-eit.jpg&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;While the &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/2010/10/scm-talks-electricity-production.html&quot;&gt;recent presentations at SCM&lt;/a&gt; were enlightening with regards to known problems that are hard&amp;nbsp; to model, I wonder if any of the readers have a specific knowledge in a certain subject area where modeling is difficult. Please contact me and we can probably run a Q&amp;amp;A on this blog. If you want to remain anonymous, because you are feeling uncertain about discussing the uncertainties of the modeling in your area, I can also anonymize the Q&amp;amp;A.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;The number of readers of this blog is currently at about 80, but I expect it to grow as the issue of robust modeling keeps rearing its ugly head in many different fields of science and engineering. Let us recall the areas where robust mathematical modeling might be beneficial:&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;ul&gt;&lt;div&gt;&lt;li&gt;The data are missing or corrupted;&lt;/li&gt;
&lt;li&gt;The laws describing the phenomena are not completely known;&lt;/li&gt;
&lt;li&gt;The objectives are multiple and contradictory;&lt;/li&gt;
&lt;li&gt;The computational chain has too many variables.&lt;/li&gt;
&lt;/div&gt;&lt;/ul&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;I just added the last one in view of the very interesting presentation by &lt;a href=&quot;http://www.irsn.fr/FR/Pages/home.aspx&quot;&gt;Giovanni Bruna&lt;/a&gt; on &lt;a href=&quot;http://goo.gl/nvMT&quot;&gt;the problem of figuring out how to extract meaningful information out of a set of experiments and computations in the case of plutonium use in nuclear reactors&lt;/a&gt;.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;In the meantime, I&#39;ll feature some of the problems I have seen that never had an easy modeling answer.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;P.S.: A bleg is a beg on a blog :-)&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Credit photo: &lt;a href=&quot;http://sohowww.nascom.nasa.gov/data/realtime/eit_284/512/&quot;&gt;ESA, NASA&lt;/a&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/4723004396445992035/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/4723004396445992035' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/4723004396445992035'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/4723004396445992035'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/10/call-for-help-bleg-seeking-technical.html' title='Call for Help / Bleg:  Seeking Technical Areas Where Modeling Is Difficult.'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqNf1sHtar0Pg5Xya0RC4_7XVbdl86lhNTQ9-BQdBvEc9Z78bNydV4sFBS3tX1gTZaaZVjM2ApSH2QcU_HpxqmVTUpFXESjMm-RMYDmYYdNqQvnDzGH0TFYUD1spIADMWSPgFsig-BfIU/s72-c/soho-eit.jpg" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-8181355886483606659</id><published>2010-10-26T11:48:00.004+02:00</published><updated>2010-10-29T17:14:31.370+02:00</updated><title type='text'>SCM Talks: Electricity Production Management and the MOX Computational Chain</title><content type='html'>&lt;div style=&quot;text-align: justify;&quot;&gt;In a different 
direction than certain communities that are wondering if&amp;nbsp;&lt;a href=&quot;http://geomblog.blogspot.com/2010/10/on-outreach-to-applied-communities.html&quot;&gt;outreach to applied communities&lt;/a&gt;&amp;nbsp;is a good thing,&amp;nbsp;&lt;a href=&quot;http://scmsa.pagesperso-orange.fr/&quot;&gt;Bernard Beauzamy&lt;/a&gt;, a mathematician by trade and owner of &lt;a href=&quot;http://scmsa.pagesperso-orange.fr/&quot;&gt;SCM&lt;/a&gt;, hosted a small workshop last week on the limits of modeling (&lt;a href=&quot;http://scmsa.pagesperso-orange.fr/SCM_seminaire_2010_10_21.pdf&quot;&gt;&quot;Les limites de la modélisation&quot;&lt;/a&gt; in French). The workshop featured a set of speakers who are specialists in their fields and who presented their domain expertise in light of how mathematical modeling helped or did not help answer their complex issues. We are not talking about just some optimization function with several goals, but rather a deeper questioning of how the modeling of reality and reality itself clash with each other. While the presentations were in French, many of the slides need little translation. Here is the list of talks with a link to the presentations:&lt;/div&gt;&lt;br /&gt;
&lt;blockquote&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;span style=&quot;font-family: &#39;Century Schoolbook&#39;; font-size: medium;&quot;&gt;&lt;span style=&quot;font-family: &#39;Century Schoolbook&#39;; font-size: small;&quot;&gt;9 h – 10 h: Dr. Riadh Zorgati, EdF R&amp;amp;D&lt;a href=&quot;http://goo.gl/mtVp&quot;&gt;: Energy management; modeling attempts: successes and failures.&lt;/a&gt;&lt;br /&gt;
11 h – 12 h: Dr. France Wallet, &lt;a href=&quot;http://scmsa.pagesperso-orange.fr/Wallet_SCM_2010_10_21.pdf&quot;&gt;Health and environmental risk assessment, EdF, Health Service: Modeling in health and environment: pitfalls and limits.&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
14 h – 15 h: M. Giovanni Bruna, Deputy Director, Reactor Safety Division, Institut de Radioprotection et de Sûreté Nucléaire&lt;a href=&quot;http://goo.gl/nvMT&quot;&gt;: Simulation vs. experiment: who is right? The MOX fuel experience.&lt;/a&gt;&lt;br /&gt;
16 h – 17 h: M. Xavier Roederer, Inspector, Mission Contrôle Audit Inspection, Agence Nationale de l&#39;Habitat:&lt;a href=&quot;http://scmsa.pagesperso-orange.fr/Roederer_SCM_2010_10_21.pdf&quot;&gt; Can one predict without modeling?&lt;/a&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;&lt;/blockquote&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj55X6hgxwtB7ud02cHvkmc-J9SUGydEb7RLCQmnSMJt0VsK4Ewr34K5t6OBq5JSNWi3Dr21Zqu4DGP5w1XUXTVRPxuWOiOUqkviCVR_IpNSMDwBPxi8BNknIN_JpNdi62XAo6jPQG2H0I/s1600/edf-daily-prob.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;232&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj55X6hgxwtB7ud02cHvkmc-J9SUGydEb7RLCQmnSMJt0VsK4Ewr34K5t6OBq5JSNWi3Dr21Zqu4DGP5w1XUXTVRPxuWOiOUqkviCVR_IpNSMDwBPxi8BNknIN_JpNdi62XAo6jPQG2H0I/s320/edf-daily-prob.JPG&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
I could attend only two of the talks: the first and the third. In the &lt;a href=&quot;http://goo.gl/mtVp&quot;&gt;first talk&lt;/a&gt;, &lt;a href=&quot;http://fr.linkedin.com/pub/riadh-zorgati/8/b43/aa3&quot;&gt;Riadh Zorgati&lt;/a&gt; talked about &lt;a href=&quot;http://goo.gl/mtVp&quot;&gt;modeling as applied in the context of electricity production&lt;/a&gt;. He did a great job of laying out the different timescales and the attendant need for algorithm simplification when it comes to planning and scheduling electricity production in France. Every power plant and hydraulic resource owned by EDF (the main utility in France) has different operating procedures and capabilities with respect to how it can deliver power to the grid. Since electricity production and consumption must remain in continuous equilibrium, one aspect of the scheduling involves computing the country&#39;s needs for the next day given various inputs from the day before. As it turns out, the modeling could be very detailed, but that would lead to a prohibitive computational time for next-day planning (more than a day&#39;s worth). The modeling is simplified to a certain extent by resorting, if I recall correctly, to greedy algorithms that enable quicker predictions. The presentation has much more in it, but it was interesting to see that a set of good algorithms was clearly a money maker for the utility.&lt;br /&gt;
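The kind of simplification mentioned above can be illustrated with a toy merit-order dispatch: plants are sorted by marginal cost and committed greedily until the forecast demand is met. The plants and costs below are invented for illustration; real unit-commitment models carry many more constraints (ramp rates, minimum up/down times, hydro reservoir levels):

```python
def greedy_dispatch(plants, demand):
    """Commit capacity in order of increasing marginal cost until
    demand is met.  plants: list of (name, capacity_MW, cost_per_MWh)."""
    schedule = []
    remaining = demand
    for name, capacity, cost in sorted(plants, key=lambda p: p[2]):
        if remaining <= 0:
            break
        used = min(capacity, remaining)
        schedule.append((name, used, used * cost))
        remaining -= used
    if remaining > 0:
        raise ValueError("demand exceeds total capacity")
    return schedule

# Invented plant data: (name, capacity in MW, marginal cost in EUR/MWh).
plants = [
    ("gas peaker", 500, 120.0),
    ("nuclear", 3000, 15.0),
    ("hydro", 800, 5.0),
    ("coal", 1200, 60.0),
]
for name, mw, cost in greedy_dispatch(plants, demand=4500):
    print(f"{name:>10}: {mw:5d} MW")
```

The greedy pass gives an answer in one sweep over the plant list, which is the trade-off hinted at in the talk: a far cruder model than a full optimization, but one that returns a schedule well within the planning deadline.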
&lt;br /&gt;
&lt;/div&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIoBsgYb98DtJbiy_s-3qmc0PoQFvOW1MwR-cmWyW__cwYxQxh11M8Gb5AIythXWXeYl36Z0ZdDtZPxRXUrlZ9t1SIsUViY3nhjjRm2qnK1m5Qg8f9LTbtZYSN3FXaKJLFIifQhtT_uD4/s1600/mox-loca-void-reactivity.JPG&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;256&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjIoBsgYb98DtJbiy_s-3qmc0PoQFvOW1MwR-cmWyW__cwYxQxh11M8Gb5AIythXWXeYl36Z0ZdDtZPxRXUrlZ9t1SIsUViY3nhjjRm2qnK1m5Qg8f9LTbtZYSN3FXaKJLFIifQhtT_uD4/s320/mox-loca-void-reactivity.JPG&quot; width=&quot;320&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;The third presentation was by &lt;a href=&quot;http://www.irsn.fr/FR/Pages/home.aspx&quot;&gt;Giovanni Bruna&lt;/a&gt;, who talked about &lt;a href=&quot;http://goo.gl/nvMT&quot;&gt;the problem of figuring out how to extract meaningful information out of a set of experiments and computations in the case of plutonium use in nuclear reactors&lt;/a&gt;. He spent the better half of the presentation going through a nuclear engineering 101 class that gave a good introduction to the subject of plutonium use in nuclear reactors. Plutonium is a by-product of the consumption of uranium in a nuclear reactor. In fact, after an 18-month cycle, more than 30 percent of the power of an original uranium rod is produced by the plutonium created in that time period. After some time in the core, the rod is retrieved so that it can be reprocessed, raising the issue of how plutonium can be reused in a material called &lt;a href=&quot;http://en.wikipedia.org/wiki/MOX_fuel&quot;&gt;MOX&lt;/a&gt; (at least in France; in the U.S. a policy of no reprocessing is the law of the land). Plutonium differs from uranium because of its high epithermal cross section, which yields a harder spectrum than the one found with uranium. The conundrum faced by the safety folks resides in figuring out how the current measurements and the attendant extrapolation to power levels can be trusted when replacing uranium with plutonium. The methods used with uranium have more than 40 years of history behind them; with plutonium, not so much. 
It turns out to be a &lt;a href=&quot;http://books.google.com/books?id=AhHoABHARAgC&amp;amp;pg=PA14&amp;amp;lpg=PA14&amp;amp;dq=void+reactivity+mox&amp;amp;source=bl&amp;amp;ots=S2HDb-SX-8&amp;amp;sig=9YNNbLfbblzVsMJiBY3qTXwSOog&amp;amp;hl=en&amp;amp;ei=w5fGTMiXOsLW4gbE7K3JDw&amp;amp;sa=X&amp;amp;oi=book_result&amp;amp;ct=result&amp;amp;resnum=1&amp;amp;ved=0CBIQ6AEwAA#v=onepage&amp;amp;q=void%20reactivity%20mox&amp;amp;f=false&quot;&gt;difficult endeavor&lt;/a&gt; that can only be managed through constant interplay between well-designed experiments, revisions of the calculation processes, and a heavy use of margins. This example is also fascinating because this type of exercise reveals all the assumptions built into the computational chain, starting from the cold subcritical assembly &lt;a href=&quot;http://mcnp-green.lanl.gov/&quot;&gt;Monte Carlo runs&lt;/a&gt; all the way to the expected power level found in actual nuclear reactor cores. It is a computational chain because the data from the experiment do not say anything directly about the actual variable of interest (here, the power level). As opposed to &lt;a href=&quot;http://fr.linkedin.com/pub/riadh-zorgati/8/b43/aa3&quot;&gt;Riadh&lt;/a&gt;&#39;s talk, the focus here is on making sure that the mathematical modeling is robust to changes in assumptions about the physics of the system.&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;
&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Thanks&amp;nbsp;&lt;span class=&quot;Apple-style-span&quot; style=&quot;font-family: &#39;Century Schoolbook&#39;;&quot;&gt;&lt;a href=&quot;http://scmsa.pagesperso-orange.fr/&quot;&gt;Bernard&lt;/a&gt;&lt;/span&gt;&amp;nbsp;for hosting the workshop, it was enlightening.&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/8181355886483606659/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/8181355886483606659' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/8181355886483606659'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/8181355886483606659'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/10/scm-talks-electricity-production.html' title='SCM Talks: Electricity Production Management and the MOX Computational Chain'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj55X6hgxwtB7ud02cHvkmc-J9SUGydEb7RLCQmnSMJt0VsK4Ewr34K5t6OBq5JSNWi3Dr21Zqu4DGP5w1XUXTVRPxuWOiOUqkviCVR_IpNSMDwBPxi8BNknIN_JpNdi62XAo6jPQG2H0I/s72-c/edf-daily-prob.JPG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-5840158274560288152</id><published>2010-09-16T15:59:00.000+02:00</published><updated>2010-09-16T15:59:30.063+02:00</updated><title type='text'>Learning 
Functions in High Dimensions</title><content type='html'>&lt;div style=&quot;text-align: justify;&quot;&gt;Ever since &lt;a href=&quot;http://scmsa.pagesperso-orange.fr/&quot;&gt;Bernard Beauzamy&lt;/a&gt; asked me the question of the sampling needed to determine a function, it has been stuck in the back of my mind. Bernard has devised the &lt;a href=&quot;http://nuit-blanche.blogspot.com/2007/12/monday-morning-algorithm-part-5-1-d.html&quot;&gt;Experimental Probabilistic Hypersurface (EPH)&lt;/a&gt;, but other, more theoretical developments have popped up in the past two years. Below is a list of the different papers, with links to &lt;a href=&quot;http://nuit-blanche.blogspot.com/&quot;&gt;Nuit Blanche&lt;/a&gt; (a blog mostly focused on Compressed Sensing), that try to provide an answer to the question. Without further ado, here is the list:&lt;br /&gt;
&lt;br /&gt;
&lt;a href=&quot;http://arxiv.org/PS_cache/arxiv/pdf/1008/1008.3043v1.pdf&quot;&gt;Learning Functions of Few Arbitrary Linear Parameters in High Dimensions&lt;/a&gt; by &lt;a href=&quot;http://www.ricam.oeaw.ac.at/people/page/fornasier/&quot;&gt;Massimo Fornasier&lt;/a&gt;, &lt;a href=&quot;http://www.ricam.oeaw.ac.at/people/page.cgi?firstn=Karin;lastn=Schnass&quot;&gt;Karin Schnass&lt;/a&gt;, &lt;a href=&quot;http://people.ricam.oeaw.ac.at/j.vybiral/&quot;&gt;Jan Vybíral&lt;/a&gt;. The abstract reads:&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;blockquote&gt;Let us assume that $f$ is a continuous function defined on the unit ball of $\mathbb R^d$, of the form $f(x) = g (A x)$, where $A$ is a $k \times d$ matrix and $g$ is a function of $k$ variables for $k \ll d$. We are given a budget $m \in \mathbb N$ of possible point evaluations $f(x_i)$, $i=1,...,m$, of $f$, which we are allowed to query in order to construct a uniform approximating function. Under certain smoothness and variation assumptions on the function $g$, and an {\it arbitrary} choice of the matrix $A$, we present in this paper 1. a sampling choice of the points $\{x_i\}$ drawn at random for each function approximation; 2. an algorithm for computing the approximating function, whose complexity is at most polynomial in the dimension $d$ and in the number $m$ of points. Due to the arbitrariness of $A$, the choice of the sampling points will be according to suitable random distributions and our results hold with overwhelming probability. Our approach uses tools taken from the {\it compressed sensing} framework, recent Chernoff bounds for sums of positive-semidefinite matrices, and classical stability bounds for invariant subspaces of singular value decompositions.&lt;/blockquote&gt;&lt;/div&gt;From this &lt;a href=&quot;http://nuit-blanche.blogspot.com/2010/05/cs-presentations-of-workshop-on.html&quot;&gt;entry&lt;/a&gt;&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;http://www.google.com/search?q=Ronald+Devore&quot;&gt;Ronald DeVore&lt;/a&gt; (University of South Carolina, Columbia, USA)&lt;br /&gt;
&lt;b&gt;&lt;a href=&quot;http://perso-math.univ-mlv.fr/users/banach/workshop2010/talks/Devore.pdf&quot;&gt;Approximating and Querying Functions in High Dimensions&lt;/a&gt;&lt;/b&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;br /&gt;
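The model f(x) = g(Ax) in the Fornasier, Schnass and Vybíral abstract above admits a simple numerical illustration for the easiest case k = 1. What follows is my own sketch of the underlying idea, not the authors' algorithm, and the function names and parameters are hypothetical: since every gradient of f(x) = g(&lt;a, x&gt;) is parallel to the hidden direction a, a handful of finite-difference gradients at random points plus an SVD recovers that direction.

```python
import numpy as np

def estimate_ridge_direction(f, d, m=50, h=1e-5, rng=None):
    """For f(x) = g(<a, x>), all gradients of f are parallel to a.
    Estimate gradients by forward differences at m random points,
    then take the top right singular vector of the gradient matrix."""
    rng = np.random.default_rng(rng)
    G = np.empty((m, d))
    for j in range(m):
        x = rng.standard_normal(d) / np.sqrt(d)   # points near the unit ball
        fx = f(x)
        G[j] = [(f(x + h * e) - fx) / h for e in np.eye(d)]
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt[0]                                   # unit vector, up to sign

d = 20
a = np.zeros(d)
a[[3, 11]] = [0.6, 0.8]                            # hidden unit direction
f = lambda x: np.tanh(a @ x)                       # f(x) = g(<a, x>)
a_hat = estimate_ridge_direction(f, d, rng=0)
print(abs(a_hat @ a))                              # close to 1
```

The SVD step is what makes the estimate robust to sign flips and to the varying magnitude of g' from point to point; the paper's actual scheme handles arbitrary k x d matrices A with quantitative guarantees.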
From this &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/08/cs-sparsity-in-random-matrix-theory.html&quot;&gt;entry:&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;Following up on some elements shown in the third &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/06/cs-bayesian-cs-via-belief-propagation.html&quot;&gt;Paris Lecture of Ron DeVore&lt;/a&gt;, here is the more substantive paper:&lt;a href=&quot;http://www.mimuw.edu.pl/%7Ewojtaszczyk/Publikacje/DPWHDsubmission2.pdf&quot;&gt; Approximation of functions of few variables in high dimensions&lt;/a&gt; by &lt;a href=&quot;http://www.math.tamu.edu/%7Erdevore/&quot;&gt;Ron DeVore&lt;/a&gt;, &lt;a href=&quot;http://www.math.tamu.edu/%7Egpetrova/&quot;&gt;Guergana Petrova&lt;/a&gt;, &lt;a href=&quot;http://www.mimuw.edu.pl/%7Ewojtaszczyk/&quot;&gt;Przemyslaw Wojtaszczyk&lt;/a&gt;. The abstract reads:&lt;/div&gt;&lt;br /&gt;
&lt;blockquote&gt;Let f be a continuous function defined on \Omega := [0,1]^N which depends on only l coordinate variables, f(x_1, \dots, x_N) = g(x_{i_1}, \dots, x_{i_l}). We assume that we are given m and are allowed to ask for the values of f at m points in \Omega. If g is in Lip1 and the coordinates i_1, \dots, i_l are known to us, then by asking for the values of f at m = L^l uniformly spaced points, we could recover f to the accuracy |g|_{Lip1} L^{-1} in the norm of C(\Omega). This paper studies whether we can obtain similar results when the coordinates i_1, \dots, i_l are not known to us. A prototypical result of this paper is that by asking for C(l) L^l (\log_2 N) adaptively chosen point values of f, we can recover f in the uniform norm to accuracy |g|_{Lip1} L^{-1} when g \in Lip1. Similar results are proven for more general smoothness conditions on g. Results are also proven under the assumption that f can be approximated to some tolerance \epsilon (which is not known) by functions of l variables.&lt;/blockquote&gt;I note that the authors make a connection to the &lt;a href=&quot;http://rjlipton.wordpress.com/2009/06/04/the-junta-problem/&quot;&gt;Junta problem as discussed by Dick Lipton&lt;/a&gt; recently and mentioned &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/06/cs-junta-problem-nesta-non-iterative.html&quot;&gt;here&lt;/a&gt;.&lt;br /&gt;
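To see concretely why point queries can reveal which coordinates matter, here is a deliberately naive probe of my own, not the paper's scheme: it spends O(N) queries per coordinate rather than the C(l) L^l (\log_2 N) budget above, but it illustrates the mechanism of perturbing one coordinate at a time and watching whether f moves.

```python
import numpy as np

def active_coordinates(f, N, trials=20, delta=0.5, tol=1e-9, rng=None):
    """Brute-force probe: coordinate i is declared active as soon as
    moving it alone (at some random base point) changes f."""
    rng = np.random.default_rng(rng)
    active = []
    for i in range(N):
        for _ in range(trials):
            x = rng.random(N)
            y = x.copy()
            y[i] = (x[i] + delta) % 1.0      # perturb only coordinate i
            if abs(f(x) - f(y)) > tol:
                active.append(i)
                break
    return active

# f lives on [0,1]^10 but depends only on coordinates 2 and 7
f = lambda x: np.sin(3 * x[2]) + x[7] ** 2
print(active_coordinates(f, N=10, rng=1))    # -> [2, 7]
```

The whole point of the DeVore, Petrova and Wojtaszczyk result is that adaptively chosen queries bring the N-dependence down from linear to logarithmic, which this toy version does not attempt.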
&lt;br /&gt;
From this &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/06/cs-l1-homotopy-capturing-functions-in.html&quot;&gt;entry&lt;/a&gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;a href=&quot;http://www.ann.jussieu.fr/%7Ecohen/&quot;&gt;Albert Cohen&lt;/a&gt; just released the course notes of &lt;a href=&quot;http://www.math.tamu.edu/%7Erdevore/&quot;&gt;Ron DeVore&lt;/a&gt;&#39;s 4th lecture in Paris, entitled &lt;a href=&quot;http://www.ann.jussieu.fr/%7Ecohen//devore4.pdf&quot;&gt;Capturing Functions in Infinite Dimensions&lt;/a&gt; (the third lecture is &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/06/cs-bayesian-cs-via-belief-propagation.html&quot;&gt;here&lt;/a&gt;, and the first and second are &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/06/cs-foundations-of-cs-data-stream.html&quot;&gt;here&lt;/a&gt;). The abstract reads:&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;blockquote&gt;The following are notes on stochastic and parametric PDEs of the short course in Paris. Lecture 4: Capturing Functions in Infinite Dimensions. Finally, we want to give an example where the problem is to recover a function of infinitely many variables. We will first show how such problems occur in the context of stochastic partial differential equations.&lt;/blockquote&gt;&lt;/div&gt;From here:&lt;br /&gt;
&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;The course notes of the third lecture given by &lt;a href=&quot;http://www.math.tamu.edu/%7Erdevore/&quot;&gt;Ron DeVore&lt;/a&gt; in Paris have been released on &lt;a href=&quot;http://www.ann.jussieu.fr/%7Ecohen/&quot;&gt;Albert Cohen&lt;/a&gt;&#39;s page. It features &quot;&lt;a href=&quot;http://www.ann.jussieu.fr/%7Ecohen/devore3.pdf&quot;&gt;Capturing Functions in High Dimensions&lt;/a&gt;&quot;, seems to aim at giving bounds for nonlinear compressed sensing, and should have an impact on manifold signal processing. Interesting. The first two lectures were mentioned &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/06/cs-foundations-of-cs-data-stream.html&quot;&gt;here&lt;/a&gt;. The lecture begins with&lt;/div&gt;&lt;br /&gt;
&lt;blockquote&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;1.1 Classifying High Dimensional Functions:&lt;/div&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;Our last two lectures will study the problem of approximating (or capturing through queries) a function f defined on Ω ⊂ R^N with N very large. The usual way of classifying functions is by smoothness. The more derivatives a function has the nicer it is and the more efficiently it can be numerically approximated. However, as we move into high space dimension, this type of classification will suffer from the so-called curse of dimensionality which we shall now quantify.&lt;/div&gt;&lt;/blockquote&gt;From &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/06/cs-bayesian-cs-via-belief-propagation.html&quot;&gt;here&lt;/a&gt;:&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;a href=&quot;http://www.ann.jussieu.fr/%7Ecohen/&quot;&gt;Albert Cohen&lt;/a&gt; just released the notes of the lecture &lt;a href=&quot;http://www.math.tamu.edu/%7Erdevore/&quot;&gt;Ron DeVore&lt;/a&gt; is presenting in Paris, entitled: &lt;a href=&quot;http://www.ann.jussieu.fr/%7Ecohen//devore1and2.pdf&quot;&gt;Foundations of Compressed Sensing&lt;/a&gt;.&lt;/div&gt;&lt;br /&gt;
From &lt;a href=&quot;http://nuit-blanche.blogspot.com/2008/11/cs-multi-manifold-data-modeling-and.html&quot;&gt;here:&lt;/a&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;span class=&quot;red strong&quot; style=&quot;font-weight: bold;&quot;&gt;&lt;a href=&quot;http://www.math.tamu.edu/%7Erdevore/&quot;&gt;Ronald DeVore&lt;/a&gt;&lt;/span&gt;:&lt;span class=&quot;strong&quot; style=&quot;font-weight: bold;&quot;&gt; Recovering sparsity in high dimensions&lt;/span&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;http://www.ima.umn.edu/2008-2009/SW10.27-30.08/activities/DeVore-Ronald/abstract.pdf&quot;&gt;Abstract (pdf)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;http://www.ima.umn.edu/videos/?id=565&quot;&gt;Video (flv)&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;The abstract reads:&lt;br /&gt;
&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;blockquote&gt;We assume that we are in $R^N$ with $N$ large. The first problem we consider is that there is a function $f$ defined on $\Omega := [0,1]^N$ which is a function of just $k$ of the coordinate variables: $f(x_1,\dots,x_N)=g(x_{j_1},\dots,x_{j_k})$ where $j_1,\dots,j_k$ are not known to us. We want to approximate $f$ from some of its point values. We first assume that we are allowed to choose a collection of points in $\Omega$ and ask for the values of $f$ at these points. We are interested in what error we can achieve in terms of the number of points when we assume some smoothness of $g$ in the form of Lipschitz or higher smoothness conditions.&lt;br /&gt;
We shall consider two settings: adaptive and non-adaptive. In the adaptive setting, we are allowed to ask for a value of $f$ and then on the basis of the answer we get we can ask for another value. In the non-adaptive setting, we must prescribe the $m$ points in advance. A second problem we shall consider is when $f$ is not necessarily only a function of $k$ variables but it can be approximated to some tolerance $\epsilon$ by such a function. We seek again sets of points where the knowledge of the values of $f$ at such points will allow us to approximate $f$ well. Our main consideration is to derive results which are not severely affected by the size of $N$, i.e. are not victims of the curse of dimensionality. We shall see that this is possible.&lt;/blockquote&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/5840158274560288152/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/5840158274560288152' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/5840158274560288152'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/5840158274560288152'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2010/09/learning-functions-in-high-dimensions.html' title='Learning Functions in High Dimensions'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' 
src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-2211733055854567924</id><published>2009-07-09T12:20:00.007+02:00</published><updated>2009-07-09T22:39:37.613+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="compressedsensing"/><category scheme="http://www.blogger.com/atom/ns#" term="dimensionalityreduction"/><category scheme="http://www.blogger.com/atom/ns#" term="eph"/><category scheme="http://www.blogger.com/atom/ns#" term="grouptesting"/><title type='text'>High Throughput Testing and TCS meets EDA</title><content type='html'>&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhd7PL8iJICy0yx_snID7TSbFXa6WvbW45f5bjTp64QcvI7ZSg8Ua-9nkrW69zrF1sQ7PaDanVWoO-DrG2iYyIUYLy0okskoez8EDBeOwvawHCRcWD-1tuX3SNlXv8pNNeRoKN8t_-1cRQ/s1600-h/list-parameters-eph-cathare.JPG&quot;&gt;&lt;img style=&quot;margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 268px; height: 320px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhd7PL8iJICy0yx_snID7TSbFXa6WvbW45f5bjTp64QcvI7ZSg8Ua-9nkrW69zrF1sQ7PaDanVWoO-DrG2iYyIUYLy0okskoez8EDBeOwvawHCRcWD-1tuX3SNlXv8pNNeRoKN8t_-1cRQ/s320/list-parameters-eph-cathare.JPG&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5356422831963976338&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;Several items caught my interest this week:&lt;br /&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;The latest document from the French minister in charge of Research, which sets out the axes of research of national interest, &lt;a href=&quot;http://sciences.blogs.liberation.fr/home/2009/07/v-p%C3%A9cresse-d%C3%A9voile-le-rapport-snri.html&quot;&gt;includes&lt;/a&gt;:&lt;br /&gt;&lt;br /&gt;&lt;blockquote&gt;&quot;..First axis: health, well-being, food and biotechnologies (&lt;span style=&quot;font-weight: bold;&quot;&gt;high-throughput biological analyses&lt;/span&gt;, nanobiotechnologies, cohorts followed over twenty years, robots assisting dependent persons)...&quot;&lt;/blockquote&gt;&lt;br /&gt;&lt;br /&gt;The item of &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/2009/04/group-testing-and-compressive-sensing.html&quot;&gt;high throughput testing&lt;/a&gt; seems to be getting some traction at the policy level. While we are on the subject, here is a recent arXiv preprint on compressive sensing and group testing: &lt;a href=&quot;http://arxiv.org/PS_cache/arxiv/pdf/0907/0907.1061v1.pdf&quot;&gt;Boolean Compressed Sensing and Noisy Group Testing&lt;/a&gt; by &lt;a href=&quot;http://people.bu.edu/geokamal/Site/Welcome.html&quot;&gt;George Atia&lt;/a&gt; and &lt;a href=&quot;http://iss.bu.edu/srv/&quot;&gt;Venkatesh Saligrama&lt;/a&gt; (I featured it &lt;a href=&quot;http://nuit-blanche.blogspot.com/2009/07/cs-stuff-boolean-cs-cs-limts-in-lp.html&quot;&gt;here&lt;/a&gt;).&lt;br /&gt;&lt;br /&gt;The second item of interest is more general and concerns how theoretically minded people view applied problems. The US NSF recently held a workshop on Design Automation and Theory/Electronic Design Automation that drew theoretical researchers to the problems of chip engineering. 
Both &lt;a href=&quot;http://www.cc.gatech.edu/directory/richard-lipton/&quot;&gt;Dick Lipton&lt;/a&gt; and &lt;a href=&quot;http://apollonius.cs.utah.edu/web/&quot;&gt;Suresh Venkatasubramanian&lt;/a&gt; are well-known researchers in Theoretical Computer Science (TCS), and both blogged about it:&lt;br /&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;http://rjlipton.wordpress.com/2009/07/08/nsf-workshop-on-design-automation-and-theory/&quot;&gt;Dick Lipton&#39;s entry is entitled: NSF Workshop On Design Automation and Theory&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;http://geomblog.blogspot.com/2009/07/nsf-workshop-electonic-design.html&quot;&gt;Suresh Venkatasubramanian&#39;s entry is entitled: NSF Workshop: Electonic Design Automation&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;br /&gt;In &lt;a href=&quot;http://geomblog.blogspot.com/2009/07/nsf-workshop-electonic-design.html&quot;&gt;Suresh&lt;/a&gt;&#39;s entry, one can read:&lt;br /&gt;&lt;br /&gt;&lt;blockquote&gt;A second thought was how the lessons of massive data analysis might be useful in the realm of DA. One speaker described one critical problem as being the degree of complexity associated with current DA tools: there are over 4000 &quot;knobs&quot; to turn in one such tool! It&#39;s believed that these knobs are not independent, and might even be contradictory. 
If we think of each &quot;run&quot; of the DA tool, outputting some kind of chip layout, as a point in this 4000+ dimensional space, I wonder whether techniques for dimensionality reduction and manifold analysis might be useful to find a set of &quot;core knobs&quot; that control the process.&lt;/blockquote&gt;&lt;br /&gt;&lt;br /&gt;Let us note that the issue of engineers having to deal with &quot;4000 knobs&quot; is eerily similar to the one the &lt;a href=&quot;http://pagesperso-orange.fr/scmsa/RMM/RMM_EPH.htm&quot;&gt;Experimental Probabilistic Hypersurface (EPH)&lt;/a&gt; seems to be solving for &lt;a href=&quot;http://pagesperso-orange.fr/scmsa/RMM/IRSN_SCMSA_EPH3.pdf&quot;&gt;thermal hydraulics codes used in nuclear reactor simulations&lt;/a&gt;. I note that his view on performing dimensionality reduction is similar to mine. An issue that also affects the EPH is setting up the right metrics between the &quot;knobs&quot;.&lt;br /&gt;&lt;br /&gt;Table: Parameters of the Cathare code as used in the EPH. 
From &quot;L&#39;Hypersurface Probabiliste Construction explicite à partir du Code Cathare&quot; by Olga Zeydina.&lt;br /&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/2211733055854567924/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/2211733055854567924' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/2211733055854567924'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/2211733055854567924'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2009/07/high-throughput-testing-and-tcs-meets.html' title='High Throughput Testing and TCS meets EDA'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhd7PL8iJICy0yx_snID7TSbFXa6WvbW45f5bjTp64QcvI7ZSg8Ua-9nkrW69zrF1sQ7PaDanVWoO-DrG2iYyIUYLy0okskoez8EDBeOwvawHCRcWD-1tuX3SNlXv8pNNeRoKN8t_-1cRQ/s72-c/list-parameters-eph-cathare.JPG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-6353832903279146722</id><published>2009-04-16T21:39:00.001+02:00</published><updated>2009-04-16T21:39:00.854+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="compressedsensing"/><category scheme="http://www.blogger.com/atom/ns#" term="grouptesting"/><title type='text'>Group Testing and Compressive Sensing</title><content type='html'>&lt;a onblur=&quot;try 
{parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxtXUj4lsQF3ihLefyAyBysngylKKDdEkWDHbq6sQnNEC8jxT6axjQvT62NqzZXe7s2NW60hJO4J0byOAOwiALauDngC_3gf-cdPHq9W1lLOALPr_wEJvIFaQ4G8n6K57VuDQfunXsU2I/s1600-h/annahts-1.JPG&quot;&gt;&lt;img style=&quot;margin: 0pt 0pt 10px 10px; float: right; cursor: pointer; width: 320px; height: 205px;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxtXUj4lsQF3ihLefyAyBysngylKKDdEkWDHbq6sQnNEC8jxT6axjQvT62NqzZXe7s2NW60hJO4J0byOAOwiALauDngC_3gf-cdPHq9W1lLOALPr_wEJvIFaQ4G8n6K57VuDQfunXsU2I/s320/annahts-1.JPG&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5323912115029758050&quot; border=&quot;0&quot; /&gt;&lt;/a&gt;As &lt;a href=&quot;http://pagesperso-orange.fr/scmsa/&quot;&gt;Bernard Beauzamy&lt;/a&gt; will present a paper at the Aerospace Testing, Design and Manufacturing 2009 Seminar in Munich, I was reminded that he refers to a testing procedure called &quot;group testing&quot; as a solution for companies to comply with the &lt;a href=&quot;http://ec.europa.eu/environment/chemicals/reach/reach_intro.htm&quot;&gt;E.U. REACH regulations&lt;/a&gt; without incurring major financial losses.&lt;br /&gt;&lt;div style=&quot;text-align: justify;&quot;&gt;&lt;br /&gt;There is a clear connection between group testing and compressive sensing, a subject area for which I perform some type of Technology Watch on a blog: &lt;a href=&quot;http://nuit-blanche.blogspot.com/&quot; target=&quot;_blank&quot;&gt;http://nuit-blanche.blogspot.&lt;wbr&gt;com&lt;/a&gt; . 
More information can be found in either the &lt;a href=&quot;http://www.dsp.ece.rice.edu/cs&quot;&gt;repository at Rice University&lt;/a&gt; or in the &lt;a href=&quot;http://igorcarron.googlepages.com/cs&quot;&gt;Compressive Sensing Big Picture page&lt;/a&gt;.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;http://www.dms.auburn.edu/%7Erodgec1/cadcom/applications/hwansnap/hwansnap.html&quot;&gt;Group testing&lt;/a&gt; was first used extensively by the U.S. Army in the 1940s to screen its troops for syphilis. The procedure enabled enormous savings since the detection kits were pricey at the time. One can read the first few paragraphs of this report by &lt;a href=&quot;http://www.acm.caltech.edu/%7Ejtropp/&quot;&gt;Joel Tropp&lt;/a&gt; and &lt;a href=&quot;http://www.math.lsa.umich.edu/%7Eannacg/&quot;&gt;Anna Gilbert&lt;/a&gt; on the connection between compressive sensing and group testing in &lt;a href=&quot;http://www.acm.caltech.edu/techreports/Caltech_ACM_TR_2007_01.pdf&quot;&gt;Signal Recovery From Random Measurements via Orthogonal Matching Pursuit: The Gaussian Case&lt;/a&gt;. Also of interest is the video by &lt;a href=&quot;http://www.math.lsa.umich.edu/%7Eannacg/&quot;&gt;Anna Gilbert&lt;/a&gt;, who presented their latest results using this technique (CS + group testing) on bio chips called &lt;a href=&quot;http://www.mun.ca/biology/scarr/DNA_Chips.html&quot;&gt;SNP&lt;/a&gt; chips (in this experiment they do not use Gaussian matrices but rather sparser constructs called &lt;a href=&quot;http://groups.csail.mit.edu/toc/sparse/wiki/index.php?title=Sparse_Recovery_Experiments&quot;&gt;Expander Graphs&lt;/a&gt;). 
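To make the pooling idea concrete, here is a toy sketch of my own (not code from the cited papers) of the simplest boolean group-testing decoder, usually called COMP: each test reports whether its pool contains at least one defective, and any sample that appears in a negative pool can be cleared immediately.

```python
import numpy as np

def comp_decode(A, y):
    """COMP decoder: every item appearing in a negative pool is cleared;
    whatever was never cleared stays flagged as possibly defective."""
    flagged = np.ones(A.shape[1], dtype=bool)
    for pool, positive in zip(A, y):
        if not positive:
            flagged &= ~pool                  # a negative pool clears its members
    return np.flatnonzero(flagged)

rng = np.random.default_rng(0)
n, m = 100, 60                               # 100 samples, 60 pooled tests
x = np.zeros(n, dtype=bool)
x[[7, 42]] = True                            # two defective samples
A = rng.random((m, n)) < 0.1                 # each sample joins each pool w.p. 0.1
y = (A & x).any(axis=1)                      # a pool is positive iff it holds a defective
decoded = comp_decode(A, y)
print(decoded)                               # always contains 7 and 42; possibly a few extras
```

COMP never misses a defective but may keep a few false positives when too few pools are used; that trade-off between the number of tests and the recovery guarantee is exactly what the boolean compressed sensing literature quantifies.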
I pointed to it &lt;a href=&quot;http://nuit-blanche.blogspot.com/2008/12/cs-group-testing-in-biology-tiny-masks.html&quot;&gt;before&lt;/a&gt; and the &lt;a href=&quot;http://intractability.princeton.edu/videos/stream/videoplay.html?videofile=cs/gw2008-500kbps/Gilbert.mp4&quot;&gt; video is here.&lt;/a&gt; While current results are encouraging, it may be difficult for this type of solution to win over researchers in pure biology unless it brings tremendous savings in the detection of a given disease. On the other hand, if the financial gain is significant and the testing procedure is approved by the proper regulatory authorities, commercial companies are likely to adopt this type of technique faster.&lt;br /&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/6353832903279146722/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/6353832903279146722' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/6353832903279146722'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/6353832903279146722'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2009/04/group-testing-and-compressive-sensing.html' title='Group Testing and Compressive Sensing'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" 
url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxtXUj4lsQF3ihLefyAyBysngylKKDdEkWDHbq6sQnNEC8jxT6axjQvT62NqzZXe7s2NW60hJO4J0byOAOwiALauDngC_3gf-cdPHq9W1lLOALPr_wEJvIFaQ4G8n6K57VuDQfunXsU2I/s72-c/annahts-1.JPG" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-319304714858376628.post-8799920780781517561</id><published>2008-07-03T18:08:00.007+02:00</published><updated>2008-07-08T12:10:35.221+02:00</updated><category scheme="http://www.blogger.com/atom/ns#" term="site instructions"/><category scheme="http://www.blogger.com/atom/ns#" term="what is RMM?"/><title type='text'>Welcome to the RMM blog</title><content type='html'>This blog is aimed at Engineers and Applied Mathematicians interested in developing tools (both algorithms and software) which can handle, at the conceptual level, the three difficulties that are usually met in any real-world project:&lt;br /&gt;&lt;br /&gt;&lt;li&gt;The data are missing or corrupted;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;The laws describing the phenomena are not completely known;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;The objectives are multiple and contradictory.&lt;/li&gt;&lt;br /&gt;&lt;br /&gt;The RMM program site is hosted at the &lt;a href=&quot;http://pagesperso-orange.fr/scmsa/robust.htm&quot;&gt;SCM site&lt;/a&gt;. Bernard Beauzamy has devised a list of the different aspects of the &lt;a href=&quot;http://pagesperso-orange.fr/scmsa/RMM/RMM_general.htm&quot;&gt;ongoing program&lt;/a&gt;. 
They are as follows:&lt;br /&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;http://www.scmsa.com/RMM/robust1.htm&quot;&gt;What is a model?&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;http://www.scmsa.com/RMM/robust2.htm&quot;&gt;Mathematical description of the objectives&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;http://www.scmsa.com/RMM/robust3.htm&quot;&gt;The basic rule of real-life mathematics&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;http://www.scmsa.com/RMM/robust4.htm&quot;&gt;Can we model everyday life preoccupations?&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;http://www.scmsa.com/RMM/probabilistic_methods.pdf&quot;&gt;Probabilistic methods&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;http://www.scmsa.com/RMM/RMM_four_steps.htm&quot;&gt;The four steps of an RMM program&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;http://www.scmsa.com/RMM/RMM_members.htm&quot;&gt;List of participating people and institutions&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;http://www.scmsa.com/RMM/RMM_ongoing.htm&quot;&gt;Ongoing events&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;li&gt;&lt;a href=&quot;http://www.scmsa.com/RMM/RMM_download.htm&quot;&gt;RMM Library&lt;/a&gt;&lt;/li&gt;&lt;br /&gt;&lt;br /&gt;Any new items on these pages will also be featured on this blog.&lt;br /&gt;&lt;br /&gt;How can you find out if there is something new on the blog? You can either:&lt;br /&gt;&lt;br /&gt;&lt;li&gt;Come to the &lt;a href=&quot;http://robustmathematicalmodeling.blogspot.com/&quot;&gt;site&lt;/a&gt;, or&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Subscribe to this feed (see right column), or&lt;/li&gt;&lt;br /&gt;&lt;li&gt;Subscribe to the blog by e-mail (top right-hand column) and receive new posts directly in your inbox.&lt;/li&gt;&lt;br /&gt;Do you want to contribute? You can guest blog or send me some information you want to feature. 
If you want to post an article under your own name, you will be asked to obtain a Google account. If you do not want to go through the hassle of this process, I can also post the entry directly to the blog with your name.&lt;br /&gt;&lt;br /&gt;On a different note, &lt;a href=&quot;http://www.linkedin.com/in/wvanackooij&quot;&gt;Wim van Ackooij&lt;/a&gt; and &lt;a href=&quot;http://www.linkedin.com/in/igorcarron&quot;&gt;I&lt;/a&gt; have created a &lt;a href=&quot;http://www.linkedin.com/&quot;&gt;LinkedIn&lt;/a&gt; group devoted to the RMM program.&lt;a onblur=&quot;try {parent.deselectBloggerImageGracefully();} catch(e) {}&quot; href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivmm0U8I4LO1HlCnZnM041ko5Wx6Z_4sQnGsn2YVMSxhcwet1Rrn9l42vaM0iW5G48DMlq-xQ396IsSYN2fqP1KN0AFsdkgciwUmo1IEtEyg4_d-DBCkr4YcDoZO1NN3XbUhRb_saJC6o/s1600-h/logo_rmm.jpg&quot;&gt;&lt;img style=&quot;margin: 0pt 0pt 10px 10px; float: right; cursor: pointer;&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivmm0U8I4LO1HlCnZnM041ko5Wx6Z_4sQnGsn2YVMSxhcwet1Rrn9l42vaM0iW5G48DMlq-xQ396IsSYN2fqP1KN0AFsdkgciwUmo1IEtEyg4_d-DBCkr4YcDoZO1NN3XbUhRb_saJC6o/s320/logo_rmm.jpg&quot; alt=&quot;&quot; id=&quot;BLOGGER_PHOTO_ID_5219066371835984418&quot; border=&quot;0&quot; /&gt;&lt;/a&gt; If you are on &lt;a href=&quot;http://www.linkedin.com/&quot;&gt;LinkedIn&lt;/a&gt; you may want to join. 
To do so, please go to either &lt;a href=&quot;http://www.linkedin.com/in/wvanackooij&quot;&gt;Wim&lt;/a&gt;&#39;s or &lt;a href=&quot;http://www.linkedin.com/in/igorcarron&quot;&gt;Igor&lt;/a&gt;&#39;s profile on &lt;a href=&quot;http://www.linkedin.com/&quot;&gt;LinkedIn&lt;/a&gt; and click on the RMM group logo listed among their group affiliations.</content><link rel='replies' type='application/atom+xml' href='http://robustmathematicalmodeling.blogspot.com/feeds/8799920780781517561/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://www.blogger.com/comment/fullpage/post/319304714858376628/8799920780781517561' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/8799920780781517561'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/319304714858376628/posts/default/8799920780781517561'/><link rel='alternate' type='text/html' href='http://robustmathematicalmodeling.blogspot.com/2008/07/welcome-to-rmm-blog.html' title='Welcome to the RMM blog'/><author><name>Igor</name><uri>http://www.blogger.com/profile/17474880327699002140</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='16' height='16' src='https://img1.blogblog.com/img/b16-rounded.gif'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEivmm0U8I4LO1HlCnZnM041ko5Wx6Z_4sQnGsn2YVMSxhcwet1Rrn9l42vaM0iW5G48DMlq-xQ396IsSYN2fqP1KN0AFsdkgciwUmo1IEtEyg4_d-DBCkr4YcDoZO1NN3XbUhRb_saJC6o/s72-c/logo_rmm.jpg" height="72" width="72"/><thr:total>0</thr:total></entry></feed>