<?xml version='1.0' encoding='UTF-8'?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><feed xmlns='http://www.w3.org/2005/Atom' xmlns:openSearch='http://a9.com/-/spec/opensearchrss/1.0/' xmlns:blogger='http://schemas.google.com/blogger/2008' xmlns:georss='http://www.georss.org/georss' xmlns:gd="http://schemas.google.com/g/2005" xmlns:thr='http://purl.org/syndication/thread/1.0'><id>tag:blogger.com,1999:blog-2715968472735546962</id><updated>2026-03-07T09:57:36.306+01:00</updated><title type='text'>Bannalia: trivial notes on themes diverse</title><subtitle type='html'></subtitle><link rel='http://schemas.google.com/g/2005#feed' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/posts/default'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default?redirect=false'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/'/><link rel='hub' href='http://pubsubhubbub.appspot.com/'/><link rel='next' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default?start-index=26&amp;max-results=25&amp;redirect=false'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><generator version='7.00' 
uri='http://www.blogger.com'>Blogger</generator><openSearch:totalResults>159</openSearch:totalResults><openSearch:startIndex>1</openSearch:startIndex><openSearch:itemsPerPage>25</openSearch:itemsPerPage><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-3395523721832873193</id><published>2025-12-20T20:43:00.003+01:00</published><updated>2025-12-22T10:40:02.062+01:00</updated><title type='text'>Boost.MultiIndex refactored</title><content type='html'>&lt;p&gt;&lt;a href=&quot;https://github.com/boostorg/multi_index&quot;&gt;Boost.MultiIndex&lt;/a&gt;&amp;nbsp;was launched as part of Boost 1.32 in November 2004. The library is still actively maintained and in use by some notable projects such as &lt;a href=&quot;https://github.com/bitcoin/bitcoin&quot;&gt;Bitcoin Core&lt;/a&gt;,&amp;nbsp;&lt;a href=&quot;https://gitlab.cern.ch/atlas-tdaq-software&quot;&gt;CERN ATLAS&lt;/a&gt;,&amp;nbsp;&lt;a href=&quot;https://github.com/ClickHouse/ClickHouse&quot;&gt;ClickHouse&lt;/a&gt;,&amp;nbsp;&lt;a href=&quot;https://github.com/facebook/folly&quot;&gt;Folly&lt;/a&gt;&amp;nbsp;and &lt;a href=&quot;https://github.com/redpanda-data/redpanda&quot;&gt;Redpanda&lt;/a&gt;, to name a few.&lt;/p&gt;&lt;p&gt;Back in 2004, variadic templates and typelists were emulated in C++03 with the help of libraries like &lt;a href=&quot;https://github.com/boostorg/preprocessor&quot;&gt;Boost.Preprocessor&lt;/a&gt; and &lt;a href=&quot;https://github.com/boostorg/mpl&quot;&gt;Boost.MPL&lt;/a&gt;. These libraries were groundbreaking at the time, but they have been largely superseded by language features available since C++11. 
Given that Boost.MultiIndex is no longer usable in C++03 (some internal dependencies have moved in the last few years to requiring C++11 as a minimum), it was about time to give the library an upgrade.&lt;/p&gt;&lt;p&gt;Starting in Boost 1.91 (target date April 2026), all the internal machinery of Boost.MultiIndex dependent on&amp;nbsp;Boost.Preprocessor and&amp;nbsp;Boost.MPL has been refactored to use C++11 variadic templates and &lt;a href=&quot;https://github.com/boostorg/mp11&quot;&gt;Boost.Mp11&lt;/a&gt;:&lt;/p&gt;&lt;ul&gt;&lt;li&gt;All type lists accepted or provided by the library (&lt;span style=&quot;font-family: courier;&quot;&gt;indexed_by&lt;/span&gt;,&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;tag&lt;/span&gt;, nested typedefs &lt;span style=&quot;font-family: courier;&quot;&gt;index_specifier_type_list&lt;/span&gt;, &lt;span style=&quot;font-family: courier;&quot;&gt;index_type_list&lt;/span&gt;, &lt;span style=&quot;font-family: courier;&quot;&gt;iterator_type_list&lt;/span&gt; and &lt;span style=&quot;font-family: courier;&quot;&gt;const_iterator_type_list&lt;/span&gt;) are no longer based on Boost.MPL; they are now&amp;nbsp;&lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/mp11/doc/html/mp11.html#definitions&quot;&gt;Boost.Mp11 lists&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;composite_key&lt;/span&gt; and associated class templates (&lt;span style=&quot;font-family: courier;&quot;&gt;composite_key_equal_to&lt;/span&gt;, &lt;span style=&quot;font-family: courier;&quot;&gt;composite_key_compare&lt;/span&gt;, &lt;span style=&quot;font-family: courier;&quot;&gt;composite_key_hash&lt;/span&gt;) have been made truly variadic (previously, the maximum number of template arguments was limited by the macro&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;BOOST_MULTI_INDEX_LIMIT_COMPOSITE_KEY_SIZE&lt;/span&gt;).&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;The upgrade should be transparent to end users in the overwhelming majority of cases, although we discuss some potential backwards compatibility issues later.&lt;/p&gt;&lt;p&gt;&lt;a name=&quot;warning&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Reduction in lengths of type and symbol names&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Consider:&lt;/p&gt;
&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;using namespace boost::multi_index;

struct element
{
  int x, y;
};

using container = multi_index_container&amp;lt;
  element,
  indexed_by&amp;lt;
    random_access&amp;lt;tag&amp;lt;struct i0&amp;gt;&amp;gt;,
    ordered_unique&amp;lt;tag&amp;lt;struct i1&amp;gt;, key&amp;lt;&amp;amp;element::x, &amp;amp;element::y&amp;gt;&amp;gt;
  &amp;gt;
&amp;gt;;

container c;
auto&amp;amp;     idx = c.get&amp;lt;0&amp;gt;(); // first index of the container&lt;/pre&gt;&lt;/div&gt;
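&lt;p&gt;As a quick aside, an index can be retrieved by its tag as well as by its ordinal position, and composite keys support lookup on all of their fields or just a prefix of them; a minimal sketch:&lt;/p&gt;
&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;auto&amp;amp; idx1 = c.get&amp;lt;i1&amp;gt;();                    // same index as c.get&amp;lt;1&amp;gt;()
auto  it1  = idx1.find(std::make_tuple(0, 0)); // lookup on (x, y)
auto  it2  = idx1.find(0);                     // partial lookup on x alone&lt;/pre&gt;&lt;/div&gt;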
&lt;p&gt;Prior to Boost 1.91, &lt;span style=&quot;font-family: courier;&quot;&gt;typeid(c).name()&lt;/span&gt; and &lt;span style=&quot;font-family: courier;&quot;&gt;typeid(idx).name()&lt;/span&gt;&amp;nbsp;were the following in Visual Studio (after formatting):&lt;/p&gt;&lt;div&gt;&lt;pre class=&quot;prettyprint&quot; style=&quot;font-size: 80%;&quot;&gt;class boost::multi_index::multi_index_container&amp;lt;
  struct element,
  struct boost::multi_index::indexed_by&amp;lt;
    struct boost::multi_index::random_access&amp;lt;
      struct boost::multi_index::tag&amp;lt;
        struct i0,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na
      &amp;gt;
    &amp;gt;,
    struct boost::multi_index::ordered_unique&amp;lt;
      struct boost::multi_index::tag&amp;lt;
        struct i1,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
        struct boost::mpl::na
      &amp;gt;,
      struct boost::multi_index::composite_key&amp;lt;
        struct element,
        struct boost::multi_index::member&amp;lt;struct element, int, 0&amp;gt;,
        struct boost::multi_index::member&amp;lt;struct element, int, 4&amp;gt;,
        struct boost::tuples::null_type, struct boost::tuples::null_type,
        struct boost::tuples::null_type, struct boost::tuples::null_type,
        struct boost::tuples::null_type, struct boost::tuples::null_type,
        struct boost::tuples::null_type, struct boost::tuples::null_type
      &amp;gt;,
      struct boost::mpl::na
    &amp;gt;,
    struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
    struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
    struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
    struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
    struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
    struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na
  &amp;gt;,
  class std::allocator&amp;lt;struct element&amp;gt;
&amp;gt;

class boost::multi_index::detail::random_access_index&amp;lt;
  struct boost::multi_index::detail::nth_layer&amp;lt;
    1,
    struct element,
    struct boost::multi_index::indexed_by&amp;lt;
      struct boost::multi_index::random_access&amp;lt;
        struct boost::multi_index::tag&amp;lt;
          struct i0,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na
        &amp;gt;
      &amp;gt;,
      struct boost::multi_index::ordered_unique&amp;lt;
        struct boost::multi_index::tag&amp;lt;
          struct i1,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
          struct boost::mpl::na
        &amp;gt;,
        struct boost::multi_index::composite_key&amp;lt;
          struct element,
          struct boost::multi_index::member&amp;lt;struct element, int, 0&amp;gt;,
          struct boost::multi_index::member&amp;lt;struct element, int, 4&amp;gt;,
          struct boost::tuples::null_type, struct boost::tuples::null_type,
          struct boost::tuples::null_type, struct boost::tuples::null_type,
          struct boost::tuples::null_type, struct boost::tuples::null_type,
          struct boost::tuples::null_type, struct boost::tuples::null_type
        &amp;gt;,
        struct boost::mpl::na
      &amp;gt;,
      struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
      struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
      struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
      struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
      struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na,
      struct boost::mpl::na, struct boost::mpl::na, struct boost::mpl::na
    &amp;gt;,
    class std::allocator&amp;lt;struct element&amp;gt;
  &amp;gt;,
  struct boost::mpl::vector1&amp;lt;struct i0&amp;gt;
&amp;gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Those many &lt;span style=&quot;font-family: courier;&quot;&gt;boost::mpl::na&lt;/span&gt;s are default template arguments used by Boost.MPL to emulate variadic class templates; similarly,&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;boost::tuples::null_type&lt;/span&gt; is the default argument for non-variadic &lt;span style=&quot;font-family: courier;&quot;&gt;boost::tuple&lt;/span&gt;. With the upgrade, the corresponding type names are:&amp;nbsp;&lt;/p&gt;
&lt;div&gt;&lt;pre class=&quot;prettyprint&quot; style=&quot;font-size: 80%;&quot;&gt;class boost::multi_index::multi_index_container&amp;lt;
  struct element,
  struct boost::multi_index::indexed_by&amp;lt;
    struct boost::multi_index::random_access&amp;lt;
      struct boost::multi_index::tag&amp;lt;struct i0&amp;gt;
    &amp;gt;,
    struct boost::multi_index::ordered_unique&amp;lt;
      struct boost::multi_index::tag&amp;lt;struct i1&amp;gt;,
      struct boost::multi_index::composite_key&amp;lt;
        struct element,
        struct boost::multi_index::member&amp;lt;struct element, int, 0&amp;gt;,
        struct boost::multi_index::member&amp;lt;struct element, int, 4&amp;gt;
      &amp;gt;,
      void
    &amp;gt;
  &amp;gt;,
  class std::allocator&amp;lt;struct element&amp;gt;
&amp;gt;

class boost::multi_index::detail::random_access_index&amp;lt;
  struct boost::multi_index::detail::nth_layer&amp;lt;
    1,
    struct element,
    struct boost::multi_index::indexed_by&amp;lt;
      struct boost::multi_index::random_access&amp;lt;
        struct boost::multi_index::tag&amp;lt;struct i0&amp;gt;
      &amp;gt;,
      struct boost::multi_index::ordered_unique&amp;lt;
        struct boost::multi_index::tag&amp;lt;struct i1&amp;gt;,
        struct boost::multi_index::composite_key&amp;lt;
          struct element,
          struct boost::multi_index::member&amp;lt;struct element, int, 0&amp;gt;,
          struct boost::multi_index::member&amp;lt;struct element, int, 4&amp;gt;
        &amp;gt;,
        void
      &amp;gt;
    &amp;gt;,
    class std::allocator&amp;lt;struct element&amp;gt;
  &amp;gt;,
  struct boost::multi_index::tag&amp;lt;struct i0&amp;gt;
&amp;gt;&lt;/pre&gt;&lt;/div&gt;
&lt;p&gt;Terser type names are beneficial when inspecting compile error messages related to the use of the library. Internal symbol names are also drastically reduced, which can improve compile and link times.&lt;/p&gt;&lt;p&gt;&lt;a name=&quot;warning&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Faster compilation&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;We have measured compile times for a synthetic example program using Boost.MultiIndex 1.90 and the upcoming 1.91 version under Clang 20, GCC 15 and Visual Studio 2022 (benchmark setup &lt;a href=&quot;https://github.com/joaquintides/multi_index_compile_time_1_91_vs_1_90&quot;&gt;here&lt;/a&gt;). The new version is faster in all three compilers by around 20% (Clang 1.19x, GCC 1.25x, Visual Studio 1.20x). Your mileage may, of course, vary.&lt;/p&gt;&lt;p&gt;&lt;a name=&quot;warning&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Backwards compatibility&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;We foresee that most existing users of Boost.MultiIndex won&#39;t be affected by the upgrade beyond the collateral benefits described above. Some changes to user code may be needed, though, in rare situations:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;If you were using Boost.MPL to synthesize or analyze the typelists featured in the library, your code will stop working, as these are now Boost.Mp11 lists. 
If you are not in a position to make the necessary changes, the old Boost.MPL-based frontend can be restored by globally defining the macro &lt;span style=&quot;font-family: courier;&quot;&gt;BOOST_MULTI_INDEX_ENABLE_MPL_SUPPORT&lt;/span&gt;.&lt;/li&gt;&lt;li&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;composite_key::key_extractors&lt;/span&gt; returns a &lt;span style=&quot;font-family: courier;&quot;&gt;std::tuple&lt;/span&gt; instead of a &lt;span style=&quot;font-family: courier;&quot;&gt;boost::tuple&lt;/span&gt; (and similarly for&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;composite_key_equal_to&lt;/span&gt;, &lt;span style=&quot;font-family: courier;&quot;&gt;composite_key_compare&lt;/span&gt; and&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;composite_key_hash&lt;/span&gt;). This change is needed because &lt;span style=&quot;font-family: courier;&quot;&gt;boost::tuple&lt;/span&gt; has a limit on the number of arguments it accepts, which is no longer the case for&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;composite_key&lt;/span&gt;. If you&#39;re using&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;key_extractors&lt;/span&gt;, chances are you will need to modify your code (for instance,&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;std::tuple&lt;/span&gt; does not provide member functions of the form &lt;span style=&quot;font-family: courier;&quot;&gt;t.get&amp;lt;N&amp;gt;()&lt;/span&gt;; use &lt;span style=&quot;font-family: courier;&quot;&gt;std::get&amp;lt;N&amp;gt;(t)&lt;/span&gt; instead).&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;&lt;a name=&quot;warning&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Call to action&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;If you&#39;re a Boost.MultiIndex user, please test your project with the new version of the library to ensure there won&#39;t be any issues with the upgrade; Boost 1.91 will ship in April 2026, so as of this writing there&#39;s still plenty of time to fix any detected problems. 
The simplest way to do the test is to clone the develop branch of &lt;a href=&quot;https://github.com/boostorg/multi_index&quot;&gt;boostorg/multi_index&lt;/a&gt; and add its &lt;span style=&quot;font-family: courier;&quot;&gt;include&lt;/span&gt; directory to your include list &lt;i&gt;before&lt;/i&gt; the path to your local installation of Boost. Please report your results through &lt;a href=&quot;https://github.com/boostorg/multi_index?tab=readme-ov-file#support&quot;&gt;the usual channels&lt;/a&gt;. Thank you!&lt;/p&gt;&lt;p&gt;&amp;nbsp;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/3395523721832873193/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2025/12/boostmultiindex-refactored.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/3395523721832873193'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/3395523721832873193'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2025/12/boostmultiindex-refactored.html' title='Boost.MultiIndex refactored'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-8604852498936732126</id><published>2025-11-14T19:50:00.003+01:00</published><updated>2025-11-16T21:07:20.026+01:00</updated><title type='text'>Comparing the run-time performance of 
Fil-C and ASAN</title><content type='html'>&lt;p&gt;After the publication of the &lt;a href=&quot;https://bannalia.blogspot.com/2025/11/some-experiments-with-boostunordered-on.html&quot;&gt;experiments with Boost.Unordered on Fil-C&lt;/a&gt;, some readers asked for a comparison of the run-time performance of &lt;a href=&quot;https://fil-c.org/&quot;&gt;Fil-C&lt;/a&gt; and &lt;a href=&quot;https://clang.llvm.org/docs/AddressSanitizer.html&quot;&gt;Clang&#39;s AddressSanitizer&lt;/a&gt; (ASAN).&lt;/p&gt;&lt;p&gt;&lt;a name=&quot;warning&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Warning&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Please do not construe this article as implying that Fil-C and ASAN are competing technologies within the same application space. Whereas ASAN is designed to detect bugs resulting in memory access violations, Fil-C sports a stricter notion of memory safety, including UB situations where a pointer is directed to a valid memory region that is nonetheless out of bounds with respect to the pointer&#39;s provenance. That said, there&#39;s some overlap between the two tools, so it&#39;s only natural to ask about their relative impact on execution times.&lt;/p&gt;&lt;p&gt;&lt;a name=&quot;results&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Results&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;Our previous&amp;nbsp;&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map_fil-c&quot;&gt;benchmarking repo&lt;/a&gt; has been updated to include results for plain Clang 18, Clang 18 with ASAN enabled, and Fil-C&amp;nbsp;v0.674, all with release mode settings. 
The following figures show&amp;nbsp;execution times in ns per element for Clang/ASAN (solid lines) and Fil-C (dashed lines) for three &lt;a href=&quot;https://www.boost.org/doc/libs/latest/libs/unordered/doc/html/unordered/intro.html&quot;&gt;Boost.Unordered&lt;/a&gt; containers (&lt;span style=&quot;font-family: courier;&quot;&gt;boost::unordered_map&lt;/span&gt;,&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;boost::unordered_flat_map&lt;/span&gt; and &lt;span style=&quot;font-family: courier;&quot;&gt;boost::unordered_node_map&lt;/span&gt;) and four scenarios.&lt;/p&gt;

&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4gG1FfoBIXXan5fgTSHx_gcG-rLUgW0x-a4VWmnOvEtNiasZnKK52tynm3xEbO0zUvDoJGTjVNYnKuH-J3fFjezP1gCUbmXukwZyQlQtKD0WXRGtTVVP3KVv0NxjEtsMaq3-11bl-GP_UAbTvKAPsNXaGlHd8tmjr_UhPnN3Dr2nBxjztGXZKOCJ7wgw/s949/running_insertion.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;611&quot; data-original-width=&quot;949&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4gG1FfoBIXXan5fgTSHx_gcG-rLUgW0x-a4VWmnOvEtNiasZnKK52tynm3xEbO0zUvDoJGTjVNYnKuH-J3fFjezP1gCUbmXukwZyQlQtKD0WXRGtTVVP3KVv0NxjEtsMaq3-11bl-GP_UAbTvKAPsNXaGlHd8tmjr_UhPnN3Dr2nBxjztGXZKOCJ7wgw/w275/running_insertion.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;    
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxFKeK8FH-QWJMjnE8p_flnJLnatFTAXQmY5kuY_1kFRSUaFjj2D8QW1wQvm-oBxT4Rd6b5KO7AqL6SiTwpLFgSDe3ChmE35tFi7oJuc5Y0Wqjlx8tHZJrlOsVoeeQ-buzEcIWNW9d1dRwI7SsOPU24YmS2bBQpz8YY_tPsID-SMJjhJ9yz4QEcmVAvK0/s949/running_erasure.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;611&quot; data-original-width=&quot;949&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjxFKeK8FH-QWJMjnE8p_flnJLnatFTAXQmY5kuY_1kFRSUaFjj2D8QW1wQvm-oBxT4Rd6b5KO7AqL6SiTwpLFgSDe3ChmE35tFi7oJuc5Y0Wqjlx8tHZJrlOsVoeeQ-buzEcIWNW9d1dRwI7SsOPU24YmS2bBQpz8YY_tPsID-SMJjhJ9yz4QEcmVAvK0/w275/running_erasure.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;/th&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Running insertion&lt;/b&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Running erasure&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;br /&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;     
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhi1u_Y4oFt9JcLs0lbVxQlFJ-nOqVAyX1VlVcZfuPc5fJ76vLVic9CPKIP2LV8c74eNO_kpts6HEInH2NucZU1PySxmhqJ50yu5ESIIYZe8UGy7dgB7UuloVXZEWpcxynxWr9s5fIfwp_Dt4oGnDRZwSjC6j-X8-4gDBtnSx61HkV_fNnl4D0IUIjXh5Q/s949/successful_lookup.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;611&quot; data-original-width=&quot;949&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhi1u_Y4oFt9JcLs0lbVxQlFJ-nOqVAyX1VlVcZfuPc5fJ76vLVic9CPKIP2LV8c74eNO_kpts6HEInH2NucZU1PySxmhqJ50yu5ESIIYZe8UGy7dgB7UuloVXZEWpcxynxWr9s5fIfwp_Dt4oGnDRZwSjC6j-X8-4gDBtnSx61HkV_fNnl4D0IUIjXh5Q/w275/successful_lookup.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;
&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRgVCbaAAV7JBWJASnQ2eGQ6rHQarIUIV17F4xcIjzJZmXlp3GSFkwcRB-VhACfWKqW60xNqOev2ZPbKKm50jYUE1-9id1LfMhW-eJt0CmmRsIZ9XsPmk3W2cpnXynvChNvNCx0ZMdBPxz0-eTuCKzy35Bzrb2htmYO297LWA7TmWeZUVk0gRzF9nMqk4/s949/unsuccessful_lookup.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;611&quot; data-original-width=&quot;949&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRgVCbaAAV7JBWJASnQ2eGQ6rHQarIUIV17F4xcIjzJZmXlp3GSFkwcRB-VhACfWKqW60xNqOev2ZPbKKm50jYUE1-9id1LfMhW-eJt0CmmRsIZ9XsPmk3W2cpnXynvChNvNCx0ZMdBPxz0-eTuCKzy35Bzrb2htmYO297LWA7TmWeZUVk0gRzF9nMqk4/w275/unsuccessful_lookup.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt; 
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Successful lookup&lt;/b&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Unsuccessful lookup&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;In summary:&lt;br /&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Insertion:&lt;br /&gt;Fil-C is between&amp;nbsp;1.8x slower and&amp;nbsp;4.1x faster than ASAN (avg. 1.3x faster).&lt;/li&gt;&lt;li&gt;Erasure:&lt;br /&gt;Fil-C is between 1.3x slower and 9.2x faster than ASAN (avg. 1.9x faster).&lt;/li&gt;&lt;li&gt;Successful lookup:&lt;br /&gt;Fil-C is between 2.5x slower and 1.9x faster than ASAN (avg. 1.6x slower).&lt;/li&gt;&lt;li&gt;Unsuccessful lookup:&lt;br /&gt;Fil-C is between 2.6x slower and 1.4x faster than ASAN (avg. 1.9x slower).&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;So, results don&#39;t allow us to establish a clear-cut &quot;winner&quot;. When allocation/deallocation is involved, Fil-C seems to perform better (except for insertion when the memory working set gets past a certain threshold). For lookup, Fil-C is generally worse, and, again, the gap increases as more memory is used. A deeper analysis would require knowledge of the internals of both tools that I, unfortunately, lack.&lt;/p&gt;&lt;p&gt;&lt;a name=&quot;memory_usage&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;(Update Nov 16, 2025) Memory usage&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;&lt;p&gt;By request, we&#39;ve measured peak memory usage in GB (as reported by &lt;span style=&quot;font-family: courier;&quot;&gt;time -v&lt;/span&gt;) for the three environments and three scenarios (insertion, erasure, combined successful and unsuccessful lookup) involving five different containers from Boost.Unordered and Abseil, each holding 10M elements. The combination of containers doesn&#39;t allow us to discern how closed vs. 
open addressing affect memory usage overheads in ASAN and Fil-C.&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgumNslw1MD6xXDMBxFxMPCoY5-p6g7Am5vMkLhgiUefCjlUiGXiy3ctdIX-Kv-3j2CJJQNEmmeN7hQzjCFKEc5ddnDyDXsvtLswOhsuhQGWtsd9-CW0dO1RlLaI0hEpqUwWsdaTShdX-Ffcu4W7Zty-CjEPcaJKQLZEe7dFJxksn9yg7GFpcHX3HvURv8/s646/memory_usage.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;404&quot; data-original-width=&quot;646&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgumNslw1MD6xXDMBxFxMPCoY5-p6g7Am5vMkLhgiUefCjlUiGXiy3ctdIX-Kv-3j2CJJQNEmmeN7hQzjCFKEc5ddnDyDXsvtLswOhsuhQGWtsd9-CW0dO1RlLaI0hEpqUwWsdaTShdX-Ffcu4W7Zty-CjEPcaJKQLZEe7dFJxksn9yg7GFpcHX3HvURv8/w550/memory_usage.png&quot; width=&quot;550&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;ASAN uses between 2.1x and 2.6x more memory than regular Clang, whereas Fil-C ranges between 1.8x and 3.9x. Results are again a mixed bag. 
Fil-C performs worst for the erasure scenario, perhaps because of delays in memory reclamation by this technology&#39;s embedded garbage collector.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/8604852498936732126/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2025/11/comparing-run-time-performance-of-fil-c.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/8604852498936732126'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/8604852498936732126'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2025/11/comparing-run-time-performance-of-fil-c.html' title='Comparing the run-time performance of Fil-C and ASAN'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi4gG1FfoBIXXan5fgTSHx_gcG-rLUgW0x-a4VWmnOvEtNiasZnKK52tynm3xEbO0zUvDoJGTjVNYnKuH-J3fFjezP1gCUbmXukwZyQlQtKD0WXRGtTVVP3KVv0NxjEtsMaq3-11bl-GP_UAbTvKAPsNXaGlHd8tmjr_UhPnN3Dr2nBxjztGXZKOCJ7wgw/s72-w275-c/running_insertion.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-7116981756330663969</id><published>2025-11-10T18:44:00.001+01:00</published><updated>2025-11-10T18:47:50.936+01:00</updated><title type='text'>Some 
experiments with Boost.Unordered on Fil-C</title><content type='html'>&lt;p&gt;&lt;a href=&quot;https://fil-c.org/&quot;&gt;Fil-C&lt;/a&gt; is a C and C++ compiler built on top of LLVM that adds run-time memory-safety mechanisms preventing out-of-bounds and use-after-free accesses. This naturally comes at a price in execution time, so I was curious about how much of a penalty that is for a performance-oriented, relatively low-level library like &lt;a href=&quot;https://www.boost.org/doc/libs/latest/libs/unordered/doc/html/unordered/intro.html&quot;&gt;Boost.Unordered&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;&lt;a name=&quot;simd-accelerated-lookup&quot;&gt;&lt;/a&gt;&lt;/p&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;a name=&quot;simd-accelerated-lookup&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Compiling and testing&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;From the user&#39;s perspective, Fil-C is basically a Clang clone, so it is fairly easy to integrate into existing toolchains.&amp;nbsp;&lt;a href=&quot;https://github.com/joaquintides/fil-c_boost_unordered&quot;&gt;This repo&lt;/a&gt; shows how to plug Fil-C into Boost.Unordered&#39;s CI, which runs on GitHub Actions and is powered by Boost&#39;s own &lt;a href=&quot;https://www.boost.org/doc/libs/latest/tools/build/doc/html/index.html&quot;&gt;B2&lt;/a&gt;&amp;nbsp;build system. The most straightforward way to make B2 use Fil-C is by having a &lt;span style=&quot;font-family: courier;&quot;&gt;user-config.jam&lt;/span&gt; file like this:
&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;using clang : : fil++ ;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;which instructs B2 to use the &lt;span style=&quot;font-family: courier;&quot;&gt;clang&lt;/span&gt; toolset with the only change that the compiler name is not the default &lt;span style=&quot;font-family: courier;&quot;&gt;clang++&lt;/span&gt; but &lt;span style=&quot;font-family: courier;&quot;&gt;fil++&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;We&#39;ve encountered only minor difficulties during the process:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;In the environment used (Linux x64), B2 automatically includes &lt;span style=&quot;font-family: courier;&quot;&gt;--target=x86_64-pc-linux&lt;/span&gt; as part of the command line, which confuses the adapted version of libc++ shipping with Fil-C. This option had to be overridden with &lt;span style=&quot;font-family: courier;&quot;&gt;--target=x86_64-unknown-linux-gnu&lt;/span&gt; (which is the default for Clang).&lt;/li&gt;&lt;li&gt;As of this writing, Fil-C does not accept inline assembly code (&lt;span style=&quot;font-family: courier;&quot;&gt;asm&lt;/span&gt; or &lt;span style=&quot;font-family: courier;&quot;&gt;__asm__&lt;/span&gt; blocks), which Boost.Unordered uses to provide embedded &lt;a href=&quot;https://www.boost.org/doc/libs/latest/libs/unordered/doc/html/unordered/debuggability.html#debuggability_gdb_pretty_printers&quot;&gt;GDB pretty-printers&lt;/a&gt;. The feature was disabled with the macro &lt;span style=&quot;font-family: courier;&quot;&gt;BOOST_ALL_NO_EMBEDDED_GDB_SCRIPTS&lt;/span&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;Other than this, the extensive Boost.Unordered test suite compiled and ran successfully, &lt;i&gt;except&lt;/i&gt; for some tests involving Boost.Interprocess, which uses inline assembly in some places. CI completed in around 2.5x the time it takes with a regular compiler. 
It is worth noting that Fil-C happily accepted the SSE2 SIMD intrinsics that Boost.Unordered crucially relies on.&lt;/p&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;a name=&quot;simd-accelerated-lookup&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Run-time performance&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;We ran some performance tests compiled with Fil-C v0.674 on a Linux machine with release settings (benchmark code and setup&amp;nbsp;&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map&quot;&gt;here&lt;/a&gt;). The figures show execution times in ns per element for Clang 15 (solid lines) and Fil-C (dashed lines) and three containers: &lt;span style=&quot;font-family: courier;&quot;&gt;boost::unordered_map&lt;/span&gt; (closed-addressing hashmap), and &lt;span style=&quot;font-family: courier;&quot;&gt;boost::unordered_flat_map&lt;/span&gt; and &lt;span style=&quot;font-family: courier;&quot;&gt;boost::unordered_node_map&lt;/span&gt; (open addressing).&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjERzhwx0K7Ddd5sURbXDzgPsFoBrMIZ_FYW5G5jOcY0utkXslPOFQDt1jZF-7pwmvMo5UEl82QgR1Z5Vfwtz2WRyS-EzgBOp7-M05vhyphenhyphen3naOt77IgLsy931fgo1tDh5iA6WEiszSQehwULLnOQwST5wmSKB4yTBv-i2HO89aPKVrwQc-tT3E9lIHv-MaE/s949/running_insertion.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;612&quot; data-original-width=&quot;949&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjERzhwx0K7Ddd5sURbXDzgPsFoBrMIZ_FYW5G5jOcY0utkXslPOFQDt1jZF-7pwmvMo5UEl82QgR1Z5Vfwtz2WRyS-EzgBOp7-M05vhyphenhyphen3naOt77IgLsy931fgo1tDh5iA6WEiszSQehwULLnOQwST5wmSKB4yTBv-i2HO89aPKVrwQc-tT3E9lIHv-MaE/w275/running_insertion.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;    
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxHtMVCI-wU3Ur0dH67F5x6AvTFyzuWAjCuq1Zq_rgKTjJ4tfw-1yEPC7vLL4_bqvzqUX-R0L9XwUVudvyIueqeCAU8luG7Y66NHaTXqn7-i2mQga1pszZsbtPMQIgnZypIgWUAP6Z53bTigLngwEXqdaKNW0x5Gy9onnj0C2AxDlWeOexX157B4yzHn8/s949/running_erasure.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;612&quot; data-original-width=&quot;949&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgxHtMVCI-wU3Ur0dH67F5x6AvTFyzuWAjCuq1Zq_rgKTjJ4tfw-1yEPC7vLL4_bqvzqUX-R0L9XwUVudvyIueqeCAU8luG7Y66NHaTXqn7-i2mQga1pszZsbtPMQIgnZypIgWUAP6Z53bTigLngwEXqdaKNW0x5Gy9onnj0C2AxDlWeOexX157B4yzHn8/w275/running_erasure.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;&lt;br /&gt;&lt;/th&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Running insertion&lt;/b&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Running erasure&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;br /&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;     
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsnjT6xKxPEiT4ClF36ZqxGMrodTJa4F2MGOcyJDwvGDb8odMh6e2CI2wJp-9Yvn2WO6tmg_HVZHSQ-hdPTYdYacxtZY3a_1wuDnMpNh8ApoR4BPI7WeWqJvay3MSqhBrcgdV4y0mokvuXH97PpoChRQ0V6zS5tswaRDm14Hy8GRja5GqazuX-NTN4cVA/s949/successful_lookup.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;611&quot; data-original-width=&quot;949&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsnjT6xKxPEiT4ClF36ZqxGMrodTJa4F2MGOcyJDwvGDb8odMh6e2CI2wJp-9Yvn2WO6tmg_HVZHSQ-hdPTYdYacxtZY3a_1wuDnMpNh8ApoR4BPI7WeWqJvay3MSqhBrcgdV4y0mokvuXH97PpoChRQ0V6zS5tswaRDm14Hy8GRja5GqazuX-NTN4cVA/w275/successful_lookup.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;
&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzhJ-I5ZOnZiNwx1-VLIIl4aCEv5Qx_holt84NyFks5iLlWMHFEM8QbUygqHccXUnTxK-Z8Ox5b0per89a6eVsCUeVZhF6XnnFJgwEsSGfNTysN35TUxKNxAFL9ra8-i43y1fuZrpYVAe6locqbtArsjcb8VlX1qoZrzgIWxRwjf5cQWr_vb0s0VQqQ0I/s949/unsuccessful_lookup.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;611&quot; data-original-width=&quot;949&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzhJ-I5ZOnZiNwx1-VLIIl4aCEv5Qx_holt84NyFks5iLlWMHFEM8QbUygqHccXUnTxK-Z8Ox5b0per89a6eVsCUeVZhF6XnnFJgwEsSGfNTysN35TUxKNxAFL9ra8-i43y1fuZrpYVAe6locqbtArsjcb8VlX1qoZrzgIWxRwjf5cQWr_vb0s0VQqQ0I/w275/unsuccessful_lookup.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt; 
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Successful lookup&lt;/b&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Unsuccessful lookup&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;

&lt;p&gt;Execution with Fil-C is around 2x-4x slower, with wide variations depending on the benchmarked scenario and container of choice. Closed-addressing &lt;span style=&quot;font-family: courier;&quot;&gt;boost::unordered_map&lt;/span&gt; is the container experiencing the largest degradation, presumably because it does the most amount of pointer chasing.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/7116981756330663969/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2025/11/some-experiments-with-boostunordered-on.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/7116981756330663969'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/7116981756330663969'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2025/11/some-experiments-with-boostunordered-on.html' title='Some experiments with Boost.Unordered on Fil-C'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjERzhwx0K7Ddd5sURbXDzgPsFoBrMIZ_FYW5G5jOcY0utkXslPOFQDt1jZF-7pwmvMo5UEl82QgR1Z5Vfwtz2WRyS-EzgBOp7-M05vhyphenhyphen3naOt77IgLsy931fgo1tDh5iA6WEiszSQehwULLnOQwST5wmSKB4yTBv-i2HO89aPKVrwQc-tT3E9lIHv-MaE/s72-w275-c/running_insertion.png" height="72" 
width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-2846012463579415672</id><published>2025-10-04T17:36:00.005+02:00</published><updated>2025-10-05T00:52:23.234+02:00</updated><title type='text'>Bulk operations in Boost.Bloom</title><content type='html'>&lt;script async=&quot;&quot; src=&quot;https://cdn.jsdelivr.net/npm/mathjax@3/es5/tex-mml-chtml.js&quot; type=&quot;text/javascript&quot;&gt;
&lt;/script&gt;

&lt;p&gt;Starting in Boost 1.90, &lt;a href=&quot;https://github.com/boostorg/bloom&quot;&gt;Boost.Bloom&lt;/a&gt; will provide so-called &lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/bloom/doc/html/bloom.html#tutorial_bulk_operations&quot;&gt;bulk operations&lt;/a&gt;, which, in general, can speed up insertion and lookup by a sizable factor. The key idea behind this optimization is to separate in time the calculation of a position in the Bloom filter&#39;s array from its actual access. For instance, if this is the algorithm for regular insertion into a Bloom filter with &lt;i&gt;k&lt;/i&gt; = 1 (all the snippets in this article are simplified, illustrative versions of the actual source code):&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;void insert(const value_type&amp;amp; x)
{
  auto h = hash(x);
  auto p = position(h);
  set(p, 1);
}&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;then the bulk-mode variant for insertion of a range of values would look like:&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;void insert(const std::array&amp;lt;value_type, N&amp;gt;&amp;amp; x)
{
  std::size_t positions[N];
  
  // pipeline position calculation and memory access
  
  for(std::size_t i = 0; i &amp;lt; N; ++i) {
    auto h = hash(x[i]);
    positions[i] = position(h);
    prefetch(positions[i]);
  }
  
  for(std::size_t i = 0; i &amp;lt; N; ++i) {
    set(positions[i], 1);
  }
}&lt;/pre&gt;&lt;/div&gt;
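The prefetch-then-access pattern above can be tried in a self-contained way. The following sketch uses a toy filter that is just a flat bit array; all names here (`bit_array`, `position`, `set`, `check`, `insert_bulk`) are illustrative and not Boost.Bloom's actual interface, and the prefetch is done with the GCC/Clang `__builtin_prefetch` builtin:

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <vector>

// Toy "filter": a flat bit array with bulk insertion that pipelines
// position calculation (plus prefetching) and the actual writes.
struct bit_array
{
  std::vector<std::uint8_t> bytes;

  explicit bit_array(std::size_t nbits): bytes((nbits + 7) / 8, 0) {}

  std::size_t position(std::size_t h) const { return h % (bytes.size() * 8); }
  void set(std::size_t p) { bytes[p / 8] |= std::uint8_t(1u << (p % 8)); }
  bool check(std::size_t p) const { return (bytes[p / 8] >> (p % 8)) & 1u; }

  template<std::size_t N>
  void insert_bulk(const std::array<int, N>& xs)
  {
    std::size_t positions[N];

    // pass 1: hash, compute positions, and prefetch their cache lines
    for(std::size_t i = 0; i < N; ++i) {
      positions[i] = position(std::hash<int>{}(xs[i]));
      __builtin_prefetch(&bytes[positions[i] / 8]); // GCC/Clang builtin
    }

    // pass 2: the lines are (hopefully) already cached when we write
    for(std::size_t i = 0; i < N; ++i) set(positions[i]);
  }
};
```

For such a small array the prefetching makes no measurable difference; the gains show up when the bit array is large enough to fall out of the lower cache levels, as the benchmarks referenced below confirm.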

&lt;p&gt;By prefetching the address of &lt;span style=&quot;font-family: courier;&quot;&gt;positions[i]&lt;/span&gt; way in advance of its actual usage in &lt;span style=&quot;font-family: courier;&quot;&gt;set(positions[i], 1)&lt;/span&gt;, we make sure that the latter is accessing a cached value and avoid (or minimize) the CPU stalling that would result from reaching out to cold memory. We have studied bulk optimization in more detail in the context of &lt;span style=&quot;font-family: courier;&quot;&gt;&lt;a href=&quot;https://bannalia.blogspot.com/2023/10/bulk-visitation-in-boostconcurrentflatm.html&quot;&gt;boost::concurrent_flat_map&lt;/a&gt;&lt;/span&gt;. You can see actual measurements of the performance gains achieved in a &lt;a href=&quot;https://github.com/boostorg/boost_bloom_benchmarks/tree/bulk-operations&quot;&gt;dedicated repo&lt;/a&gt;; as expected, gains are higher for larger bit arrays not fitting in the lower levels of the CPU&#39;s cache hierarchy.&lt;/p&gt;&lt;p&gt;From an algorithmic point of view, the most interesting case is that of lookup operations for &lt;i&gt;k&lt;/i&gt; &amp;gt; 1, since the baseline non-bulk procedure is not easily amenable to pipelining:&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;bool may_contain(const value_type&amp;amp; x)
{ 
  auto h = hash(x);
  for(int i = 0; i &amp;lt; k; ++i) {
    auto p = position(h);
    if(check(p) == false) return false;
    if(i &amp;lt; k - 1) h = next_hash(h);
  }
  return true;
}&lt;/pre&gt;&lt;/div&gt;
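How much work does this early-exit loop actually do on average? A quick numeric sketch (the helper `checked_fraction` is ours, for illustration) models it under the assumption that each check on an element not in the filter passes with probability \(p_b = \text{FPR}^{1/k}\), so a negative query performs \(1 + p_b + \cdots + p_b^{k-1} = (1-p_b^k)/(1-p_b)\) checks on average, while an element present in the filter goes through all \(k\):

```cpp
#include <cmath>

// Estimate the fraction of the k positions actually checked per lookup,
// given the false positive rate (fpr) and the successful-lookup rate p.
double checked_fraction(int k, double fpr, double p)
{
  double pb  = std::pow(fpr, 1.0 / k);                  // P(single check passes)
  double neg = (1.0 - std::pow(pb, k)) / (1.0 - pb);    // avg checks, negative query
  double avg = p * k + (1.0 - p) * neg;                 // avg checks overall
  return avg / k;
}
```

With k = 10, FPR = 1% and p = 0.1 (the configuration analyzed next), this comes out at roughly a third of the positions.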

&lt;p&gt;This algorithm is branchful and can take anywhere from 1 to &lt;i&gt;k&lt;/i&gt; iterations, the latter being the case for elements present in the filter and false positives. For instance, this diagram shows the number of steps taken to look up &lt;i&gt;n&lt;/i&gt; = 64 elements on a filter with &lt;i&gt;k&lt;/i&gt; = 10 and FPR = 1%, where the successful lookup rate (proportion of looked up elements actually in the filter) is&amp;nbsp;&lt;i&gt;p&lt;/i&gt; = 0.1:&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghuL1548yAQ7nSPDG2IlKhHYTyf5HJIQyACJxM7YyZ8T82bQqPuokjwzYW12y-2hLruHs1d1UUGnJAJuqCs1R2RIxP5HE7nl2JXq9ddCoWRr-572btVtrh_Qzv4V6cd1PkpgJR2zJhb_v9bRtkKpYBxFr69wynBV4uAfhP2sXr4Rn2VLB7y61mcqvC2FQ/s846/fig1.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;207&quot; data-original-width=&quot;846&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghuL1548yAQ7nSPDG2IlKhHYTyf5HJIQyACJxM7YyZ8T82bQqPuokjwzYW12y-2hLruHs1d1UUGnJAJuqCs1R2RIxP5HE7nl2JXq9ddCoWRr-572btVtrh_Qzv4V6cd1PkpgJR2zJhb_v9bRtkKpYBxFr69wynBV4uAfhP2sXr4Rn2VLB7y61mcqvC2FQ/w600/fig1.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;As it can be seen, for non-successful lookups &lt;span style=&quot;font-family: courier;&quot;&gt;may_contain&lt;/span&gt;&amp;nbsp;typically stops at the first few positions: the average number of positions checked (grayed cells) is&amp;nbsp; \[n\left(pk +(1-p)\frac{1-p_b^{k}}{1-p_b}\right),\] where \(p_b=\sqrt[k]{\text{FPR}}\) is the probability that an arbitrary bit in the filter&#39;s array is set to 1. In the example used, this results in only 34% of the total &lt;i&gt;nk&lt;/i&gt; = 640 positions being checked.&lt;/p&gt;&lt;p&gt;Now, a naïve bulk-mode version could look as follows:&lt;/p&gt;


&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename F&amp;gt;
void may_contain(
  const std::array&amp;lt;value_type, N&amp;gt;&amp;amp; x,
  F f) // f is fed lookup results
{ 
  std::uint64_t hashes[N];
  std::size_t   positions[N];
  bool          results[N];
  
  // initial round of hash calculation and prefetching
  
  for(std::size_t i = 0; i &amp;lt; N; ++i) {
    hashes[i] = hash(x[i]);
    positions[i] = position(hashes[i]);
    results[i] = true;
    prefetch(positions[i]);
  }
  
  // main loop
  
  for(int j = 0; j &amp;lt; k; ++j) {
    for(std::size_t i = 0; i &amp;lt; N; ++i) {
      if(results[i]) { &lt;b&gt;// conditional branch X&lt;/b&gt;
        results[i] &amp;amp;= check(positions[i]);
        if(results[i] &amp;amp;&amp;amp; j &amp;lt; k - 1) {
          hashes[i] = next_hash(hashes[i]);
          positions[i] = position(hashes[i]);
          prefetch(positions[i]);
        }
      }
    }
  }
  
  // feed results
  
  for(std::size_t i = 0; i &amp;lt; N; ++i) {
    f(results[i]);
  }
}&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This simply stores partial results in an array and iterates row-first instead of column-first:&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0CVYAvTh4Tx3y_UKwoaUZ5wejmduEz5D1QuQ9PsZ56YtZRP0rXh53wVGYFmpzZdtp4qC8o_Vt6x8qVzs1cQ4NDkPRr07ubg1nXN4gFxPvQBiFWfgvR3RwqwJC1Rf8Darmv8WcHPRMcG38xbdgSSox_gWtHLbM4G03fcOWKjqPO69jbMtKeFtMxHleOqc/s811/fig2.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;121&quot; data-original-width=&quot;811&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj0CVYAvTh4Tx3y_UKwoaUZ5wejmduEz5D1QuQ9PsZ56YtZRP0rXh53wVGYFmpzZdtp4qC8o_Vt6x8qVzs1cQ4NDkPRr07ubg1nXN4gFxPvQBiFWfgvR3RwqwJC1Rf8Darmv8WcHPRMcG38xbdgSSox_gWtHLbM4G03fcOWKjqPO69jbMtKeFtMxHleOqc/w600/fig2.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;The problem with this approach is that, even though it calls &lt;span style=&quot;font-family: courier;&quot;&gt;check&lt;/span&gt; exactly the same number of times as the non-bulk algorithm, the conditional branch labeled &lt;b&gt;X&lt;/b&gt; is executed \(nk\) times, and this has a huge impact on the CPU&#39;s branch predictor. Conditional branches could in principle be eliminated altogether:&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;for(int j = 0; j &amp;lt; k; ++j) {
  for(std::size_t i = 0; i &amp;lt; N; ++i) {
    results[i] &amp;amp;= check(positions[i]);
    if(j &amp;lt; k - 1) { // this check is optimized away at compile time
      hashes[i] = next_hash(hashes[i]);
      positions[i] = position(hashes[i]);
      prefetch(positions[i]);
    }
  }
}&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;but this would result in \(nk\) calls to&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;check&lt;/span&gt;&amp;nbsp;for ~3 times more computational work than the non-bulk version.&lt;/p&gt;&lt;p&gt;The challenge then is to reduce the number of iterations on each row to only those positions that still need to be evaluated. This is the solution adopted by Boost.Bloom:&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename F&amp;gt;
void may_contain(
  const std::array&amp;lt;value_type, N&amp;gt;&amp;amp; x,
  F f) // f is fed lookup results
{ 
  std::uint64_t hashes[N];
  std::size_t   positions[N];
  std::uint64_t results = 0; // mask of N bits
  
  // initial round of hash calculation and prefetching
  
  for(std::size_t i = 0; i &amp;lt; N; ++i) {
    hashes[i] = hash(x[i]);
    positions[i] = position(hashes[i]);
    results |= 1ull &amp;lt;&amp;lt; i;
    prefetch(positions[i]);
  }
  
  // main loop
  
  for(int j = 0; j &amp;lt; k; ++j) {
    auto mask = results;
    if(!mask) break;
    do{
      auto i = std::countr_zero(mask);
      auto b = check(positions[i]);
      results &amp;amp;= ~(std::uint64_t(!b) &amp;lt;&amp;lt; i);
      if(j &amp;lt; k - 1) { // this check is optimized away at compile time
        hashes[i] = next_hash(hashes[i]);
        positions[i] = position(hashes[i]);
        prefetch(positions[i]);
      }
      mask &amp;amp;= mask - 1; // reset least significant 1
    } while(mask);
  }
  
  // feed results
  
  for(std::size_t i = 0; i &amp;lt; N; ++i) {
    f(results &amp;amp; 1);
    results &amp;gt;&amp;gt;= 1;
  }
}&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Instead of an array of partial results, we keep these as a bitmask, so that we can skip groups of terminated columns in constant time using &lt;span style=&quot;font-family: courier;&quot;&gt;&lt;a href=&quot;https://en.cppreference.com/w/cpp/numeric/countr_zero.html&quot;&gt;std::countr_zero&lt;/a&gt;&lt;/span&gt;. For instance, in the 7th row the main loop does 11 iterations instead of &lt;i&gt;n&lt;/i&gt; = 64.&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuYeKUROiRxgoysQz-8pftmOwhvyQxuUI9NL1AAl3TvuGkamdDQeWz_I9iqfnGRaN-G46z7Z9u4De9VMmJTuOk8NJPLtteQbRHistbQK8444qTtgG3EC800YaeDgtSSzjVybDaMzFySzqiMgBcHQMj11cjwhryNE3ywRJJVehhq9LJK7FPqpAcHFCWpbM/s803/fig3.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;118&quot; data-original-width=&quot;803&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjuYeKUROiRxgoysQz-8pftmOwhvyQxuUI9NL1AAl3TvuGkamdDQeWz_I9iqfnGRaN-G46z7Z9u4De9VMmJTuOk8NJPLtteQbRHistbQK8444qTtgG3EC800YaeDgtSSzjVybDaMzFySzqiMgBcHQMj11cjwhryNE3ywRJJVehhq9LJK7FPqpAcHFCWpbM/w600/fig3.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;In summary, the bulk version of &lt;span style=&quot;font-family: courier;&quot;&gt;may_contain&lt;/span&gt;&amp;nbsp;only does &lt;i&gt;n&lt;/i&gt; more conditional branches than the non-bulk version, plus \(n(1-p)\) superfluous memory fetches&amp;nbsp;&lt;span&gt;—the latter could be omitted at the expense of&amp;nbsp;&lt;/span&gt;\(n(1-p)\) additional conditional branches, but benchmarks showed that the version with extra memory fetches is actually faster. These are measured speedups of bulk vs. non-bulk lookup for a &lt;span style=&quot;font-family: courier;&quot;&gt;boost::bloom::filter&amp;lt;int, K&amp;gt;&lt;/span&gt; containing 10M elements under GCC, 64-bit mode:&lt;/p&gt;

&lt;table border=&quot;1&quot; style=&quot;border-collapse: collapse; margin-left: auto; margin-right: auto; padding: 0pt; text-align: center; width: 80%;&quot;&gt;
  &lt;tbody&gt;&lt;tr&gt;
    &lt;th&gt;array&lt;br /&gt;size&lt;/th&gt;
    &lt;th&gt;K&lt;/th&gt;
    &lt;th&gt;&lt;i&gt;p&lt;/i&gt; = 1&lt;/th&gt;
    &lt;th&gt;&lt;i&gt;p&lt;/i&gt; = 0&lt;/th&gt;
    &lt;th&gt;&lt;i&gt;p&lt;/i&gt; = 0.1&lt;/th&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;8M&lt;/td&gt;
    &lt;td&gt;6&lt;/td&gt;
    &lt;td&gt;0.78&lt;/td&gt;
    &lt;td&gt;2.11&lt;/td&gt;
    &lt;td&gt;1.43&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;12M&lt;/td&gt;
    &lt;td&gt;9&lt;/td&gt;
    &lt;td&gt;1.54&lt;/td&gt;
    &lt;td&gt;2.27&lt;/td&gt;
    &lt;td&gt;1.38&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;16M&lt;/td&gt;
    &lt;td&gt;11&lt;/td&gt;
    &lt;td&gt;2.08&lt;/td&gt;
    &lt;td&gt;2.45&lt;/td&gt;
    &lt;td&gt;1.46&lt;/td&gt;
  &lt;/tr&gt;
  &lt;tr&gt;
    &lt;td&gt;20M&lt;/td&gt;
    &lt;td&gt;14&lt;/td&gt;
    &lt;td&gt;2.24&lt;/td&gt;
    &lt;td&gt;2.57&lt;/td&gt;
    &lt;td&gt;1.43&lt;/td&gt;
  &lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;&lt;p&gt;(More results &lt;a href=&quot;https://github.com/boostorg/boost_bloom_benchmarks/tree/bulk-operations&quot;&gt;here&lt;/a&gt;.)&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;a name=&quot;conclusions&quot;&gt;Conclusions&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Boost.Bloom will introduce bulk insertion and lookup capabilities in Boost 1.90, resulting in speedups of up to 3x, though results vary greatly depending on the filter configuration and its size, and may even have less performance than the regular case in some situations. We have shown how bulk lookup is implemented for the case &lt;i&gt;k&lt;/i&gt; &amp;gt; 1, where the regular, non-bulk version is highly branched and so not readily amenable to pipelining. The key technique, based on iteration reduction with &lt;span style=&quot;font-family: courier;&quot;&gt;std::countr_zero&lt;/span&gt;, can be applied outside the context of Boost.Bloom to implement efficient pipelining of early-exit operations.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/2846012463579415672/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2025/10/bulk-operations-in-boostbloom.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/2846012463579415672'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/2846012463579415672'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2025/10/bulk-operations-in-boostbloom.html' title='Bulk operations in Boost.Bloom'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image 
rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghuL1548yAQ7nSPDG2IlKhHYTyf5HJIQyACJxM7YyZ8T82bQqPuokjwzYW12y-2hLruHs1d1UUGnJAJuqCs1R2RIxP5HE7nl2JXq9ddCoWRr-572btVtrh_Qzv4V6cd1PkpgJR2zJhb_v9bRtkKpYBxFr69wynBV4uAfhP2sXr4Rn2VLB7y61mcqvC2FQ/s72-w600-c/fig1.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-5914447325330122492</id><published>2025-07-06T20:24:00.003+02:00</published><updated>2025-07-06T21:01:56.239+02:00</updated><title type='text'>Maps on chains</title><content type='html'>&lt;p&gt;(From a conversation with Vassil Vassilev.) Suppose we want to have a C++ map where the keys are disjoint, integer intervals of the form [&lt;i&gt;a&lt;/i&gt;, &lt;i&gt;b&lt;/i&gt;]:&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;struct interval
{
  int min, max;
};

std::map&amp;lt;interval, std::string&amp;gt; m;

m[{0, 9}] = &quot;ABC&quot;;
m[{10, 19}] = &quot;DEF&quot;;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;This looks easy enough, we just have to write the proper comparison operator for intervals, right?&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;bool operator&amp;lt;(const interval&amp;amp; x, const interval&amp;amp; y)
{
  return x.max &amp;lt; y.min;
}&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;But what happens if we try to insert an interval which is not disjoint with those already in the map?&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;m[{5, 14}] = &quot;GHI&quot;; // intersects both {0, 9} and {10, 19}&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;The short answer is that this is undefined behavior, but let&#39;s try to understand why. C++ associative containers depend on the comparison function (typically, &lt;span style=&quot;font-family: courier;&quot;&gt;std::less&amp;lt;Key&amp;gt;&lt;/span&gt;) inducing a so-called &lt;i&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Weak_ordering#Strict_weak_orderings&quot;&gt;strict weak ordering&lt;/a&gt;&lt;/i&gt; on elements of &lt;span style=&quot;font-family: courier;&quot;&gt;Key&lt;/span&gt;. In layman&#39;s terms, a strict weak order &amp;lt; behaves as the &quot;less than&quot; relationship does for numbers, except that there may be incomparable elements &lt;i&gt;x&lt;/i&gt;, &lt;i&gt;y&lt;/i&gt; such that &lt;i&gt;x&lt;/i&gt; ≮ &lt;i&gt;y&lt;/i&gt; and &lt;i&gt;y&lt;/i&gt; ≮ &lt;i&gt;x&lt;/i&gt;; for numbers, this only happens if &lt;i&gt;x&lt;/i&gt; = &lt;i&gt;y&lt;/i&gt;, but in the case of a general SWO we allow for distinct, incomparable elements as long as they form &lt;a href=&quot;https://en.wikipedia.org/wiki/Equivalence_relation&quot;&gt;equivalence classes&lt;/a&gt;. A convenient way to rephrase this condition is to require that incomparable elements are totally equivalent in how they compare to the rest of the elements, that is, they&#39;re truly indistinguishable from the point of view of the SWO. 
Getting back to our interval scenario, we have three possible cases when comparing [&lt;i&gt;a&lt;/i&gt;, &lt;i&gt;b&lt;/i&gt;] and [&lt;i&gt;c&lt;/i&gt;, &lt;i&gt;d&lt;/i&gt;]:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;If &lt;i&gt;b&lt;/i&gt; &amp;lt; &lt;i&gt;c&lt;/i&gt;, the intervals don&#39;t overlap and&amp;nbsp;[&lt;i&gt;a&lt;/i&gt;, &lt;i&gt;b&lt;/i&gt;] &amp;lt; [&lt;i&gt;c&lt;/i&gt;, &lt;i&gt;d&lt;/i&gt;].&lt;/li&gt;&lt;li&gt;If &lt;i&gt;d&lt;/i&gt; &amp;lt; &lt;i&gt;a&lt;/i&gt;, the intervals don&#39;t overlap and&amp;nbsp;[&lt;i&gt;c&lt;/i&gt;, &lt;i&gt;d&lt;/i&gt;] &amp;lt; [&lt;i&gt;a&lt;/i&gt;, &lt;i&gt;b&lt;/i&gt;].&lt;/li&gt;&lt;li&gt;Otherwise, the intervals are incomparable. This can happen when [&lt;i&gt;a&lt;/i&gt;, &lt;i&gt;b&lt;/i&gt;] and [&lt;i&gt;c&lt;/i&gt;, &lt;i&gt;d&lt;/i&gt;] overlap partially or when they are exactly the same interval.&lt;/li&gt;&lt;/ul&gt;
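The third case is exactly where equivalence classes break down: incomparability under this relationship is not transitive. This can be checked directly with the naive comparison operator defined earlier (the helper `incomparable` is ours, for illustration):

```cpp
#include <cassert>

struct interval
{
  int min, max;
};

// the naive comparison operator from the beginning of the article
bool operator<(const interval& x, const interval& y)
{
  return x.max < y.min;
}

// two intervals are incomparable when neither precedes the other
bool incomparable(const interval& x, const interval& y)
{
  return !(x < y) && !(y < x);
}
```

For instance, [0, 5] is incomparable with [4, 10], and [4, 10] with [6, 12], yet [0, 5] &lt; [6, 12]: the incomparable pairs do not behave like equivalent elements.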

&lt;p&gt;What we have described is a well-known relationship called the &lt;a href=&quot;https://en.wikipedia.org/wiki/Interval_order&quot;&gt;interval order&lt;/a&gt;. The problem is that the interval order is &lt;i&gt;not&lt;/i&gt; a strict weak order. Let&#39;s depict a &lt;a href=&quot;https://en.wikipedia.org/wiki/Hasse_diagram&quot;&gt;Hasse diagram&lt;/a&gt; for the interval order on integer intervals [&lt;i&gt;a&lt;/i&gt;, &lt;i&gt;b&lt;/i&gt;] between 0 and 4:&lt;/p&gt;

&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzMEAXXxuwHfT0-NkbiBD4z0Z1pn_MeRHmL9r71K9fScdTpGVOu1OHNMyfYrrUtAJWTOsaK70d22qbbzmZViJO3M4dTWpNblNFJzPet6gLk9wtfAYZOn9Lu_i_em9kVqsWolBpbaS1fsCNOwk_TZ-MSethOtVgrsoMW8AUZuZqfDKpqxmtrUaHk0RX6Bg/s618/hasse.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;618&quot; data-original-width=&quot;482&quot; height=&quot;320&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzMEAXXxuwHfT0-NkbiBD4z0Z1pn_MeRHmL9r71K9fScdTpGVOu1OHNMyfYrrUtAJWTOsaK70d22qbbzmZViJO3M4dTWpNblNFJzPet6gLk9wtfAYZOn9Lu_i_em9kVqsWolBpbaS1fsCNOwk_TZ-MSethOtVgrsoMW8AUZuZqfDKpqxmtrUaHk0RX6Bg/s320/hasse.png&quot; width=&quot;250&quot; /&gt;&lt;/a&gt;&lt;/div&gt;

&lt;p&gt;A Hasse diagram works like this: given two elements &lt;i&gt;x&lt;/i&gt; and &lt;i&gt;y&lt;/i&gt;, &lt;i&gt;x&lt;/i&gt; &amp;lt; &lt;i&gt;y&lt;/i&gt; iff there is a path going upwards that connects &lt;i&gt;x&lt;/i&gt; to &lt;i&gt;y&lt;/i&gt;. For instance, the fact that [1, 1] &amp;lt; [3, 4] is confirmed by the path [1, 1] → [2, 2] → [3, 4]. But the diagram also serves to show why this relationship is not a strict weak order: for it to be so, incomparable elements (those not connected) should be indistinguishable in that they are connected upwards and downwards with the same elements, and this is clearly not the case (in fact, it is not the case for any pair of incomparable elements). In mathematical terms, our relationship is of a more general type called a &lt;a href=&quot;https://en.wikipedia.org/wiki/Partially_ordered_set#Strict_partial_orders&quot;&gt;strict partial order&lt;/a&gt;.&lt;/p&gt;&lt;p&gt;Going back to C++, associative containers assume that the elements inserted form a linear arrangement with respect to &amp;lt;: when we try to insert a new element &lt;i&gt;y&lt;/i&gt; that is incomparable with some previously inserted element &lt;i&gt;x&lt;/i&gt;, the properties of strict weak orders allow us to determine that &lt;i&gt;x&lt;/i&gt; and &lt;i&gt;y&lt;/i&gt; are equivalent, so nothing breaks (the insertion fails as a duplicate for a &lt;span style=&quot;font-family: courier;&quot;&gt;std::map&lt;/span&gt;, or &lt;i&gt;y&lt;/i&gt; is added next to &lt;i&gt;x&lt;/i&gt; for a &lt;span style=&quot;font-family: courier;&quot;&gt;std::multimap&lt;/span&gt;).&lt;/p&gt;&lt;p&gt;There&#39;s a way to accommodate our interval scenario with &lt;span style=&quot;font-family: courier;&quot;&gt;std::map&lt;/span&gt;, though. 
As long as the elements we are inserting belong to the same connecting path or &lt;i&gt;chain&lt;/i&gt;,&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;std::map&lt;/span&gt;&amp;nbsp;can&#39;t possibly &quot;know&quot; if our relationship is a strict weak order or not: it certainly looks like one for the limited subset of elements it has seen so far. Implementation-wise, we just have to make sure we&#39;re not comparing partially overlapping intervals:&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;struct interval_overlap: std::runtime_error
{
  interval_overlap(): std::runtime_error(&quot;interval overlap&quot;){}
};

bool operator&amp;lt;(const interval&amp;amp; x, const interval&amp;amp; y)
{
  if(x.min == y.min) {           // same left endpoint: must be the same interval
    if(x.max != y.max) throw interval_overlap();
    return false;
  }
  else if(x.min &amp;lt; y.min) {  // x starts first: x must end before y begins
    if(x.max &amp;gt;= y.min) throw interval_overlap();
    return true;
  }
  else {                         // y starts first: y must end before x begins
    if(x.min &amp;lt;= y.max) throw interval_overlap();
    return false;
  }
}

std::map&amp;lt;interval, std::string&amp;gt; m;

m[{0, 9}] = &quot;ABC&quot;;
m[{10, 19}] = &quot;DEF&quot;;
m[{5, 14}] = &quot;GHI&quot;; // throws interval_overlap&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;So, when we try to insert an element that would violate the strict weak ordering constraints (that is, it lies outside the chain connecting the intervals inserted so far), an exception is thrown and no undefined behavior is hit. A strict reading of the standard would not allow this workaround, as it is required that the comparison object for the map induce a strict weak ordering for all possible values of &lt;span style=&quot;font-family: courier;&quot;&gt;Key&lt;/span&gt;, not only those in the container (or that is my interpretation, at least): for all practical purposes, though, this works and will foreseeably continue to work for all future revisions of the standard.&lt;/p&gt;&lt;p&gt;&lt;b&gt;Bonus point.&lt;/b&gt;&amp;nbsp;Thanks to &lt;a href=&quot;https://en.cppreference.com/w/cpp/functional.html#Transparent_function_objects&quot;&gt;heterogeneous lookup&lt;/a&gt;, we can extend our use case to support lookup for &lt;i&gt;integers&lt;/i&gt; inside the intervals:&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;struct less_interval
{
  using is_transparent = void;

  bool operator()(const interval&amp;amp; x, const interval&amp;amp; y) const
  {
    // as operator&amp;lt; before
  }

  bool operator()(int x, const interval&amp;amp; y) const
  {
    return x &amp;lt; y.min;
  }
  
  bool operator()(const interval&amp;amp; x, int y) const
  {
    return x.max &amp;lt; y; 
  }    
};

std::map&amp;lt;interval, std::string, less_interval&amp;gt; m;
  
m[{0, 9}] = &quot;ABC&quot;;
m[{10, 19}] = &quot;DEF&quot;;

std::cout &amp;lt;&amp;lt; m.find(5)-&amp;gt;second &amp;lt;&amp;lt; &quot;\n&quot;; // prints &quot;ABC&quot;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Exercise for the reader: Can you formally prove that this works? (Hint: define a strict weak order on ℕ&amp;nbsp;∪ &lt;i&gt;I&lt;/i&gt;, where ℕ is the set of natural numbers and &lt;i&gt;I&lt;/i&gt; is a collection of disjoint integer intervals.)&lt;/p&gt;
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/5914447325330122492/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2025/07/maps-on-chains.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5914447325330122492'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5914447325330122492'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2025/07/maps-on-chains.html' title='Maps on chains'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzMEAXXxuwHfT0-NkbiBD4z0Z1pn_MeRHmL9r71K9fScdTpGVOu1OHNMyfYrrUtAJWTOsaK70d22qbbzmZViJO3M4dTWpNblNFJzPet6gLk9wtfAYZOn9Lu_i_em9kVqsWolBpbaS1fsCNOwk_TZ-MSethOtVgrsoMW8AUZuZqfDKpqxmtrUaHk0RX6Bg/s72-c/hasse.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-1527440803497983238</id><published>2024-05-23T18:45:00.003+02:00</published><updated>2025-03-17T09:27:08.749+01:00</updated><title type='text'>WG21, Boost, and the ways of standardization</title><content type='html'>&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#goals-of-standardization&quot;&gt;Goals of standardization&lt;/a&gt;&lt;/li&gt;&lt;li 
style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#standardizing-programming-languages&quot;&gt;Standardizing programming languages&lt;/a&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#wg21&quot;&gt;WG21&lt;/a&gt;&lt;/li&gt;&lt;ul&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#innovation-vs-adoption&quot;&gt;Innovation vs. adoption&lt;/a&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#pros-and-cons-of-standardization&quot;&gt;Pros and cons of standardization&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#an-assessment-model-for-library-standardization&quot;&gt;An assessment model for library standardization&lt;/a&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#boost-the-standard-and-beyond&quot;&gt;Boost, the standard and beyond&lt;/a&gt;&lt;/li&gt;&lt;ul&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#golden-era-1998-2011&quot;&gt;Golden era: 1998-2011&lt;/a&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#middle-age-issues-2012-2020&quot;&gt;Middle-age issues: 2012-2020&lt;/a&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#evolution-2021-2024-and-the-future&quot;&gt;Evolution: 2021-2024 and the future&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;#conclusions&quot;&gt;Conclusions&lt;/a&gt;&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;&lt;a name=&quot;goals-of-standardization&quot;&gt;Goals of standardization&lt;/a&gt;&lt;br /&gt;&lt;/b&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Standardization, in a form resembling our contemporary practices, began in the Industrial Revolution as a means to harmonize incipient mass production and their associated supply chains through the concept of 
&lt;i&gt;interchangeability of parts&lt;/i&gt;. Some early technical standards are the &lt;a href=&quot;https://en.wikipedia.org/wiki/Gribeauval_system&quot;&gt;Gribeauval system&lt;/a&gt; (1765, artillery pieces) and the &lt;a href=&quot;https://en.wikipedia.org/wiki/British_Standard_Whitworth&quot;&gt;British Standard Whitworth&lt;/a&gt; (1841, screw threads). &lt;a href=&quot;https://en.wikipedia.org/wiki/Scientific_management&quot;&gt;Taylorism&lt;/a&gt; expanded standardization efforts from machinery to assembly processes themselves with the goal of increasing productivity (and, it could be said, achieving interchangeability of workers). Standards for metric systems, such as that of &lt;a href=&quot;https://en.wikipedia.org/wiki/History_of_the_metric_system#Implementation_in_Revolutionary_France&quot;&gt;Revolutionary France&lt;/a&gt; (1791) were deemed &quot;scientific&quot; (as befitted the enlightenment spirit of the era) in that they were defined by exact, reproducible methods, but their main motivation was to &lt;a href=&quot;https://mjp.univ-perp.fr/france/1793mesures.htm&quot;&gt;facilitate local and international trade&lt;/a&gt; rather than support the advancement of science. We see a common theme here: standardization normalizes or leverages technology to favor industry and trade, that is, technology &lt;i&gt;precedes&lt;/i&gt; standards.&lt;/p&gt;&lt;p&gt;This approach is embraced by 20th century standards organizations (&lt;a href=&quot;https://www.din.de/en&quot;&gt;DIN&lt;/a&gt; 1917, &lt;a href=&quot;https://www.ansi.org/&quot;&gt;ANSI&lt;/a&gt; 1918, &lt;a href=&quot;https://www.iso.org/home.html&quot;&gt;ISO&lt;/a&gt; 1947) through the advent of electronics, telecommunications and IT, and up to our days. 
Technological advancement, or, more generally, &lt;i&gt;innovation&lt;/i&gt; (a concept &lt;a href=&quot;https://en.wikipedia.org/wiki/Innovation#History&quot;&gt;coined around 1940&lt;/a&gt; and ubiquitous today) is not seen as the focus of standardization, even though standards can promote innovation by &lt;i&gt;consolidating&lt;/i&gt; advancements and best practices upon which further cycles of innovation can be built &lt;span&gt;—and potentially be standardized in their turn&lt;/span&gt;. This interplay between standardization and innovation has been discussed extensively &lt;a href=&quot;https://www.iso.org/files/live/sites/isoorg/files/store/en/PUB100404.pdf&quot;&gt;within standards organizations&lt;/a&gt; and outside. The old term &quot;interchangeability of parts&quot; has been replaced today by the more abstract concepts of &lt;i&gt;compatibility&lt;/i&gt;, &lt;i&gt;interoperability&lt;/i&gt; and (within the realm of IT) &lt;i&gt;portability&lt;/i&gt;.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;&lt;a name=&quot;standardizing-programming-languages&quot;&gt;Standardizing programming languages&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Most programming languages are not officially standardized, but &lt;a href=&quot; https://en.wikipedia.org/wiki/Comparison_of_programming_languages&quot;&gt;some are&lt;/a&gt;. 
As of today, these are the ISO-standardized languages actively maintained by dedicated &lt;i&gt;working groups&lt;/i&gt; within the &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/&quot;&gt;ISO/IEC JTC1/SC22&lt;/a&gt; subcommittee for programming languages:&lt;br /&gt;&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;COBOL (WG4)&lt;/li&gt;&lt;li&gt;Fortran (WG5)&lt;/li&gt;&lt;li&gt;Ada (WG9)&lt;/li&gt;&lt;li&gt;C (WG14)&lt;/li&gt;&lt;li&gt;Prolog (WG17)&lt;/li&gt;&lt;li&gt;C++ (&lt;a href=&quot;https://www.open-std.org/JTC1/SC22/WG21/&quot;&gt;WG21&lt;/a&gt;)&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;What&#39;s the purpose of standardizing a programming language? SC22 has a sort of &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/docs/portability.html&quot;&gt;foundational paper&lt;/a&gt; which centers on the benefits of portability, understood as both &lt;i&gt;portability across systems/environments&lt;/i&gt; and &lt;i&gt;portability of people&lt;/i&gt; (a rather blunt allusion to old-school Taylorism). The paper does not mention the subject of &lt;i&gt;implementation certification&lt;/i&gt;, which can play a significant role for languages such as Ada that are used in heavily regulated sectors. 
More importantly for our discussion, neither does it mention what position SC22 holds with respect to innovation; regardless, we will see that innovation does indeed happen within SC22 workgroups, in what represents a radical departure from classical standardization practices.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;&lt;a name=&quot;wg21&quot;&gt;WG21&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;C++ was mostly a &lt;a href=&quot;https://www.stroustrup.com/hopl2.pdf&quot;&gt;one-man effort&lt;/a&gt; from its inception in the early 80s until the publication of &lt;a href=&quot;https://www.stroustrup.com/arm.html&quot;&gt;&lt;i&gt;The Annotated C++ Reference Manual&lt;/i&gt;&lt;/a&gt; (ARM, 1990), which served as the basis for the creation of an ANSI/ISO standardization committee that would eventually release its first C++ standard in 1998. Bjarne Stroustrup cited &lt;a href=&quot;https://www.stroustrup.com/01chinese.html&quot;&gt;avoidance of compiler vendor lock-in&lt;/a&gt; (a variant of portability) as a major reason for having the language standardized &lt;span&gt;—a concern that made much sense in a scene then dominated by company-owned languages such as Java&lt;/span&gt;.&lt;/p&gt;&lt;p&gt;Innovation was seen as WG21&#39;s business from its very beginning: some features of the core language, such as templates and exceptions, were labeled as experimental in the ARM, and the first version of the standard library, notably including Alexander Stepanov&#39;s &lt;a href=&quot;http://stepanovpapers.com/history%20of%20STL.pdf&quot;&gt;STL&lt;/a&gt;, was introduced by the committee in the 1990-1998 period with little or no field experience. 
After a minor update to C++98 in 2003, the innovation pace picked up again in subsequent revisions of the standard (2011, 2014, 2017, 2020, 2023), and the current innovation backlog does not seem to falter; if anything, we could say that the main blocker for innovation within the standard is lack of human resources in WG21 rather than lack of proposals.&lt;/p&gt;&lt;h4 style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;a name=&quot;innovation-vs-adoption&quot;&gt;Innovation vs. adoption&lt;/a&gt;&lt;br /&gt;&lt;/b&gt;&lt;/h4&gt;&lt;p&gt;Not all new features in the C++ standard have originated within WG21. We must distinguish here between the &lt;i&gt;core language&lt;/i&gt; and the &lt;i&gt;standard library&lt;/i&gt;:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;External innovation in the core language is generally hard as it requires writing or modifying a C++ compiler, a task outside the capabilities of many even though this has been made much more accessible with the emergence of open-source, extensible compiler frameworks such as &lt;a href=&quot;https://en.wikipedia.org/wiki/LLVM&quot;&gt;LLVM&lt;/a&gt;. As a result, most innovation activity here happens within WG21, with some notable exceptions like &lt;a href=&quot;https://www.circle-lang.org/&quot;&gt;Circle&lt;/a&gt; and &lt;a href=&quot;https://hsutter.github.io/cppfront/&quot;&gt;Cpp2&lt;/a&gt;. Others have chosen to depart from the C++ language completely (&lt;a href=&quot;https://github.com/carbon-language/carbon-lang/blob/trunk/README.md&quot;&gt;Carbon&lt;/a&gt;, &lt;a href=&quot;https://www.hylo-lang.org/&quot;&gt;Hylo&lt;/a&gt;), so their potential impact on C++ standardization is remote at best.&lt;/li&gt;&lt;li&gt;As for the standard library, the situation is more varied. 
These are some examples:&lt;br /&gt;&lt;/li&gt;&lt;ul&gt;&lt;li&gt;Straight into the standard: &amp;lt;locale&amp;gt;.&lt;br /&gt;&lt;/li&gt;&lt;li&gt;Straight into the standard, some prior art:&amp;nbsp;IOStreams, &lt;a href=&quot;http://stepanovpapers.com/history%20of%20STL.pdf&quot;&gt;STL&lt;/a&gt;, &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1456.html&quot;&gt;unordered associative containers&lt;/a&gt;. &lt;br /&gt;&lt;/li&gt;&lt;li&gt;Straight into the standard, extensive prior art: &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n3921.html&quot;&gt;std::string_view&lt;/a&gt;.&lt;br /&gt;&lt;/li&gt;&lt;li&gt;Explicitly written for standardization, one reference implementation: &lt;a href=&quot;https://github.com/kokkos/mdspan/&quot;&gt;mdspan&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;Explicitly written for standardization, various reference implementations: &lt;a href=&quot;https://github.com/CaseyCarter/cmcstl2&quot;&gt;ranges&lt;/a&gt;, &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p0323r11.html#use&quot;&gt;std::expected&lt;/a&gt;.&lt;br /&gt;&lt;/li&gt;&lt;li&gt;Moderate to high level of prior field experience: &lt;a href=&quot;https://www.boost.org/libs/filesystem&quot;&gt;Boost.Filesystem&lt;/a&gt;, &lt;a href=&quot;https://www.boost.org/libs/regex&quot;&gt;Boost.Regex&lt;/a&gt;, &lt;a href=&quot;https://www.boost.org/libs/smart_ptr&quot;&gt;Boost.SmartPtr&lt;/a&gt;, &lt;a href=&quot;https://www.boost.org/doc/html/thread.html&quot;&gt;Boost.Thread&lt;/a&gt;, &lt;a href=&quot;https://www.boost.org/libs/tuple&quot;&gt;Boost.Tuple&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;Not initially intended for standardization, high level of prior field experience: &lt;a href=&quot;https://fmt.dev/latest/index.html&quot;&gt;{fmt}&lt;/a&gt;.&lt;/li&gt;&lt;/ul&gt;&lt;/ul&gt;&lt;div style=&quot;margin-left: 40px; text-align: left;&quot;&gt;In general, the trend for the evolution of the standard library seems 
to be towards proposing new components straight into the standard with very little field experience.&lt;/div&gt;&lt;div&gt;&lt;div&gt;&lt;h4 style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;a name=&quot;pros-and-cons-of-standardization&quot;&gt;Pros and cons of standardization&lt;/a&gt;&lt;br /&gt;&lt;/b&gt;&lt;/h4&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;The history of C++ standardization has met with some resounding successes (STL, templates, concurrency, most vocabulary types) as well as failures (&lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3065.html&quot;&gt;exported templates&lt;/a&gt;, &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p2186r0.html&quot;&gt;GC support&lt;/a&gt;, &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0003r4.html&quot;&gt;exception specifications&lt;/a&gt;, &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4168&quot;&gt;std::auto_ptr&lt;/a&gt;)&amp;nbsp; and in-between scenarios (&lt;a href=&quot;https://www.google.com/search?client=firefox-b-d&amp;amp;q=std%3A%3Aregex+very+slow&quot;&gt;std::regex&lt;/a&gt;,&amp;nbsp; &lt;a href=&quot;https://www.youtube.com/watch?v=49ZYW4gHBIQ&quot;&gt;ranges&lt;/a&gt;).&lt;/p&gt;&lt;p&gt;Focusing on the standard library, we can identify benefits of standardization vs. having a separate, non-standard component:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;The level of exposure to C++ users increases dramatically. 
Some companies have bans on the usage of external libraries, and even if no bans are in place, consuming the standard library is much more convenient than having to manage external dependencies &lt;span&gt;—though this &lt;a href=&quot;https://conan.io/&quot;&gt;is&lt;/a&gt; &lt;a href=&quot;https://vcpkg.io/en/&quot;&gt;changing&lt;/a&gt;.&lt;br /&gt;&lt;/span&gt;&lt;/li&gt;&lt;li&gt;Standardization ensures a high level of (system) portability, potentially beyond the reach of external library authors without access to exotic environments.&lt;/li&gt;&lt;li&gt;For components with high interoperability potential (think vocabulary types), having them in the standard library guarantees that they become the tool of choice for API-level module integration.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;But there are drawbacks as well that must be taken into consideration:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;The evolution of a library halts or reduces significantly once it is standardized. One major factor for this is WG21&#39;s self-imposed restriction to preserve backwards compatibility, and in particular ABI compatibility. For example:&lt;br /&gt;&lt;/li&gt;&lt;ul&gt;&lt;li&gt;Defects on the API of std::function had to be fixed by adding a new &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2548r5.pdf&quot;&gt;std::copyable_function&lt;/a&gt; component.&lt;/li&gt;&lt;li&gt;Unordered associative containers were specified with the explicit assumption that they should be based on a technique known as &lt;i&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Hash_table#Separate_chaining&quot;&gt;separate chaining/closed addressing&lt;/a&gt;&lt;/i&gt;. 
The state of the art in this area has evolved spectacularly since 2003, and modern hash table implementations mostly use &lt;a href=&quot;https://en.wikipedia.org/wiki/Open_addressing&quot;&gt;open addressing&lt;/a&gt;, which is de facto forbidden by the standard API, and &lt;a href=&quot;https://martin.ankerl.com/2022/08/27/hashmap-bench-01/&quot;&gt;outperform std::unordered_(map|set)&lt;/a&gt; by a large factor.&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/ul&gt;&lt;div style=&quot;margin-left: 40px; text-align: left;&quot;&gt;Another factor contributing to library freeze may be the lack of motivation from the authors once they succeed in getting their proposals accepted, as the process involved is very demanding and can last for years.&lt;/div&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Some libraries cover specialized domains that standard library implementors cannot be expected to master. Some cases in point:&lt;/li&gt;&lt;ul&gt;&lt;li&gt;Current implementations of std::regex are notoriously slower than Boost.Regex, a situation aggravated by the need to keep ABI compatibility.&lt;/li&gt;&lt;li&gt;Correct and efficient implementations of &lt;a href=&quot;https://en.cppreference.com/w/cpp/numeric/special_functions&quot;&gt;mathematical special functions&lt;/a&gt; require ample expertise in the area of numerical computation. As a result, Microsoft standard library implements these as mere &lt;a href=&quot;https://github.com/microsoft/STL/blob/8dc4faadafb52e3e0a627e046b41258032d9bc6a/stl/src/special_math.cpp#L25-L38&quot;&gt;wrappers over Boost.Math&lt;/a&gt;, and libc++ seems to be &lt;a href=&quot;https://reviews.llvm.org/D142806&quot;&gt;following suit&lt;/a&gt;. 
This is technically valid, but raises the question of what purpose standardizing these functions served in the first place.&lt;/li&gt;&lt;/ul&gt;&lt;li&gt;Additions to the upcoming standard (as of this writing, C++26) don&#39;t benefit users immediately because the community typically &lt;a href=&quot;https://blog.jetbrains.com/clion/2024/01/the-cpp-ecosystem-in-2023/&quot;&gt;lags behind&lt;/a&gt; by two or three revisions of the language.&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;So, standardizing a library component is not always the best course of action for the benefit of current and future users of that component. Back in 2001, Stroustrup &lt;a href=&quot;https://www.stroustrup.com/01chinese.html&quot;&gt;remarked&lt;/a&gt; that &lt;i&gt;&quot;[p]eople sometime forget that a library doesn&#39;t have to be part of the
standard to be useful&quot;&lt;/i&gt;, but, to this day, WG21 does not seem to have formal guidelines as to what constitutes a worthy addition to the standard, or how to engage with the community in a world of ever-expanding and more accessible external libraries. We would like to contribute some modest ideas in that direction.&lt;/p&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;&lt;a name=&quot;an-assessment-model-for-library-standardization&quot;&gt;An assessment model for library standardization&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&lt;br /&gt;&lt;/h2&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;Going back to the basic principles of standards, the main benefits to be derived from standardizing a technology (in our case, a C++ library) are connected to higher compatibility and interoperability as a means to increase overall productivity (assumedly correlated to the level of usage of the library within the community). Leaving aside for the moment the size of the potential target audience, we identify two characteristics of a given library that make it suitable for standardization:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Its &lt;i&gt;portability requirements&lt;/i&gt;, defined as the level of coupling that an optimal implementation has with the underlying OS, CPU architecture, etc. The higher these requirements the more sense it makes to include the library as a mandatory part of the standard.&lt;/li&gt;&lt;li&gt;Its &lt;i&gt;interoperability potential&lt;/i&gt;, that is, how much the library is expected to be used as part of public APIs interconnecting different program modules vs. as a private implementation detail. 
A library with high interoperability potential is maximally useful when included in the common software &quot;stack&quot; shared by the community.&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;So, the &lt;i&gt;baseline standardization value&lt;/i&gt; of a library, denoted &lt;i&gt;V&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt;, can be modeled as:&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;i&gt;V&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt; = &lt;i&gt;aP&lt;/i&gt; + &lt;i&gt;bI&lt;/i&gt;,&lt;br /&gt;&lt;/p&gt;&lt;p&gt;where &lt;i&gt;P&lt;/i&gt; denotes the library&#39;s portability requirements and &lt;i&gt;I&lt;/i&gt; its interoperability potential. The figure shows the baseline standardization value of some library domains within the &lt;i&gt;P&lt;/i&gt;-&lt;i&gt;I&lt;/i&gt; plane. The color red indicates that this value is low, green that it is high.&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgS4iwAu_46IZicixe5x432U5kobKLk5APlmJUqCg2TRWfZ0TGWhSSCVd82wrIQFSB6YPGQ5lZU_0y87F5Zt6j0tRx-bC3MsmopAn1qV_KyOi-IZXZWUoHL4TwuMlvKDGN-MySHxkGmnb-6MEjWstC75CbQiZOIMFann7KrQt3QB7UQ-gS_GwDMPhBClGk/s16000/portability_interoperability.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;532&quot; data-original-width=&quot;692&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgS4iwAu_46IZicixe5x432U5kobKLk5APlmJUqCg2TRWfZ0TGWhSSCVd82wrIQFSB6YPGQ5lZU_0y87F5Zt6j0tRx-bC3MsmopAn1qV_KyOi-IZXZWUoHL4TwuMlvKDGN-MySHxkGmnb-6MEjWstC75CbQiZOIMFann7KrQt3QB7UQ-gS_GwDMPhBClGk/s1600/portability_interoperability.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;A low baseline standardization value for a library does not mean that the library is not useful, but rather that there is little gain to be obtained from standardizing it as opposed to making it 
available externally. The locations of the exemplified domains in the &lt;i&gt;P&lt;/i&gt;-&lt;i&gt;I&lt;/i&gt; plane reflect the author&#39;s estimation and may differ from that of the reader.&lt;br /&gt;&lt;/p&gt;&lt;p&gt;Now, we have seen that the adoption of a library requires some prior field experience, defined as&lt;/p&gt;&lt;p style=&quot;margin-left: 40px; text-align: center;&quot;&gt;&lt;i&gt;E&lt;/i&gt; = &lt;i&gt;T&lt;/i&gt;·&lt;i&gt;U&lt;/i&gt;,&lt;br /&gt;&lt;/p&gt;&lt;p&gt;where &lt;i&gt;T&lt;/i&gt; is the age of the library and &lt;i&gt;U&lt;/i&gt; is the average number of users.&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li style=&quot;text-align: left;&quot;&gt;When &lt;i&gt;E&lt;/i&gt; is very low, the library is not mature enough and standardizing it can result in a defective design that will be much harder to fix within the standard going forward; this effectively decreases the net value of standardization.&lt;br /&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;On the contrary, if &lt;i&gt;E&lt;/i&gt; is very high, which is correlated with the library having already reached its maximum target audience, the benefits of standardization are vanishingly small: &lt;span&gt;most people are already using the library and including it in the official standard has little value added &lt;/span&gt;&lt;span&gt;—the library has become a &lt;i&gt;de facto&lt;/i&gt; standard.&lt;br /&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;text-align: left;&quot;&gt;So, we may expect to attain an optimum standardization opportunity &lt;i&gt;S&lt;/i&gt; between the extremes &lt;i&gt;E&lt;/i&gt; = 0 and &lt;i&gt;E&lt;/i&gt;&lt;sub&gt;max&lt;/sub&gt;.&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a 
href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1kKwO0zTGArNNgpEA-RT8kSdFdCwFPl8MuS7dMRG0uVymdT2lIjKX_E7BiW4JILeu1bpAY5uW0J_8n7f2GRjaxl08fa6Thx5akzpGk8fKjB7ThaYwVj30oqH3yE6dFf8Fpg0V9ZnAdaFEJWMfRRwYiwOFUon8TuCmvxS3Bf6hkuylJOOxnOiV1ksuaLg/s1600/standardization_opportunity.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;420&quot; data-original-width=&quot;757&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi1kKwO0zTGArNNgpEA-RT8kSdFdCwFPl8MuS7dMRG0uVymdT2lIjKX_E7BiW4JILeu1bpAY5uW0J_8n7f2GRjaxl08fa6Thx5akzpGk8fKjB7ThaYwVj30oqH3yE6dFf8Fpg0V9ZnAdaFEJWMfRRwYiwOFUon8TuCmvxS3Bf6hkuylJOOxnOiV1ksuaLg/s1600/standardization_opportunity.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p style=&quot;text-align: left;&quot;&gt;Finally, the &lt;i&gt;net standardization value&lt;/i&gt; of a library is defined as&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;i&gt;V&lt;/i&gt; = &lt;i&gt;V&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt;·&lt;i&gt;S&lt;/i&gt;·&lt;i&gt;U&lt;/i&gt;&lt;sub&gt;max&lt;/sub&gt;,&lt;br /&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;where&amp;nbsp;&lt;i&gt;U&lt;/i&gt;&lt;sub&gt;max&lt;/sub&gt; is the library&#39;s maximum target audience. Being a conceptual model, the purpose of this framework is not so much to establish a precise evaluation formula as to help stakeholders raise the right questions when considering a library for standardization:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li style=&quot;text-align: left;&quot;&gt;How high are the library&#39;s portability requirements?&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;How high its interoperability potential?&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Is it too immature yet? 
Does it have actual field experience?&lt;br /&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Or, on the contrary, has it already reached its maximum target audience?&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;How big is this audience?&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;&lt;a name=&quot;boost-the-standard-and-beyond&quot;&gt;Boost, the standard and beyond&lt;/a&gt;&lt;/b&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;https://www.boost.org/&quot;&gt;Boost&lt;/a&gt; was &lt;a href=&quot;https://www.boost.org/users/proposal.pdf&quot;&gt;launched in 1998&lt;/a&gt; upon the idea that &lt;i&gt;&quot;[a] world-wide web site containing a repository of free C++ class libraries would be of great benefit to the C++ community&quot;&lt;/i&gt;. Serving as a venue for future standardization was mentioned only as a secondary goal, yet very soon many saw the project as a launching pad towards the standard library, a perception that has changed since. We analyze the different stages of this 25+-year-old project in connection with its contributions to the standard and to the community. &lt;br /&gt;&lt;/p&gt;&lt;h4 style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;a name=&quot;golden-era-1998-2011&quot;&gt;Golden era: 1998-2011&lt;/a&gt;&lt;/b&gt;&lt;/h4&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;In its first 14 years of existence, the project grew from 0 to &lt;a href=&quot;https://www.boost.org/doc/libs/1_48_0/&quot;&gt;113 libraries&lt;/a&gt;, for a total &lt;a href=&quot;https://sourceforge.net/projects/boost/files/boost/1.48.0/boost_1_48_0.7z/download&quot;&gt;uncompressed size&lt;/a&gt; of 324 MB. 
Out of these 113 libraries, 12 would later be included in C++11, typically with modifications (Array, Bind, Chrono, EnableIf, Function, Random, Ref, Regex, SmartPtr, Thread, Tuple, TypeTraits); it may be noted that, even at this initial stage, most Boost libraries were not standardized or meant for standardization. From the point of view of the C++ standard library, however, Boost was the first contributor by far. We may venture some reasons for this success:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li style=&quot;text-align: left;&quot;&gt;There was much low-hanging fruit in the form of small vocabulary types and obvious utilities.&lt;br /&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Maybe due to a combination of scarce competition and sheer luck, Boost positioned itself very quickly as the go-to place for contributing and consuming high-quality C++ libraries. This ensured a great deal of field experience with the project.&lt;br /&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Many of the authors of the most relevant libraries were also prominent figures within the C++ community and WG21.&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;h4 style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;a name=&quot;middle-age-issues-2012-2020&quot;&gt;Middle-age issues: 2012-2020&lt;/a&gt;&lt;/b&gt;&lt;/h4&gt;&lt;p style=&quot;text-align: left;&quot;&gt;By 2020, Boost had reached &lt;a href=&quot;https://www.boost.org/doc/libs/1_75_0/&quot;&gt;164 libraries&lt;/a&gt; totaling 717 MB in &lt;a href=&quot;https://boostorg.jfrog.io/artifactory/main/release/1.75.0/source/&quot;&gt;uncompressed size&lt;/a&gt; (so, the size of the average library, including source, tests and documentation, grew by 1.5 with respect to 2011). 
Five Boost libraries were standardized between C++14 and C++20 (Any, Filesystem, Math/Special Functions, Optional, Variant): all of these, however, were already in existence before 2012, so the rate of successful new contributions from Boost to the standard effectively decreased to zero in this period. There were some additional unsuccessful proposals (Mp11).&lt;br /&gt;&lt;/p&gt;&lt;/div&gt;&lt;p style=&quot;text-align: left;&quot;&gt;The transition of Boost from the initial ramp-up to a more mature stage met with several scale problems that negatively impacted the public perception of the project (and, to an extent that we haven&#39;t been able to determine, its level of usage). Of particular interest is a &lt;a href=&quot;https://www.reddit.com/r/cpp/comments/gfowpq/why_you_dont_use_boost/&quot;&gt;public discussion&lt;/a&gt; that took place in 2020 on Reddit and touched on several issues more or less recognized within the community of Boost authors:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li style=&quot;text-align: left;&quot;&gt;The default/advertised way to consume Boost as a monolithic download introduces a bulky, hard-to-manage dependency on projects.&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;&lt;a href=&quot;https://www.bfgroup.xyz/b2/&quot;&gt;B2&lt;/a&gt;, Boost&#39;s native build technology, is unfamiliar to users more accustomed to widespread tools such as &lt;a href=&quot;https://cmake.org/&quot;&gt;CMake&lt;/a&gt;.&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Individual Boost libraries are perceived as bloated in terms of size, internal dependencies and compile times. 
Alternative competing libraries are self-contained, easier to install and smaller, as they rely on newer versions of the C++ standard.&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Many useful components are already provided by the standard library.&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;There are great differences between libraries in terms of quality; some libraries are all but abandoned.&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Documentation is not good enough, in particular when compared to &lt;a href=&quot;https://en.cppreference.com/w/&quot;&gt;cppreference.com&lt;/a&gt;, which is regarded as the gold standard in this area.&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;text-align: left;&quot;&gt;A deeper analysis reveals some root causes for this state of affairs:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Overall, the Boost project is very conservative and strives not to break users&#39; code on each version upgrade (even though, unlike the standard, backwards API/ABI compatibility is not guaranteed). In particular, many Boost authors are reluctant to increase the minimum C++ standard version required for their libraries. Also, there is no mechanism in place to retire libraries from the project.&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Supporting older versions of the C++ standard locks some libraries into suboptimal internal dependencies, the most infamous being &lt;a href=&quot;https://www.boost.org/libs/mpl&quot;&gt;Boost.MPL&lt;/a&gt;, which many identify (with or without reason) as responsible for long compile times and cryptic error messages.&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Boost&#39;s distribution and build mechanisms were invented in an era when package managers and build systems were not prevalent. 
This works well for smaller footprints but presents scaling problems that were not foreseen at the beginning of the project.&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Ultimately, Boost is a federation of libraries with different authors and sensibilities. This fact accounts for the varying levels of documentation quality, user support, maintenance, etc.&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;text-align: left;&quot;&gt;Some of these characteristics are not negative per se, and have in fact resulted in an extremely durable and available service to the C++ community that some may mistakenly take for granted. Supporting &quot;legacy C++&quot; users is, by definition, neglected by WG21, and maintaining libraries that were already standardized is of great value to those who don&#39;t live on the edge (and, in the case of the std::regex fiasco, those who do). Confronted with the choice of serving the community today vs. tomorrow (via standardization proposals), the Boost project took, perhaps without planning it, the first option. This is not to say that all is good with the Boost project, as many of the problems found in 2012-2020 are strictly operational.&lt;br /&gt;&lt;/p&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;h4 style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;a name=&quot;evolution-2021-2024-and-the-future&quot;&gt;Evolution: 2021-2024 and the future&lt;/a&gt;&lt;br /&gt;&lt;/b&gt;&lt;/h4&gt;&lt;p style=&quot;text-align: left;&quot;&gt;Boost 1.85 (April 2024) contains &lt;a href=&quot;https://www.boost.org/doc/libs/1_85_0/&quot;&gt;176 libraries&lt;/a&gt; (7% increase with respect to 2020) and has a &lt;a href=&quot;https://boostorg.jfrog.io/artifactory/main/release/1.85.0/source/&quot;&gt;size&lt;/a&gt; of 731 MB (2% increase). 
Only one Boost component has partially contributed to the C++23 standard library (&lt;a href=&quot;https://www.boost.org/doc/html/container/non_standard_containers.html#container.non_standard_containers.flat_xxx&quot;&gt;boost::container::flat_map&lt;/a&gt;), though there have been some unsuccessful proposals (the most notable being Boost.Asio).&lt;/p&gt;&lt;p style=&quot;text-align: left;&quot;&gt;In response to the operational problems described above, some authors have embarked on a number of improvement and modernization tasks:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Beginning in Boost 1.82 (Apr 2023), some core libraries announced the upcoming &lt;a href=&quot;https://www.boost.org/users/history/version_1_82_0.html#version_1_82_0.notice_of_dropping_c_03_support &quot;&gt;abandonment of C++03 support&lt;/a&gt; as part of a &lt;a href=&quot;https://pdimov.github.io/articles/phasing_out_cxx03.html&quot;&gt;plan&lt;/a&gt; to reduce code base sizes, maintenance costs, and internal dependencies on &quot;polyfill&quot; components. This initiative has a cascading effect on dependent libraries that is still ongoing.&lt;br /&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Alongside the drop of C++03 support, many libraries have been updated to reduce their number of internal dependencies (which were, in some cases, even cyclic). 
The figure shows the cumulative histograms of the number of dependencies for Boost libraries in versions 1.66 (2017), 1.75 (2020) and 1.85 (2024): &lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie02lzbCQ4y-f1OhZKem4XAqL0Zo4H86qWR8EBQANJQotzIi0rzaALOaYIHOA0MzICdcD-Jv5_G_t4mxTgLMynltGlNA5hcUneWCoiSXU_-xtbbLfuCW0OdsaMPoQI-VMqL8KTGIt95yJQicNJTTECHjLneZVAZtnSk7ooFB0imI0N2_JziszL-KRqEw8/s1600/dependency_histograms.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;479&quot; data-original-width=&quot;725&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEie02lzbCQ4y-f1OhZKem4XAqL0Zo4H86qWR8EBQANJQotzIi0rzaALOaYIHOA0MzICdcD-Jv5_G_t4mxTgLMynltGlNA5hcUneWCoiSXU_-xtbbLfuCW0OdsaMPoQI-VMqL8KTGIt95yJQicNJTTECHjLneZVAZtnSk7ooFB0imI0N2_JziszL-KRqEw8/s1600/dependency_histograms.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;/div&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Official &lt;a href=&quot;https://github.com/boostorg/cmake&quot;&gt;CMake support&lt;/a&gt; for the entire Boost project was announced in Oct 2023. This support also allows for downloading and building of individual libraries (and their dependencies).&lt;br /&gt;&lt;/li&gt;&lt;li&gt;On the same front of &lt;i&gt;modular consumption&lt;/i&gt;, there is work in progress to &lt;a href=&quot;https://github.com/grafikrobot/boost-b2-modular&quot;&gt;modularize B2-based library builds&lt;/a&gt;, which will enable package managers such as &lt;a href=&quot;https://conan.io/&quot;&gt;Conan&lt;/a&gt; to offer Boost libraries individually. 
&lt;a href=&quot;https://vcpkg.io/en/&quot;&gt;vcpkg&lt;/a&gt; &lt;a href=&quot;https://learn.microsoft.com/en-us/vcpkg/consume/boost-versions&quot;&gt;already offers this option&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;Starting in July 2023, boost.org includes a search widget indexing the documentation of all libraries. The ongoing &lt;a href=&quot;https://github.com/cppalliance/mrdocs&quot;&gt;MrDocs&lt;/a&gt; project seeks to provide a Doxygen-like tool for automatic C++ documentation generation that could eventually support Boost authors &lt;span&gt;(library docs are currently written more or less manually in a plethora of formats such as raw HTML, &lt;a href=&quot;https://www.boost.org/doc/html/quickbook.html&quot;&gt;Quickbook&lt;/a&gt;, &lt;a href=&quot;https://asciidoc.org/&quot;&gt;Asciidoc&lt;/a&gt;, etc.). There is a new Boost website &lt;a href=&quot;https://preview.boost.org&quot;&gt;in the works&lt;/a&gt;, scheduled for launch in mid-2024.&lt;br /&gt;&lt;/span&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;text-align: left;&quot;&gt;Where is Boost headed? It must be stressed again that the project is a federation of authors without a central governing authority in strategic matters, so the following should be taken as an interpretation of current trends:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li style=&quot;text-align: left;&quot;&gt;Most of the &lt;a href=&quot;https://www.boost.org/community/review_schedule.html&quot;&gt;recently added libraries&lt;/a&gt; cover relatively specific application-level domains (networking/database protocols, parsing) or else provide utilities likely to be superseded by future C++ standards, as is the case with reflection (Describe, PFR). One library is a direct backport of a C++17 standard library component (Charconv). Boost.JSON provides yet another solution in an area already rich with alternatives external to the standard library. 
Boost.LEAF proposes an approach to error handling radically different from that of the latest standard (std::expected). Boost.Scope implements and augments a WG21 proposal currently on hold (&lt;a href=&quot;https://cplusplus.github.io/fundamentals-ts/v3.html#scopeguard&quot;&gt;&amp;lt;experimental/scope&amp;gt;&lt;/a&gt;).&lt;br /&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;In some cases, standard compatibility has been abandoned to provide better performance or richer functionality (Container, Unordered, Variant2). &lt;br /&gt;&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;No new library supports C++03, which drastically reduces their number of internal dependencies (except in the case of networking libraries depending on Boost.Asio).&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;On the other hand, most new libraries are still conservative in that they only require C++11/14, with some exceptions (Parser and Redis require C++17, Cobalt requires C++20).&lt;/li&gt;&lt;li style=&quot;text-align: left;&quot;&gt;There are some experimental initiatives like the proposal to serve &lt;a href=&quot;https://anarthal.github.io/cppblog/modules2&quot;&gt;Boost libraries as C++ modules&lt;/a&gt;, which has been met with much interest and support from the Visual Studio team. An important appeal of this idea is that it will allow compiler vendors and the committee to obtain field experience from a large, non-trivial codebase.&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p style=&quot;text-align: left;&quot;&gt;The focus of Boost thus seems to have shifted from standards-bound innovation to higher-level and domain-specific libraries directly available to users of C++11/14 and later. 
Increasing stress is being put on maintenance, reduced internal dependencies and modular availability, which further cements the thesis that Boost authors are more concerned about serving the C++ community from Boost itself than about eventually migrating to the standard. There is still a flow of ideas from Boost to WG21, but it does not represent the bulk of the project&#39;s activity.&lt;br /&gt;&lt;/p&gt;&lt;/div&gt;&lt;p style=&quot;text-align: left;&quot;&gt;&lt;/p&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;a name=&quot;conclusions&quot;&gt;Conclusions&lt;/a&gt;&lt;/span&gt;&lt;/h2&gt;&lt;p&gt;Traditionally, the role of standardization has been to consolidate previous innovations that have reached maturity so as to maximize their potential for industry vendors and users. In the very specific case of programming languages, and WG21/LEWG in particular, the standards committee has taken on the role of innovator and is pushing the industry rather than adopting external advancements or coexisting with them. This presents some problems related to lack of field experience, limitations to internal evolution imposed by backwards compatibility and an associated workload that may exceed the capacity of the committee. Thanks to open developer platforms (GitHub, GitLab), widespread build systems (CMake) and package managers (Conan, vcpkg), the world of C++ libraries is richer and more available than ever. WG21 could reconsider its role as part of an ecosystem that thrives outside and alongside its own activity. We have proposed a conceptual evaluation model for standardization of C++ libraries that may help in the conversations around these issues. 
Boost has shifted its focus from being a primary venue for standardization to serving the C++ community (including users of previous versions of the language) through increasingly modular, high level and domain-specific libraries. Hopefully, the availability and reach of the Boost project will help gain much needed field experience that could eventually lead to further collaborations with and contributions to WG21 in a non-preordained way.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/1527440803497983238/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2024/05/wg21-boost-and-ways-of-standardization.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/1527440803497983238'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/1527440803497983238'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2024/05/wg21-boost-and-ways-of-standardization.html' title='WG21, Boost, and the ways of standardization'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" 
url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgS4iwAu_46IZicixe5x432U5kobKLk5APlmJUqCg2TRWfZ0TGWhSSCVd82wrIQFSB6YPGQ5lZU_0y87F5Zt6j0tRx-bC3MsmopAn1qV_KyOi-IZXZWUoHL4TwuMlvKDGN-MySHxkGmnb-6MEjWstC75CbQiZOIMFann7KrQt3QB7UQ-gS_GwDMPhBClGk/s72-c/portability_interoperability.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-5419054245994007782</id><published>2024-04-04T13:16:00.003+02:00</published><updated>2024-04-05T10:25:29.686+02:00</updated><title type='text'>A case in API ergonomics for ordered containers</title><content type='html'>&lt;p&gt;Suppose we have a &lt;span style=&quot;font-family: courier;&quot;&gt;std::set&amp;lt;int&amp;gt;&lt;/span&gt; and would like to retrieve the elements between values &lt;span style=&quot;font-family: courier;&quot;&gt;a&lt;/span&gt; and &lt;span style=&quot;font-family: courier;&quot;&gt;b&lt;/span&gt;, both inclusive. This task is served by operations &lt;span style=&quot;font-family: courier;&quot;&gt;std::set::lower_bound&lt;/span&gt; and &lt;span style=&quot;font-family: courier;&quot;&gt;std::set::upper_bound&lt;/span&gt;:&lt;br /&gt;&lt;/p&gt;

&lt;div&gt;&lt;pre class=&quot;prettyprint&quot;&gt;std::set&amp;lt;int&amp;gt; x=...;&lt;br /&gt;&lt;br /&gt;// elements in [a,b]&lt;br /&gt;auto first = x.lower_bound(a);&lt;br /&gt;auto last  = x.upper_bound(b);&lt;br /&gt; &lt;br /&gt;while(first != last) std::cout&amp;lt;&amp;lt; *first++ &amp;lt;&amp;lt;&quot; &quot;;&lt;br /&gt;&lt;/pre&gt;&lt;/div&gt;

&lt;p&gt;Why do we use&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;lower_bound&lt;/span&gt; for the first iterator and &lt;span style=&quot;font-family: courier;&quot;&gt;upper_bound&lt;/span&gt; for the second? The well-known STL convention is that a range of elements is determined by two iterators &lt;span style=&quot;font-family: courier;&quot;&gt;first&lt;/span&gt; and &lt;span style=&quot;font-family: courier;&quot;&gt;last&lt;/span&gt;, where&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;first&lt;/span&gt; points to the first element of the range and &lt;span style=&quot;font-family: courier;&quot;&gt;last&lt;/span&gt; points to &lt;i&gt;the position right after the last element&lt;/i&gt;. This is done so that empty ranges can be handled without special provisions (&lt;span style=&quot;font-family: courier;&quot;&gt;first == last&lt;/span&gt;).&lt;br /&gt;&lt;/p&gt;&lt;p&gt;Now, with this convention in mind and considering that&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;lower_bound(a)&lt;/span&gt; returns an iterator to the first element &lt;i&gt;not less than&lt;/i&gt;&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;a&lt;/span&gt;,&lt;/li&gt;&lt;li&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;upper_bound(b)&lt;/span&gt; returns an iterator to the first element &lt;i&gt;greater than&lt;/i&gt; &lt;span style=&quot;font-family: courier;&quot;&gt;b&lt;/span&gt;,&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;we can convince ourselves that the code above is indeed correct. The situations where one or both of the interval endpoints are not inclusive can also be handled:&lt;/p&gt;&lt;pre class=&quot;prettyprint&quot;&gt;// elements in [a,b)
auto first = x.lower_bound(a);
auto last  = x.lower_bound(b);

// elements in (a,b]
auto first = x.upper_bound(a);
auto last  = x.upper_bound(b);

// elements in (a,b)
auto first = x.upper_bound(a);
auto last  = x.lower_bound(b);&lt;br /&gt;&lt;/pre&gt;&lt;p&gt;but getting them right requires some thinking.&lt;/p&gt;&lt;p&gt;&lt;a href=&quot;https://www.boost.org/libs/multi_index&quot; target=&quot;_blank&quot;&gt;Boost.MultiIndex&lt;/a&gt; introduces the operation &lt;a href=&quot;https://www.boost.org/libs/multi_index/doc/reference/ord_indices.html#range_operations&quot; target=&quot;_blank&quot;&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;range&lt;/span&gt;&lt;/a&gt; to handle this type of queries:&lt;/p&gt;&lt;pre class=&quot;prettyprint&quot;&gt;&lt;span class=&quot;keyword&quot;&gt;template&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;keyword&quot;&gt;typename&lt;/span&gt; &lt;span class=&quot;identifier&quot;&gt;LowerBounder&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;keyword&quot;&gt;typename&lt;/span&gt; &lt;span class=&quot;identifier&quot;&gt;UpperBounder&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;&amp;gt;&lt;/span&gt;
&lt;span class=&quot;identifier&quot;&gt;std&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;::&lt;/span&gt;&lt;span class=&quot;identifier&quot;&gt;pair&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;identifier&quot;&gt;iterator&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;identifier&quot;&gt;iterator&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;&amp;gt;&lt;/span&gt;&lt;br /&gt;&lt;span class=&quot;identifier&quot;&gt;range&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;identifier&quot;&gt;LowerBounder&lt;/span&gt; &lt;span class=&quot;identifier&quot;&gt;lower&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;,&lt;/span&gt;&lt;span class=&quot;identifier&quot;&gt; UpperBounder&lt;/span&gt; &lt;span class=&quot;identifier&quot;&gt;upper&lt;/span&gt;&lt;span class=&quot;special&quot;&gt;);&lt;/span&gt;&lt;/pre&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;span style=&quot;font-family: courier;&quot;&gt;lower&lt;/span&gt; and &lt;span style=&quot;font-family: courier;&quot;&gt;upper&lt;/span&gt; are user-provided predicates that determine whether an element is &lt;i&gt;not to the left&lt;/i&gt; and &lt;i&gt;not to the right&lt;/i&gt; of the considered interval, respectively. 
The formal specification of &lt;span style=&quot;font-family: courier;&quot;&gt;LowerBounder&lt;/span&gt; and &lt;span style=&quot;font-family: courier;&quot;&gt;UpperBounder&lt;/span&gt; is quite impenetrable, but using this facility, in particular in combination with &lt;a href=&quot;https://www.boost.org/libs/lambda2&quot; target=&quot;_blank&quot;&gt;Boost.Lambda2&lt;/a&gt;, is actually straightforward:&lt;br /&gt;&lt;/p&gt;&lt;pre class=&quot;prettyprint&quot;&gt;// equivalent to std::set&amp;lt;int&amp;gt;&lt;br /&gt;boost::multi_index_container&amp;lt;int&amp;gt; x=...;&lt;br /&gt;&lt;br /&gt;using namespace boost::lambda2;&lt;br /&gt;&lt;br /&gt;// [a,b]
auto [first, last] = x.range(_1 &amp;gt;= a, _1 &amp;lt;= b);
&lt;br /&gt;// [a,b)
auto [first, last] = x.range(_1 &amp;gt;= a, _1 &amp;lt; b);

// (a,b]
auto [first, last] = x.range(_1 &amp;gt; a,  _1 &amp;lt;= b);

// (a,b)
auto [first, last] = x.range(_1 &amp;gt; a,  _1 &amp;lt; b);&lt;/pre&gt;&lt;p&gt;The resulting code is much easier to read and to get right in the first place, and is also more efficient than two separate calls to &lt;span style=&quot;font-family: courier;&quot;&gt;[lower|upper]_bound&lt;/span&gt; &amp;nbsp; (because the two internal rb-tree top-to-bottom traversals can be partially joined in the implementation of &lt;span style=&quot;font-family: courier;&quot;&gt;range&lt;/span&gt;). Just as importantly,&amp;nbsp;&lt;span style=&quot;font-family: courier;&quot;&gt;range&lt;/span&gt; handles situations such as this:&lt;br /&gt;&lt;/p&gt;&lt;pre class=&quot;prettyprint&quot;&gt;int a = 5;&lt;br /&gt;int b = 2; // note a &amp;gt; b&lt;br /&gt;&lt;br /&gt;// elements in [a,b]&lt;br /&gt;auto first = x.lower_bound(a);&lt;br /&gt;auto last  = x.upper_bound(b);&lt;br /&gt; &lt;br /&gt;// undefined behavior&lt;br /&gt;while(first != last) std::cout&amp;lt;&amp;lt; *first++ &amp;lt;&amp;lt;&quot; &quot;;&lt;/pre&gt;&lt;p&gt;When &lt;span style=&quot;font-family: courier;&quot;&gt;a &amp;gt; b&lt;/span&gt;, &lt;span style=&quot;font-family: courier;&quot;&gt;first&lt;/span&gt; may be strictly to the right of &lt;span style=&quot;font-family: courier;&quot;&gt;last&lt;/span&gt;, and consequently the &lt;span style=&quot;font-family: courier;&quot;&gt;while&lt;/span&gt; loop will crash or never terminate. &lt;span style=&quot;font-family: courier;&quot;&gt;range&lt;/span&gt;, on the other hand, handles the situation gracefully and returns an empty range.&lt;br /&gt;&lt;/p&gt;&lt;p&gt;We have seen an example of how API design can help reduce programming errors and increase efficiency by providing higher-level facilities that &lt;i&gt;model&lt;/i&gt; and &lt;i&gt;encapsulate&lt;/i&gt; scenarios otherwise served by a combination of lower-level operations. 
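For a plain std::set, which lacks range, the graceful handling of reversed endpoints can be approximated with a small helper. A minimal sketch (closed_range is a hypothetical name, not a standard or Boost facility):

```cpp
#include <set>
#include <utility>

// Hypothetical helper (not a standard or Boost facility): emulates the
// guarded behavior of range() on top of std::set. When a > b it returns
// an empty range instead of a possibly crossed iterator pair, so that
// iterating over [first, last) is always well defined.
template<typename T>
std::pair<typename std::set<T>::const_iterator,
          typename std::set<T>::const_iterator>
closed_range(const std::set<T>& x, const T& a, const T& b)
{
    if(b < a) return {x.end(), x.end()};         // empty range, no UB
    return {x.lower_bound(a), x.upper_bound(b)}; // elements in [a,b]
}
```

Note that, unlike range, this still performs two separate top-to-bottom tree traversals, so it recovers the safety but not the efficiency benefit.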
It may be interesting to have &lt;span style=&quot;font-family: courier;&quot;&gt;range&lt;/span&gt;-like operations introduced for standard associative containers.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/5419054245994007782/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2024/04/a-case-in-api-ergonomics-for-ordered.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5419054245994007782'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5419054245994007782'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2024/04/a-case-in-api-ergonomics-for-ordered.html' title='A case in API ergonomics for ordered containers'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-792492574681886300</id><published>2023-10-20T18:30:00.020+02:00</published><updated>2023-10-22T19:44:29.653+02:00</updated><title type='text'>Bulk visitation in boost::concurrent_flat_map</title><content type='html'>&lt;p&gt;&lt;/p&gt;&lt;h1 dir=&quot;auto&quot; id=&quot;user-content-bulk-visitation-in-boostconcurrent_flat_map&quot; tabindex=&quot;-1&quot;&gt;&lt;/h1&gt;&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;a href=&quot;#introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a 
href=&quot;#prior-art&quot;&gt;Prior art&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#bulk-visitation-design&quot;&gt;Bulk visitation design&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#performance-analysis&quot;&gt;Performance analysis&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#conclusions-and-next-steps&quot;&gt;Conclusions and next steps&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;h2 dir=&quot;auto&quot; id=&quot;user-content-introduction&quot; tabindex=&quot;-1&quot;&gt;&lt;a name=&quot;introduction&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Introduction&lt;/span&gt;&lt;/b&gt;&lt;/a&gt;&lt;a class=&quot;heading-link&quot; href=&quot;https://github.com/joaquintides/bulk_visit_article/tree/main#introduction&quot;&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/unordered/doc/html/unordered.html#concurrent_flat_map&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt;&lt;/a&gt;
and its &lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/unordered/doc/html/unordered.html#concurrent_flat_set&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;boost::concurrent_flat_set&lt;/code&gt;&lt;/a&gt;
counterpart are Boost.Unordered&#39;s associative containers for
high-performance concurrent scenarios. These containers dispense with iterators in favor of a
&lt;i&gt;visitation&lt;/i&gt;-based interface:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;boost::concurrent_flat_map&amp;lt;&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt;, &lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt;&amp;gt; m;
...
&lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; find the element with key k and increment its associated value&lt;/span&gt;
m.visit(k, [](&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt;&amp;amp; x) {
  ++x.&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt;;
});&lt;/pre&gt;&lt;div class=&quot;zeroclipboard-container position-absolute right-0 top-0&quot;&gt;
    &lt;/div&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;This design choice was made because visitation is not affected by some inherent
problems afflicting iterators in multithreaded environments.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;Starting in Boost 1.84,
code like the following:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;std::array&amp;lt;&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt;, N&amp;gt; keys;
...
&lt;span class=&quot;pl-k&quot;&gt;for&lt;/span&gt;(&lt;span class=&quot;pl-k&quot;&gt;const&lt;/span&gt; &lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt;&amp;amp; key: keys) {
  m.&lt;span class=&quot;pl-c1&quot;&gt;visit&lt;/span&gt;(key, [](&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt;&amp;amp; x) { ++x.&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt;; });
}&lt;/pre&gt;&lt;div class=&quot;zeroclipboard-container position-absolute right-0 top-0&quot;&gt;
    &lt;/div&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;can be written more succinctly via the so-called &lt;i&gt;bulk visitation&lt;/i&gt; API:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;m.visit(keys.begin(), keys.end(), [](&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt;&amp;amp; x) { ++x.&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt;; });&lt;/pre&gt;&lt;div class=&quot;zeroclipboard-container position-absolute right-0 top-0&quot;&gt;
    &lt;/div&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;As it happens, bulk visitation is not provided merely for syntactic convenience:
this operation is internally optimized so that it performs significantly faster
than the original for-loop. We discuss here the key ideas behind bulk visitation
internal design and analyze its performance.&lt;/p&gt;
&lt;h2 dir=&quot;auto&quot; id=&quot;user-content-prior-art&quot; tabindex=&quot;-1&quot;&gt;&lt;a name=&quot;prior-art&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Prior art&lt;/span&gt;&lt;/b&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p dir=&quot;auto&quot;&gt;In their paper
&lt;a href=&quot;https://dl.acm.org/doi/pdf/10.1145/3552326.3587457&quot; rel=&quot;nofollow&quot;&gt;&quot;DRAMHiT: A Hash Table Architected for the Speed of DRAM&quot;&lt;/a&gt;,
Narayanan et al. explore some optimization techniques from the domain of
distributed systems as translated to concurrent hash tables running on modern
multi-core architectures with hierarchical caches. In particular, they note
that cache misses can be avoided by batching requests to the hash table, prefetching
the memory positions required by those requests and then completing the operations
asynchronously when enough time has passed for the data to be effectively
retrieved. Our bulk visitation implementation draws inspiration from this
technique, although in our case visitation is fully synchronous and in-order,
and it is the responsibility of the user to batch keys before calling
the bulk overload of &lt;code&gt;boost::concurrent_flat_map::visit&lt;/code&gt;.&lt;/p&gt;
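To make the batching-plus-prefetching idea concrete, here is a deliberately simplified toy (this is not Boost.Unordered's actual code; toy_table and all its members are invented for illustration, and the prefetch uses the GCC/Clang __builtin_prefetch intrinsic):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Toy sketch of the batched-lookup technique: phase 1 computes each key's
// bucket and prefetches it, phase 2 revisits the buckets once the cache
// lines are likely to be resident.
struct toy_table {
    std::vector<std::pair<std::uint64_t, int>> slots; // key 0 marks an empty slot
    explicit toy_table(std::size_t n): slots(n) {}

    std::size_t index_of(std::uint64_t k) const { return k % slots.size(); }

    void insert(std::uint64_t k, int v) { // linear probing, k must be nonzero
        std::size_t i = index_of(k);
        while(slots[i].first != 0) i = (i + 1) % slots.size();
        slots[i] = {k, v};
    }

    template<typename It, typename F>
    void bulk_visit(It first, It last, F f) {
        std::vector<std::size_t> pos;
        for(It it = first; it != last; ++it) {      // phase 1: prefetch
            std::size_t i = index_of(*it);
            __builtin_prefetch(&slots[i]);          // GCC/Clang intrinsic
            pos.push_back(i);
        }
        std::size_t j = 0;
        for(It it = first; it != last; ++it, ++j) { // phase 2: complete lookups
            for(std::size_t i = pos[j]; slots[i].first != 0;
                i = (i + 1) % slots.size()) {
                if(slots[i].first == *it) { f(slots[i]); break; }
            }
        }
    }
};
```

The two-phase split is the whole point: by the time phase 2 dereferences a bucket, the prefetch issued in phase 1 has had time to complete, turning a series of dependent cache misses into overlapped memory accesses.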
&lt;h2 dir=&quot;auto&quot; id=&quot;user-content-bulk-visitation-design&quot; tabindex=&quot;-1&quot;&gt;&lt;a name=&quot;bulk-visitation-design&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Bulk visitation design&lt;/span&gt;&lt;/b&gt;&lt;/a&gt;&lt;/h2&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijST3J7HxWSPrQwhRoXQBiNXkaZ1Q4bzMNVcgVfevMOGG5iNde5LnNo7dbk8KIc7kPg9FHWMJ0DdYP6seHktxoNevDhkuVTq9I74-cStGjZ3AeueYRf1o4MHcOJPaTwUiqz34nGlnOXnfgMt3XQ_vhK8NfU3R2sKHvTtdXEWzyn3nQiLa8LdD7C2-fRJA/s16000/data_structure.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;351&quot; data-original-width=&quot;935&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijST3J7HxWSPrQwhRoXQBiNXkaZ1Q4bzMNVcgVfevMOGG5iNde5LnNo7dbk8KIc7kPg9FHWMJ0DdYP6seHktxoNevDhkuVTq9I74-cStGjZ3AeueYRf1o4MHcOJPaTwUiqz34nGlnOXnfgMt3XQ_vhK8NfU3R2sKHvTtdXEWzyn3nQiLa8LdD7C2-fRJA/w600/data_structure.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;As discussed in a previous &lt;a href=&quot;https://bannalia.blogspot.com/2023/07/inside-boostconcurrentflatmap.html&quot; rel=&quot;nofollow&quot;&gt;article&lt;/a&gt;,
&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; uses an open-addressing data structure comprising:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;A bucket array split into 2&lt;sup&gt;&lt;i&gt;n&lt;/i&gt;&lt;/sup&gt; groups of &lt;i&gt;N&lt;/i&gt; = 15 slots.&lt;/li&gt;&lt;li&gt;A metadata array associating a 16-byte metadata word with each slot group, used for SIMD-based reduced-hash matching.&lt;/li&gt;&lt;li&gt;An access array with a spinlock (and some additional information) for locked access to each group.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;The happy path for successful visitation looks like this:&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgKWUR150_-IJ8AM1oo_ov5enCDojj0ZCXFL_DLhVC8ZezFngAi4it1nEn9tspznwTl5JoU2MtEzEG1PL-7hlUaoJzwo7UjPwi7dlvhiLN6v9e25xXpjTG58zMp58U9T746qZoFW9Mmi5HlYDvoYA58leZ7-yM9jgH72XgbnWyT6KMbaoSLHuy7wKf9gE/s16000/visit.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;422&quot; data-original-width=&quot;626&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgKWUR150_-IJ8AM1oo_ov5enCDojj0ZCXFL_DLhVC8ZezFngAi4it1nEn9tspznwTl5JoU2MtEzEG1PL-7hlUaoJzwo7UjPwi7dlvhiLN6v9e25xXpjTG58zMp58U9T746qZoFW9Mmi5HlYDvoYA58leZ7-yM9jgH72XgbnWyT6KMbaoSLHuy7wKf9gE/w600/visit.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;ol dir=&quot;auto&quot;&gt;&lt;li&gt;The hash value for the looked-up key and its mapped group position are calculated.&lt;/li&gt;&lt;li&gt;The metadata for the group is retrieved and matched against the hash value.&lt;/li&gt;&lt;li&gt;If the match is positive (which is the case for the &lt;i&gt;happy&lt;/i&gt; path), the group is locked for access
and the element indicated by the matching mask is retrieved and compared with the key. Again, in the
happy path this comparison is positive (the element is found); in the unhappy path, more elements
(within this group or beyond) need to be checked.&lt;/li&gt;&lt;/ol&gt;
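&lt;p dir=&quot;auto&quot;&gt;The three steps can be sketched in greatly simplified, single-threaded form (no SIMD, no locking, no overflow to other groups; the names and layout below are our own illustration rather than the actual Boost.Unordered internals):&lt;/p&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <optional>
#include <vector>

// Greatly simplified sketch of the happy lookup path: (1) hash the key and
// map it to a group, (2) match the key's reduced hash against the group's
// metadata bytes, (3) compare candidate elements. The real container
// performs step 2 with SIMD over a 16-byte metadata word and takes the
// group lock before step 3.
struct SimpleTable
{
  static constexpr std::size_t N = 15;   // slots per group

  struct Group
  {
    std::uint8_t meta[N] = {};           // 0 = empty, else reduced hash
    int          keys[N] = {};
    int          values[N] = {};
  };

  std::vector<Group> groups;

  explicit SimpleTable(std::size_t num_groups): groups(num_groups) {}

  static std::uint8_t reduced_hash(std::size_t h)
  {
    return static_cast<std::uint8_t>(h % 255 + 1); // never 0 (0 marks empty)
  }

  void insert(int k, int v)
  {
    std::size_t h = std::hash<int>{}(k);
    Group& g = groups[h % groups.size()]; // step 1: mapped group position
    for (std::size_t i = 0; i < N; ++i) {
      if (g.meta[i] == 0) {
        g.meta[i] = reduced_hash(h); g.keys[i] = k; g.values[i] = v;
        return;
      }
    }
    // overflow to other groups omitted for brevity
  }

  std::optional<int> find(int k) const
  {
    std::size_t h = std::hash<int>{}(k);  // step 1
    const Group& g = groups[h % groups.size()];
    std::uint8_t rh = reduced_hash(h);
    for (std::size_t i = 0; i < N; ++i) {
      if (g.meta[i] == rh && g.keys[i] == k) // steps 2 and 3
        return g.values[i];
    }
    return std::nullopt;                  // unsuccessful: no full-key match
  }
};
```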
&lt;p dir=&quot;auto&quot;&gt;(Note that happy &lt;i&gt;unsuccessful&lt;/i&gt; visitation simply terminates at step 2, so we focus our
analysis on successful visitation.) As the diagram shows, the CPU has to
wait for memory retrieval between steps 1 and 2 and between steps 2 and 3 (in the latter case,
retrievals of mutex and element are parallelized through manual prefetching). A key insight
is that, under normal circumstances, these memory accesses will almost always be cache misses:
successive visitation operations, unless they target the very same key, won&#39;t have any cache
locality. In bulk visitation, the stages of the algorithm are &lt;i&gt;pipelined&lt;/i&gt; as follows
(the diagram shows the case of three operations in the bulk batch):&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeD4U6u3q2NHFiBm4EHKHwnhpygNIZT8efRg665Qa8PsbAGfM7-nWpZTQP38lhWx7Kr1p5LassPsYeuzakkMtPxdvbsQQs0NNIC3cAaVwlXkk3VDHd5GUMTxsA9OXzToHLAw3hJA4I8Me7dXYXUvG5lAyFzvfN6f_1C-ZAhh-3LlciomyCLtYXM93VEAk/s16000/bulk_visit.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;536&quot; data-original-width=&quot;626&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjeD4U6u3q2NHFiBm4EHKHwnhpygNIZT8efRg665Qa8PsbAGfM7-nWpZTQP38lhWx7Kr1p5LassPsYeuzakkMtPxdvbsQQs0NNIC3cAaVwlXkk3VDHd5GUMTxsA9OXzToHLAw3hJA4I8Me7dXYXUvG5lAyFzvfN6f_1C-ZAhh-3LlciomyCLtYXM93VEAk/w600/bulk_visit.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;The data required at step &lt;i&gt;N&lt;/i&gt;+1 is prefetched at the end of step &lt;i&gt;N&lt;/i&gt;. Now,
if a sufficiently large number of operations are pipelined, we can effectively eliminate
cache-miss stalls: every memory address will already be cached by the time it is used.&lt;/p&gt;
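&lt;p dir=&quot;auto&quot;&gt;The pipelining idea can be sketched over a trivial direct-mapped table (this is our own illustration, not the library&#39;s code; &lt;code&gt;__builtin_prefetch&lt;/code&gt; is a GCC/Clang builtin and semantically a no-op):&lt;/p&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Two-stage pipeline sketch: stage 1 computes each key's slot and issues a
// prefetch for it; stage 2 completes the lookups, by which time the slots
// are (hopefully) already in cache. The real implementation prefetches
// metadata words, mutexes and element slots at the end of each internal
// pipeline stage.
struct Slot { int key; int value; };

inline void prefetch(const void* p)
{
#if defined(__GNUC__) || defined(__clang__)
  __builtin_prefetch(p);
#else
  (void)p;
#endif
}

// Looks up each key in `slots` (direct-mapped by hash) and, when found,
// applies f to the slot; returns the number of keys found.
template<typename F>
std::size_t pipelined_lookup(std::vector<Slot>& slots,
                             const std::vector<int>& keys, F f)
{
  std::vector<std::size_t> pos(keys.size());

  // stage 1: compute positions and issue prefetches
  for (std::size_t i = 0; i < keys.size(); ++i) {
    pos[i] = std::hash<int>{}(keys[i]) % slots.size();
    prefetch(&slots[pos[i]]);
  }

  // stage 2: the actual accesses; with enough operations in flight,
  // the data has arrived by now and no cache-miss stall occurs
  std::size_t found = 0;
  for (std::size_t i = 0; i < keys.size(); ++i) {
    Slot& s = slots[pos[i]];
    if (s.key == keys[i]) { f(s); ++found; }
  }
  return found;
}
```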
&lt;p dir=&quot;auto&quot;&gt;The operation &lt;code&gt;visit(first, last, f)&lt;/code&gt; internally splits [&lt;code&gt;first&lt;/code&gt;, &lt;code&gt;last&lt;/code&gt;) into chunks
of
&lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/unordered/doc/html/unordered.html#concurrent_flat_map_constants&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;bulk_visit_size&lt;/code&gt;&lt;/a&gt;
elements that are then processed as described above.
This chunk size has to be sufficiently large to give the memory system time to
have the data actually cached at the point of usage. On the upper end, the chunk size is bounded
by the number of outstanding memory requests that the CPU can handle at a time: on
Intel architectures, this is determined by the size of the
&lt;a href=&quot;https://cdrdv2-public.intel.com/671488/248966-046A-software-optimization-manual.pdf&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;line fill buffer&lt;/i&gt;&lt;/a&gt;,
typically 10-12 entries. We have empirically confirmed that bulk visitation performance peaks at around
&lt;code&gt;bulk_visit_size&lt;/code&gt; = 16 and stabilizes beyond that.&lt;/p&gt;
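&lt;p dir=&quot;auto&quot;&gt;The chunking driver might look roughly like this (a hypothetical sketch; &lt;code&gt;process_chunk&lt;/code&gt; stands in for the internal pipelined stages described above):&lt;/p&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <iterator>
#include <vector>

// Sketch of the chunking driver: the input range is consumed in chunks of
// at most bulk_visit_size elements, each chunk being run through the
// pipelined lookup machinery. bulk_visit_size = 16 matches the documented
// constant; process_chunk is a stand-in for the internal pipelined stage.
constexpr std::size_t bulk_visit_size = 16;

template<typename FwdIt, typename ChunkFn>
void for_each_chunk(FwdIt first, FwdIt last, ChunkFn process_chunk)
{
  while (first != last) {
    FwdIt chunk_end = first;
    std::size_t n = 0;
    while (chunk_end != last && n < bulk_visit_size) { ++chunk_end; ++n; }
    process_chunk(first, chunk_end); // pipelined processing of one chunk
    first = chunk_end;
  }
}
```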
&lt;h2 dir=&quot;auto&quot; id=&quot;user-content-performance-analysis&quot; tabindex=&quot;-1&quot;&gt;&lt;a name=&quot;performance-analysis&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Performance analysis&lt;/span&gt;&lt;/b&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p dir=&quot;auto&quot;&gt;For our study of bulk visitation performance, we have used a computer with
a &lt;a href=&quot;https://www.7-cpu.com/cpu/Skylake.html&quot; rel=&quot;nofollow&quot;&gt;Skylake&lt;/a&gt;-based Intel Core i5-8265U CPU:&lt;/p&gt;
&lt;table border=&quot;1&quot; style=&quot;border-collapse: collapse; height: 201px; margin-left: auto; margin-right: auto; padding: 0pt; text-align: center; width: 80%;&quot;&gt;
&lt;thead&gt; 
&lt;tr&gt;
&lt;th align=&quot;left&quot;&gt;&lt;br /&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;Size/core&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;Latency [ns]&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot; style=&quot;padding-left: 5px;&quot;&gt;L1 data cache&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;32 KB&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;3.13&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot; style=&quot;padding-left: 5px;&quot;&gt;L2 cache&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;256 KB&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;6.88&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot; style=&quot;padding-left: 5px;&quot;&gt;L3 cache&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;6 MB&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;25.00&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td align=&quot;left&quot; style=&quot;padding-left: 5px;&quot;&gt;DDR4 RAM&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;br /&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;77.25&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p dir=&quot;auto&quot;&gt;We measure the throughput in Mops/sec of single-threaded lookup (50/50 successful/unsuccessful)
for both regular and bulk visitation on a &lt;code&gt;boost::concurrent_flat_map&amp;lt;int, int&amp;gt;&lt;/code&gt; with sizes
&lt;i&gt;N&lt;/i&gt; = 3k, 25k, 600k, and 10M: for the first three values, the container fits entirely into
L1, L2 and L3, respectively. The &lt;a href=&quot;https://github.com/joaquintides/bulk_visit_performance/blob/main/bulk_visit_performance.cpp&quot;&gt;test program&lt;/a&gt;
has been compiled with &lt;code&gt;clang-cl&lt;/code&gt; for Visual Studio 2022 in release mode.&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSAQ1wKDfyS7KD0CqWW3vTTlPdjv2h9zjQI_IMSi85AgUR2frnm40h6zjhKlmwTodv8HFEII5T9G7KeJRE5DmPOS0VFSAGnIhuTQx-baUZjFD9qNZBDbTm588aZ3ydBsYIzrsP3tMb0idW6DIz363VJuU8WlCP0uYB9fQsJJUWe_BibHvV58GszWJo0cQ/s16000/performance.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;446&quot; data-original-width=&quot;714&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSAQ1wKDfyS7KD0CqWW3vTTlPdjv2h9zjQI_IMSi85AgUR2frnm40h6zjhKlmwTodv8HFEII5T9G7KeJRE5DmPOS0VFSAGnIhuTQx-baUZjFD9qNZBDbTm588aZ3ydBsYIzrsP3tMb0idW6DIz363VJuU8WlCP0uYB9fQsJJUWe_BibHvV58GszWJo0cQ/w600/performance.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;As expected, the relative performance of bulk vs. regular visitation grows as
data is fetched from a slower cache (or RAM in the latter case). The theoretical
throughput achievable by bulk visitation has been estimated from regular visitation
by subtracting memory retrieval times as calculated with the following model:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;If the container fits in L&lt;i&gt;n&lt;/i&gt; (L4 = RAM),  L&lt;i&gt;n&lt;/i&gt;−1 is entirely occupied
by metadata and access objects (and some of this data spills over to L&lt;i&gt;n&lt;/i&gt;).&lt;/li&gt;&lt;li&gt;Mutex and element retrieval times (which only apply to successful visitation)
are dominated by element retrieval.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;Actual and theoretical figures match quite well, which suggests that the
algorithmic overhead imposed by bulk visitation is negligible.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;We have also run
&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_concurrent_flat_map&quot;&gt;benchmarks&lt;/a&gt;
under conditions closer to real life for &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt;,
with and without bulk visitation, and other concurrent containers, using different
compilers and architectures. As an example, these are the results for a workload of 50M
insert/lookup mixed operations distributed across several concurrent threads for different
data distributions with Clang 12 on an ARM64 computer:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg66THzURjiy1yHmoRUdbZVLDi57GaDCeA1755_Sx5jdbZVwL-0SyN6rr1dYu6V5icrJPpD9Uw0j5wMd21eGvZRxKjHf4Yu4xFMQSBUaKxVMR_lCz0lQM1HxLIwx5o1VFzya4nbd8twV0C8ucbNhnFdTZi-H86y8dn-wDDPhDoOA2-m941ufa1pPAVPjXY/s698/Parallel%20workload.xlsx.5M,%200.01.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg66THzURjiy1yHmoRUdbZVLDi57GaDCeA1755_Sx5jdbZVwL-0SyN6rr1dYu6V5icrJPpD9Uw0j5wMd21eGvZRxKjHf4Yu4xFMQSBUaKxVMR_lCz0lQM1HxLIwx5o1VFzya4nbd8twV0C8ucbNhnFdTZi-H86y8dn-wDDPhDoOA2-m941ufa1pPAVPjXY/w200-h129/Parallel%20workload.xlsx.5M,%200.01.png&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;
&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioy6059L3HFaVpX5dgOJibmTo6NYqahfxfzP-kDsBUMgXroAoyfWX-s8jHRzCe8Yz4-K3JmUeIVPW6hd5GySNWZtPB79vYKhfzHinfNlFsMjN20TSFqpPDV7IEpaeAYtD2A5Wjxmt73A_AGshvF2V2-CCgn-8AmNP18lnpVrTqbIsA1sW0HvVkjBtyMwY/s698/Parallel%20workload.xlsx.5M,%200.5.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioy6059L3HFaVpX5dgOJibmTo6NYqahfxfzP-kDsBUMgXroAoyfWX-s8jHRzCe8Yz4-K3JmUeIVPW6hd5GySNWZtPB79vYKhfzHinfNlFsMjN20TSFqpPDV7IEpaeAYtD2A5Wjxmt73A_AGshvF2V2-CCgn-8AmNP18lnpVrTqbIsA1sW0HvVkjBtyMwY/w200-h129/Parallel%20workload.xlsx.5M,%200.5.png&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;
&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjk6rxpKAZw_5IVAVIdzGfBGahg0NLbxctPtUsvExLJPFDleXYAaqePi6a89imdguwfaVXZCXCk7xoMTPKq058eIC69kzbKxhgRNLrPzU-1zt_tcM_wq2oZLOQNrxOdCU9TfniNvt0TW64KnpOydHgFW36y7IbQAc5-AKBu99tfY0PIuzq5jRLIfyHUarg/s698/Parallel%20workload.xlsx.5M,%200.99.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjk6rxpKAZw_5IVAVIdzGfBGahg0NLbxctPtUsvExLJPFDleXYAaqePi6a89imdguwfaVXZCXCk7xoMTPKq058eIC69kzbKxhgRNLrPzU-1zt_tcM_wq2oZLOQNrxOdCU9TfniNvt0TW64KnpOydHgFW36y7IbQAc5-AKBu99tfY0PIuzq5jRLIfyHUarg/w200-h129/Parallel%20workload.xlsx.5M,%200.99.png&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;5M updates, 45M lookups&lt;br /&gt;skew=0.01&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;5M updates, 45M lookups&lt;br /&gt;skew=0.5&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;5M updates, 45M lookups&lt;br /&gt;skew=0.99&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p dir=&quot;auto&quot;&gt;Again, bulk visitation increases performance noticeably. Please refer to the
&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_concurrent_flat_map&quot;&gt;benchmark site&lt;/a&gt;
for further information and results.&lt;/p&gt;
&lt;h2 dir=&quot;auto&quot; id=&quot;user-content-conclusions-and-next-steps&quot; tabindex=&quot;-1&quot;&gt;&lt;a name=&quot;conclusions-and-next-steps&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Conclusions and next steps&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;&lt;/h2&gt;&lt;p dir=&quot;auto&quot;&gt;Bulk visitation is an addition to the interface of &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt;
and &lt;code&gt;boost::concurrent_flat_set&lt;/code&gt; that improves lookup performance by pipelining
the internal visitation operations for chunked groups of keys. The tradeoff
for this increased throughput is higher latency, as keys need to be batched
by the user code before issuing the &lt;code&gt;visit&lt;/code&gt; operation.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;The insights we have gained with bulk visitation for concurrent containers
can be leveraged for future Boost.Unordered features:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;In principle, insertion can also be made to operate in bulk mode, although
the resulting pipelined algorithm is likely more complex than in the
visitation case, and thus performance increases are expected to be lower.&lt;/li&gt;&lt;li&gt;Bulk visitation (and insertion) is directly applicable to non-concurrent
containers such as
&lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/unordered/doc/html/unordered.html#unordered_flat_map&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;boost::unordered_flat_map&lt;/code&gt;&lt;/a&gt;:
the main problem here is one of interface design, since visitation is not
the default lookup API for these containers (classical iterator-based
lookup is provided instead). Some possible options are:
&lt;ol dir=&quot;auto&quot;&gt;&lt;li&gt;Use visitation as in the concurrent case.&lt;/li&gt;&lt;li&gt;Use an iterator-based lookup API that outputs the resulting iterators to
some user-provided buffer (probably modelled as an output &quot;meta&quot; iterator taking
container iterators).&lt;/li&gt;&lt;/ol&gt;
&lt;/li&gt;&lt;/ul&gt;
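&lt;p dir=&quot;auto&quot;&gt;In very rough strokes, the second option could have an interface shape like the following (a hypothetical sketch, not a proposed Boost.Unordered API; shown over &lt;code&gt;std::unordered_map&lt;/code&gt; for self-containment):&lt;/p&gt;

```cpp
#include <cassert>
#include <iterator>
#include <unordered_map>
#include <vector>

// Hypothetical sketch of an iterator-based bulk lookup: for each key, the
// resulting iterator (end() on lookup failure) is written to a
// user-provided output iterator. Internally, such an operation could
// pipeline and prefetch exactly as bulk visitation does; the naive loop
// below only illustrates the proposed interface shape.
template<typename Map, typename KeyIt, typename OutIt>
OutIt bulk_find(Map& m, KeyIt first, KeyIt last, OutIt out)
{
  for (; first != last; ++first) *out++ = m.find(*first);
  return out;
}
```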
&lt;p dir=&quot;auto&quot;&gt;Bulk visitation will be officially shipping in Boost 1.84 (December 2023) but is already
available by checking out the
&lt;a href=&quot;https://github.com/boostorg/unordered/&quot;&gt;Boost.Unordered repo&lt;/a&gt;. If you are interested
in this feature, please try it and report your local results and suggestions for
improvement. Your feedback on our current and future work is much welcome.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/792492574681886300/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2023/10/bulk-visitation-in-boostconcurrentflatm.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/792492574681886300'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/792492574681886300'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2023/10/bulk-visitation-in-boostconcurrentflatm.html' title='Bulk visitation in &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt;'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijST3J7HxWSPrQwhRoXQBiNXkaZ1Q4bzMNVcgVfevMOGG5iNde5LnNo7dbk8KIc7kPg9FHWMJ0DdYP6seHktxoNevDhkuVTq9I74-cStGjZ3AeueYRf1o4MHcOJPaTwUiqz34nGlnOXnfgMt3XQ_vhK8NfU3R2sKHvTtdXEWzyn3nQiLa8LdD7C2-fRJA/s72-w600-c/data_structure.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-7587804841869976956</id><published>2023-08-18T18:27:00.003+02:00</published><updated>2023-08-18T18:27:48.501+02:00</updated><title type='text'>User-defined class qualifiers in C++23</title><content 
type='html'>&lt;p&gt;It is generally known that type qualifiers (such as &lt;code&gt;const&lt;/code&gt; and &lt;code&gt;volatile&lt;/code&gt; in C++) can be regarded as &lt;a href=&quot;https://dl.acm.org/doi/pdf/10.1145/301618.301665&quot;&gt;a form of subtyping&lt;/a&gt;: for instance,&amp;nbsp;&lt;code&gt;const T&lt;/code&gt; is a &lt;i&gt;supertype&lt;/i&gt; of &lt;code&gt;T&lt;/code&gt; because the interface (available operations) of&amp;nbsp;&lt;code&gt;T&lt;/code&gt; is strictly wider than that of &lt;code&gt;const T&lt;/code&gt;. Foster et al. call a qualifier &lt;b&gt;q&lt;/b&gt; &lt;i&gt;positive&lt;/i&gt; if &lt;b&gt;q&lt;/b&gt;&amp;nbsp;&lt;code&gt;T&lt;/code&gt; is a supertype of&amp;nbsp;&lt;code&gt;T&lt;/code&gt;, and &lt;i&gt;negative&lt;/i&gt; if it is the other way around. Without real loss of generality, in what follows we only consider negative qualifiers, where&amp;nbsp;&lt;b&gt;q&lt;/b&gt;&amp;nbsp;&lt;code&gt;T&lt;/code&gt; is a &lt;i&gt;subtype&lt;/i&gt; of (extends the interface of) &lt;code&gt;T&lt;/code&gt;.&lt;/p&gt;&lt;p&gt;C++23 &lt;a href=&quot;https://en.cppreference.com/w/cpp/language/member_functions#Explicit_object_parameter&quot;&gt;explicit object parameters&lt;/a&gt; (colloquially known as &quot;deducing &lt;code&gt;this&lt;/code&gt;&quot;) allow for a particularly concise and effective realization of user-defined qualifiers for class types beyond what the language provides natively. For instance, this is a syntactically complete implementation of qualifier &lt;code&gt;mut&lt;/code&gt;, the dual/inverse of &lt;code&gt;const&lt;/code&gt; (not to be confused with &lt;code&gt;mutable&lt;/code&gt;):&lt;br /&gt;&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename T&amp;gt;&lt;br /&gt;struct mut: T&lt;br /&gt;{&lt;br /&gt;  using T::T;&lt;br /&gt;};&lt;br /&gt;&lt;br /&gt;template&amp;lt;typename T&amp;gt;&lt;br /&gt;T&amp;amp; as_const(T&amp;amp; x) { return x;}&lt;br /&gt;&lt;br /&gt;template&amp;lt;typename T&amp;gt;&lt;br /&gt;T&amp;amp; as_const(mut&amp;lt;T&amp;gt;&amp;amp; x) { return x;}&lt;br /&gt;&lt;br /&gt;struct X&lt;br /&gt;{&lt;br /&gt;  void foo() {}&lt;br /&gt;  void bar(this mut&amp;lt;X&amp;gt;&amp;amp;) {}&lt;br /&gt;};&lt;br /&gt;&lt;br /&gt;int main()&lt;br /&gt;{&lt;br /&gt;  mut&amp;lt;X&amp;gt; x;&lt;br /&gt;  x.foo();&lt;br /&gt;  x.bar();&lt;br /&gt;&lt;br /&gt;  auto&amp;amp; y = as_const(x);&lt;br /&gt;  y.foo();&lt;br /&gt;  y.bar(); // &lt;span class=&quot;linked-compiler-output-line&quot;&gt;error: cannot convert argument 1 from &#39;X&#39; to &#39;mut&amp;lt;X&amp;gt; &amp;amp;&#39;&lt;br /&gt;&lt;/span&gt;&lt;br /&gt;  X&amp;amp; z = x;&lt;br /&gt;  z.foo();&lt;br /&gt;  z.bar(); // &lt;span class=&quot;linked-compiler-output-line&quot;&gt;error: cannot convert argument 1 from &#39;X&#39; to &#39;mut&amp;lt;X&amp;gt; &amp;amp;&#39;&lt;br /&gt;&lt;/span&gt;}&lt;/pre&gt;&lt;/div&gt;&lt;p&gt;The class &lt;code&gt;X&lt;/code&gt; has a regular (generally accessible) member function &lt;code&gt;foo&lt;/code&gt; and then &lt;code&gt;bar&lt;/code&gt;, which is only accessible to instances of the form &lt;code&gt;mut&amp;lt;X&amp;gt;&lt;/code&gt;. 
Access checking and implicit and explicit conversion between subtype&amp;nbsp;&lt;code&gt;mut&amp;lt;X&amp;gt;&lt;/code&gt; and&amp;nbsp;&lt;code&gt;X&lt;/code&gt; work as expected.&lt;/p&gt;&lt;p&gt;With some help from &lt;a href=&quot;https://boost.org/libs/mp11&quot;&gt;Boost.Mp11&lt;/a&gt;, the idiom can be generalized to the case of several qualifiers:&lt;br /&gt;&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;#include &amp;lt;boost/mp11/algorithm.hpp&amp;gt;&lt;br /&gt;#include &amp;lt;boost/mp11/list.hpp&amp;gt;&lt;br /&gt;#include &amp;lt;type_traits&amp;gt;&lt;br /&gt;&lt;br /&gt;template&amp;lt;typename T,typename... Qualifiers&amp;gt;&lt;br /&gt;struct access: T&lt;br /&gt;{&lt;br /&gt;  using qualifier_list=boost::mp11::mp_list&amp;lt;Qualifiers...&amp;gt;;&lt;br /&gt;&lt;br /&gt;  using T::T;&lt;br /&gt;};&lt;br /&gt;&lt;br /&gt;template&amp;lt;typename T, typename... Qualifiers&amp;gt;&lt;br /&gt;concept qualified =&lt;br /&gt;  (boost::mp11::mp_contains&amp;lt;&lt;br /&gt;    typename std::remove_cvref_t&amp;lt;T&amp;gt;::qualifier_list,&lt;br /&gt;    Qualifiers&amp;gt;::value &amp;amp;&amp;amp; ...);&lt;br /&gt;&lt;br /&gt;// some qualifiers&lt;br /&gt;struct mut;&lt;br /&gt;struct synchronized;&lt;br /&gt;&lt;br /&gt;template&amp;lt;typename T&amp;gt;&lt;br /&gt;concept is_mut =  qualified&amp;lt;T, mut&amp;gt;;&lt;br /&gt;&lt;br /&gt;template&amp;lt;typename T&amp;gt;&lt;br /&gt;concept is_synchronized = qualified&amp;lt;T, synchronized&amp;gt;;&lt;br /&gt;&lt;br /&gt;struct X&lt;br /&gt;{&lt;br /&gt;  void foo() {}&lt;br /&gt;&lt;br /&gt;  template&amp;lt;is_mut Self&amp;gt;&lt;br /&gt;  void bar(this Self&amp;amp;&amp;amp;) {} &lt;br /&gt;&lt;br /&gt;  template&amp;lt;is_synchronized Self&amp;gt;&lt;br /&gt;  void baz(this Self&amp;amp;&amp;amp;) {}&lt;br /&gt;&lt;br /&gt;  template&amp;lt;typename Self&amp;gt;&lt;br /&gt;  void qux(this Self&amp;amp;&amp;amp;)&lt;br /&gt;  requires qualified&amp;lt;Self, mut, synchronized&amp;gt;&lt;br /&gt;  {}&lt;br /&gt;};&lt;br /&gt;&lt;br /&gt;int main()&lt;br /&gt;{&lt;br /&gt;  access&amp;lt;X, mut&amp;gt; x;&lt;br /&gt;&lt;br /&gt;  x.foo();&lt;br /&gt;  x.bar();&lt;br /&gt;  x.baz(); // error: associated constraints are not satisfied&lt;br /&gt;  
x.qux(); // error: associated constraints are not satisfied&lt;br /&gt;&lt;br /&gt;  X y;&lt;br /&gt;  y.foo();&lt;br /&gt;  y.bar(); // error: associated constraints are not satisfied&lt;br /&gt;&lt;br /&gt;  access&amp;lt;X, mut, synchronized&amp;gt; z;&lt;br /&gt;  z.bar();&lt;br /&gt;  z.baz();&lt;br /&gt;  z.qux();&lt;br /&gt;}&lt;/pre&gt;&lt;/div&gt;One difficulty remains, though:&lt;br /&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;int main()&lt;br /&gt;{&lt;br /&gt;  access&amp;lt;X, mut, synchronized&amp;gt; z;&lt;br /&gt;  //...&lt;br /&gt;  access&amp;lt;X, mut&amp;gt;&amp;amp; w=z; // error: &lt;span class=&quot;linked-compiler-output-line&quot;&gt;cannot convert from&lt;br /&gt;                       // &#39;access&amp;lt;X,mut,synchronized&amp;gt;&#39;&lt;br /&gt;                       // to &#39;access&amp;lt;X,mut&amp;gt; &amp;amp;&#39;&lt;/span&gt;&lt;br /&gt;}&lt;/pre&gt;&lt;/div&gt;&lt;code&gt;access&amp;lt;T,Qualifiers...&amp;gt;&amp;amp;&lt;/code&gt; converts to &lt;code&gt;T&amp;amp;&lt;/code&gt;, but not to&amp;nbsp;&lt;code&gt;access&amp;lt;T,Qualifiers2...&amp;gt;&amp;amp;&lt;/code&gt;&amp;nbsp; where &lt;code&gt;Qualifiers2&lt;/code&gt; is a subset of&amp;nbsp; &lt;code&gt;Qualifiers&lt;/code&gt; (for the mathematically inclined, qualifiers &lt;b&gt;q&lt;/b&gt;&lt;sub&gt;&lt;b&gt;1&lt;/b&gt;&lt;/sub&gt;, ... , &lt;b&gt;q&lt;/b&gt;&lt;sub&gt;&lt;b&gt;&lt;i&gt;N&lt;/i&gt;&lt;/b&gt;&lt;/sub&gt; over a type &lt;code&gt;T&lt;/code&gt; induce a &lt;a href=&quot;https://en.wikipedia.org/wiki/Lattice_(order)&quot;&gt;lattice&lt;/a&gt; of subtypes &lt;b&gt;Q&lt;/b&gt; &lt;code&gt;T&lt;/code&gt;, &lt;b&gt;Q&lt;/b&gt; ⊆ {&lt;b&gt;q&lt;/b&gt;&lt;sub&gt;&lt;b&gt;1&lt;/b&gt;&lt;/sub&gt;, ... , &lt;b&gt;q&lt;/b&gt;&lt;sub&gt;&lt;b&gt;&lt;i&gt;N&lt;/i&gt;&lt;/b&gt;&lt;/sub&gt;}, ordered by qualifier inclusion). Incurring undefined behavior, we could do the following:&lt;br /&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename T,typename... Qualifiers&amp;gt;&lt;br /&gt;struct access: T&lt;br /&gt;{&lt;br /&gt;  using qualifier_list=boost::mp11::mp_list&amp;lt;Qualifiers...&amp;gt;;&lt;br /&gt;&lt;br /&gt;  using T::T;&lt;br /&gt;&lt;br /&gt;  template&amp;lt;typename... Qualifiers2&amp;gt;&lt;br /&gt;  operator access&amp;lt;T, Qualifiers2...&amp;gt;&amp;amp;()&lt;br /&gt;  requires qualified&amp;lt;access, Qualifiers2...&amp;gt;&lt;br /&gt;  {&lt;br /&gt;    return reinterpret_cast&amp;lt;access&amp;lt;T, Qualifiers2...&amp;gt;&amp;amp;&amp;gt;(*this);&lt;br /&gt;  }&lt;br /&gt;};&lt;/pre&gt;&lt;/div&gt;A more interesting challenge is the following: as laid out, this technique implements &lt;i&gt;syntactic&lt;/i&gt; qualifier subtyping, but does not do anything towards enforcing the semantics associated with each qualifier: for instance, &lt;code&gt;synchronized&lt;/code&gt; should lock a mutex automatically, and a qualifier associated with some particular invariant should assert it after each invocation of a qualifier-constrained member function. 
I don&#39;t know if this functionality can be more or less easily integrated into the presented framework: feedback on the matter is much welcome.&lt;br /&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/7587804841869976956/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2023/08/user-defined-class-qualifiers-in-c23.html#comment-form' title='1 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/7587804841869976956'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/7587804841869976956'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2023/08/user-defined-class-qualifiers-in-c23.html' title='User-defined class qualifiers in C++23'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>1</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-1771705204450027343</id><published>2023-07-07T12:47:00.005+02:00</published><updated>2023-07-07T22:05:21.979+02:00</updated><title type='text'>Inside boost::concurrent_flat_map</title><content type='html'>&lt;p&gt;&lt;/p&gt;&lt;h1 style=&quot;text-align: left;&quot; tabindex=&quot;-1&quot;&gt;&lt;/h1&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;a href=&quot;#introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#state-of-the-art&quot;&gt;State of the art&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a 
href=&quot;#design-principles&quot;&gt;Design principles&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#data-structure&quot;&gt;Data structure&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#algorithms&quot;&gt;Algorithms&lt;/a&gt;&lt;/li&gt;&lt;ul&gt;&lt;li&gt;&lt;a href=&quot;#lookup&quot;&gt;Lookup&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#insertion&quot;&gt;Insertion&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;&lt;li&gt;&lt;a href=&quot;#visitation-api&quot;&gt;Visitation API&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#benchmarks&quot;&gt;Benchmarks&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#conclusions-and-next-steps&quot;&gt;Conclusions and next steps&lt;/a&gt;&lt;br /&gt;&lt;/li&gt;&lt;/ul&gt;&lt;a name=&quot;introduction&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Introduction&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;Starting in Boost 1.83, &lt;a href=&quot;https://www.boost.org/doc/libs/release/libs/unordered/&quot; rel=&quot;nofollow&quot;&gt;Boost.Unordered&lt;/a&gt; provides
&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt;, an associative container suitable for high-load parallel scenarios.
&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; leverages much of the work done for
&lt;a href=&quot;https://bannalia.blogspot.com/2022/11/inside-boostunorderedflatmap.html&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;boost::unordered_flat_map&lt;/code&gt;&lt;/a&gt;,
but also introduces innovations, particularly in the areas of low-contention
operation and API design, that we find worth discussing.&lt;/p&gt;
&lt;a name=&quot;state-of-the-art&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;State of the art&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;The space of C++ concurrent hashmaps spans a diversity of competing techniques, from
traditional ones such as lock-based structures or sharding, to very specialized approaches relying
on CAS instructions, hazard pointers, Read-Copy-Update (RCU), etc. We list some
prominent examples:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;a href=&quot;https://spec.oneapi.io/versions/latest/elements/oneTBB/source/containers/concurrent_hash_map_cls.html&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;tbb::concurrent_hash_map&lt;/code&gt;&lt;/a&gt;
uses closed addressing combined with bucket-level read-write locking. The bucket array is split
into a number of &lt;i&gt;segments&lt;/i&gt; to allow for incremental rehashing without locking the entire table.
Concurrent insertion, lookup and erasure are supported, but iterators are not thread safe.
Locked access to elements is done via so-called &lt;i&gt;accessors&lt;/i&gt;.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://spec.oneapi.io/versions/latest/elements/oneTBB/source/containers/concurrent_unordered_map_cls.html&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;tbb::concurrent_unordered_map&lt;/code&gt;&lt;/a&gt;
also uses closed addressing, but buckets are organized into lock-free
&lt;a href=&quot;https://dl.acm.org/doi/10.1145/1147954.1147958&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;split-ordered lists&lt;/i&gt;&lt;/a&gt;.
Concurrent insertion, lookup, and traversal are supported, whereas erasure is not
thread safe. Element access via iterators is not protected against data races.&lt;/li&gt;&lt;li&gt;&lt;i&gt;Sharding&lt;/i&gt; consists of dividing the hashmap into a fixed number &lt;i&gt;N&lt;/i&gt; of submaps indexed
by hash (typically, the element &lt;i&gt;x&lt;/i&gt; goes into the submap with index hash(&lt;i&gt;x&lt;/i&gt;) mod &lt;i&gt;N&lt;/i&gt;).
Sharding is extremely easy to implement starting from a non-concurrent hashmap and provides
incremental rehashing, but the degree of concurrency is limited by &lt;i&gt;N&lt;/i&gt;.
As an example,
&lt;a href=&quot;https://github.com/greg7mdp/gtl/blob/main/docs/phmap.md&quot;&gt;&lt;code&gt;gtl::parallel_flat_hash_map&lt;/code&gt;&lt;/a&gt; uses sharding
with submaps essentially derived from  &lt;a href=&quot;https://abseil.io/docs/cpp/guides/container&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;absl::flat_hash_map&lt;/code&gt;&lt;/a&gt;,
and inherits the excellent performance of this base container.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://efficient.github.io/libcuckoo/classlibcuckoo_1_1cuckoohash__map.html&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;libcuckoo::cuckoohash_map&lt;/code&gt;&lt;/a&gt;
adds efficient thread safety to classical
&lt;a href=&quot;https://en.wikipedia.org/wiki/Cuckoo_hashing&quot; rel=&quot;nofollow&quot;&gt;cuckoo hashing&lt;/a&gt; by means of a number of carefully
engineered &lt;a href=&quot;https://www.cs.princeton.edu/~mfreed/docs/cuckoo-eurosys14.pdf&quot; rel=&quot;nofollow&quot;&gt;techniques&lt;/a&gt;
including fine-grained locking of slot groups or &quot;strips&quot; (of size 4 by default),
optimistic insertion and data prefetching.&lt;/li&gt;&lt;li&gt;Meta&#39;s &lt;a href=&quot;https://github.com/facebook/folly/blob/main/folly/concurrency/ConcurrentHashMap.h&quot;&gt;&lt;code&gt;folly::ConcurrentHashMap&lt;/code&gt;&lt;/a&gt;
combines closed addressing, sharding and &lt;a href=&quot;https://en.wikipedia.org/wiki/Hazard_pointer&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;hazard pointers&lt;/i&gt;&lt;/a&gt;
to elements to achieve lock-free lookup (modifying operations such as insertion
and erasure lock the affected shard). Iterators, which internally hold a hazard
pointer to the element, can be validly dereferenced even after the element
has been erased from the map; access, on the other hand, is read-only and
elements are basically treated as immutable.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://github.com/facebook/folly/blob/main/folly/docs/AtomicHashMap.md&quot;&gt;&lt;code&gt;folly::AtomicHashMap&lt;/code&gt;&lt;/a&gt;
is a very specialized hashmap that imposes severe usage restrictions in exchange for
very high time and space performance. Keys must be trivially copyable and 32 or 64 bits in size so that
they can be handled internally by means of atomic instructions; also, some key values
must be reserved to mark empty slots, tombstones and locked elements, so that
no extra memory is required for bookkeeping information and locks. The internal
data structure is based on open addressing with linear probing. Non-modifying
operations are lock-free. Rehashing is not provided: instead, extra bucket arrays are appended
when the map becomes full, the expectation being that the user provide the estimated final size at
construction time to avoid this rather inefficient growth mechanism. Element access
is not protected against data races.&lt;/li&gt;&lt;li&gt;On a more experimental/academic note, we can mention initiatives such as
&lt;a href=&quot;https://preshing.com/20160201/new-concurrent-hash-maps-for-cpp/&quot; rel=&quot;nofollow&quot;&gt;Junction&lt;/a&gt;,
&lt;a href=&quot;https://arxiv.org/pdf/1601.04017.pdf&quot; rel=&quot;nofollow&quot;&gt;Folklore&lt;/a&gt; and
&lt;a href=&quot;https://dl.acm.org/doi/pdf/10.1145/3552326.3587457&quot; rel=&quot;nofollow&quot;&gt;DRAMHiT&lt;/a&gt;. In general, these
do not provide industry-grade container implementations but explore interesting
ideas that could eventually be adopted by mainstream libraries, such as
&lt;a href=&quot;https://en.wikipedia.org/wiki/Read-copy-update&quot; rel=&quot;nofollow&quot;&gt;RCU&lt;/a&gt;-based data structures,
lock-free algorithms relying on
&lt;a href=&quot;https://en.wikipedia.org/wiki/Compare-and-swap&quot; rel=&quot;nofollow&quot;&gt;CAS&lt;/a&gt; and/or
&lt;a href=&quot;https://en.wikipedia.org/wiki/Transactional_memory&quot; rel=&quot;nofollow&quot;&gt;transactional memory&lt;/a&gt;,
parallel rehashing and operation batching.&lt;/li&gt;&lt;/ul&gt;
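Of the techniques listed above, sharding is the easiest to reproduce from scratch. Here is a minimal sketch over `std::unordered_map` (illustrative only: the shard count, mutex choice and member names are our own assumptions, not `gtl::parallel_flat_hash_map`'s actual implementation):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <functional>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>
#include <unordered_map>

// Minimal sharded map: N submaps, each guarded by its own read-write mutex.
// The element with key k goes into the submap with index hash(k) % N, so two
// operations contend only when their keys fall into the same shard.
template<typename Key, typename T, std::size_t N = 16>
class sharded_map {
  struct shard {
    std::shared_mutex m;
    std::unordered_map<Key, T> map;
  };
  std::array<shard, N> shards_;

  shard& shard_for(const Key& k) {
    return shards_[std::hash<Key>{}(k) % N];
  }

public:
  void insert_or_assign(const Key& k, T v) {
    shard& s = shard_for(k);
    std::unique_lock lock(s.m);              // exclusive: we modify the shard
    s.map.insert_or_assign(k, std::move(v));
  }

  std::optional<T> find(const Key& k) {
    shard& s = shard_for(k);
    std::shared_lock lock(s.m);              // shared: concurrent readers OK
    auto it = s.map.find(k);
    if (it == s.map.end()) return std::nullopt;
    return it->second;
  }
};
```

Note how the degree of concurrency is capped by `N`: once more than `N` threads hammer the map, some of them necessarily contend on the same shard mutex.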
&lt;a name=&quot;design-principles&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Design principles&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;Unlike non-concurrent C++ containers, where the STL acts as a sort of
reference interface, concurrent hashmaps on the market
differ wildly in terms of requirements, API and provided functionality. When
designing &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt;, we have aimed for a general-purpose
container&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;with no special restrictions on key and mapped types,&lt;/li&gt;&lt;li&gt;providing full thread safety without external synchronization mechanisms,&lt;/li&gt;&lt;li&gt;and disrupting as little as possible the conceptual and operational model
of &quot;traditional&quot; containers.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;These principles rule out some scenarios such as requiring that
keys be of an integral type or putting an extra burden on the user in terms
of access synchronization or active garbage collection. They also inform
concrete design decisions:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;code&gt;boost::concurrent_flat_map&amp;lt;Key, T, Hash, Pred, Allocator&amp;gt;&lt;/code&gt; must be a valid
instantiation in all practical cases where
&lt;code&gt;boost::unordered_flat_map&amp;lt;Key, T, Hash, Pred, Allocator&amp;gt;&lt;/code&gt; is.&lt;/li&gt;&lt;li&gt;Thread-safe value semantics are provided (including copy construction, assignment,
swap, etc.)&lt;/li&gt;&lt;li&gt;All member functions in &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; are provided by
&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; except if there&#39;s a fundamental reason why
they can&#39;t work safely or efficiently in a concurrent setting.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;The last guideline has the most impact on API design. In particular, we have
decided &lt;i&gt;not to provide iterators&lt;/i&gt;, either blocking or non-blocking: non-blocking
iterators are inherently unsafe, while blocking ones increase contention when used
carelessly and can easily lead to deadlocks:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; Thread 1&lt;/span&gt;
map_type::iterator it1=map.find(x1), it2=map.find(x2);

&lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; Thread 2&lt;/span&gt;
map_type::iterator it2=map.find(x2), it1=map.find(x1);&lt;/pre&gt;&lt;div class=&quot;zeroclipboard-container&quot;&gt;
    &lt;/div&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;In place of iterators, &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; offers an access API
based on internal visitation, as described in a later &lt;a href=&quot;#visitation-api&quot;&gt;section&lt;/a&gt;.&lt;/p&gt;
&lt;a name=&quot;data-structure&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Data structure&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; uses the same
&lt;a href=&quot;https://bannalia.blogspot.com/#boostunordered_flat_map-data-structure&quot; rel=&quot;nofollow&quot;&gt;open-addressing layout&lt;/a&gt;
as &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;, where the bucket array is split into
2&lt;sup&gt;&lt;i&gt;n&lt;/i&gt;&lt;/sup&gt; groups of &lt;i&gt;N&lt;/i&gt; = 15 slots and each group has an associated
16-byte metadata word for SIMD-based reduced-hash matching and insertion overflow
control.&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgqQsUWKZyjAuhnZkvVV6DvZQ6ytlHWwT-uo9jSonjlpvFeK0pFu8Lkn-Awvr2n0c8BSQqD_tyPxjTTW_ybWxbFMc-nyUF_MfehkQ4es7pVwyH_GIws29indL8Nzsjiu67qdJSROpd7Di4eVgx9reLqZBrRFFLf8CNjFk03nMVyH7Ze6mdiUAfvFzFN3E/s16000/data_structure.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;351&quot; data-original-width=&quot;931&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgqQsUWKZyjAuhnZkvVV6DvZQ6ytlHWwT-uo9jSonjlpvFeK0pFu8Lkn-Awvr2n0c8BSQqD_tyPxjTTW_ybWxbFMc-nyUF_MfehkQ4es7pVwyH_GIws29indL8Nzsjiu67qdJSROpd7Di4eVgx9reLqZBrRFFLf8CNjFk03nMVyH7Ze6mdiUAfvFzFN3E/w600/data_structure.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;On top of this layout, two synchronization levels are added:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;Container level: A read-write mutex is used to control access from any operation to the
container. This access is always requested in read mode (i.e. shared) except for operations
that require that the whole bucket array be replaced, like rehashing, swapping,
assignment, etc. This means that, in practice, this level of synchronization does
not cause any contention at all, even for modifying operations like insertion and
erasure. To reduce cache coherence traffic, the mutex is implemented as an array
of read-write spinlocks occupying separate cache lines, and each thread is
assigned one spinlock in a round-robin fashion at &lt;code&gt;thread_local&lt;/code&gt; construction time:
read/shared access involves only the assigned spinlock, whereas write/exclusive
access, which is comparatively much rarer, requires that all spinlocks be locked.&lt;/li&gt;&lt;li&gt;Group level: Each group has a dedicated read-write spinlock to control access to its
slots, plus an atomic &lt;i&gt;insertion counter&lt;/i&gt; used for transactional optimistic insertion
as described below.&lt;/li&gt;&lt;/ul&gt;
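The container-level mutex described above can be sketched as an array of cache-line-aligned read-write spinlocks with round-robin thread assignment. This is a simplified illustration of the idea only; Boost.Unordered's actual mutex differs in many details:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>
#include <thread>

// Sketch of a distributed read-write lock: each thread is assigned one
// spinlock (round-robin) and read-locks only that one; exclusive locking
// acquires all of them. Slots live on separate cache lines to reduce
// coherence traffic. Illustrative only, not Boost.Unordered's actual code.
class distributed_rw_lock {
  static constexpr std::size_t N = 8;
  struct alignas(64) slot {                  // one slot per cache line
    std::atomic<int>  readers{0};
    std::atomic<bool> writing{false};
  };
  slot slots_[N];

  static std::size_t thread_index() {
    static std::atomic<std::size_t> counter{0};
    thread_local const std::size_t idx = counter++ % N;  // round-robin
    return idx;
  }

public:
  void lock_shared() {
    slot& s = slots_[thread_index()];
    for (;;) {
      s.readers.fetch_add(1, std::memory_order_acquire);
      if (!s.writing.load(std::memory_order_acquire)) return;
      s.readers.fetch_sub(1, std::memory_order_release); // writer active: back off
      while (s.writing.load(std::memory_order_relaxed)) {} // spin until clear
    }
  }
  void unlock_shared() {
    slots_[thread_index()].readers.fetch_sub(1, std::memory_order_release);
  }
  void lock() {  // exclusive access: acquire every slot, always in order
    for (auto& s : slots_) {
      bool expected = false;
      while (!s.writing.compare_exchange_weak(expected, true,
                                              std::memory_order_acquire)) {
        expected = false;
      }
      while (s.readers.load(std::memory_order_acquire) != 0) {} // drain readers
    }
  }
  void unlock() {
    for (auto& s : slots_) s.writing.store(false, std::memory_order_release);
  }
};
```

Shared acquisition touches a single cache line owned (statistically) by the calling thread, which is what makes container-level synchronization essentially contention-free for the common read path.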
&lt;a name=&quot;algorithms&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Algorithms&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;The core algorithms of &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; are variations of those of
&lt;code&gt;boost::unordered_flat_map&lt;/code&gt; with minimal changes to prevent data races while keeping
group-level contention to a minimum.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;In the following diagrams, white boxes represent lock-free steps, while gray boxes
are executed within the scope of a group lock. Metadata is handled atomically both
in locked and lock-free scenarios.&lt;/p&gt;&lt;p&gt;
&lt;a name=&quot;lookup&quot;&gt;&lt;/a&gt;&lt;/p&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;a name=&quot;lookup&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Lookup &lt;br /&gt;&lt;/b&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;
&lt;/p&gt;&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD1yg-ZTZW9jAZEnAvGVb8MBF-GDLjjGNzL_H7oEqPU0qv-f3yeOaAlAl-mf5yRFnEQEY3iaq1Ms7Jg-nmQf8ZeFqxWC2DOsKgUFqfiIuHN-2nDhSs2Oz6l9JT4RroxxcYJnfg558CsPlYFWfi7yOi4QFGTLxKS6Ng8Zmmp-a9Ve7Ad2jRzESIyXo6cjY/s1222/lookup.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;605&quot; data-original-width=&quot;1222&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiD1yg-ZTZW9jAZEnAvGVb8MBF-GDLjjGNzL_H7oEqPU0qv-f3yeOaAlAl-mf5yRFnEQEY3iaq1Ms7Jg-nmQf8ZeFqxWC2DOsKgUFqfiIuHN-2nDhSs2Oz6l9JT4RroxxcYJnfg558CsPlYFWfi7yOi4QFGTLxKS6Ng8Zmmp-a9Ve7Ad2jRzESIyXo6cjY/w600/lookup.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;Most steps of the lookup algorithm (hash calculation, probing, element
pre-checking via SIMD matching with the value&#39;s reduced hash) are lock-free and
do not synchronize with any operation on the metadata. When SIMD matching detects
a potential candidate, double-checking for slot occupancy and the
actual comparison with the element are done within the group lock; note
that the occupancy double check is necessary precisely because SIMD matching
is lock-free and the status of the identified slot may have changed before
group locking.&lt;/p&gt;
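As a rough illustration of the reduced-hash pre-checking step, here is a scalar stand-in for the SIMD match over a group's 15 metadata bytes (hypothetical names; the real code performs all comparisons at once with SSE2/NEON instructions and also encodes overflow information in the metadata word):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Scalar stand-in for the SIMD reduced-hash match: compare the value's
// reduced hash against each of the group's 15 metadata bytes and return a
// bitmask of candidate slots to be double-checked under the group lock.
constexpr std::size_t slots_per_group = 15;

std::uint16_t match(const std::array<std::uint8_t, slots_per_group>& metadata,
                    std::uint8_t reduced_hash) {
  std::uint16_t mask = 0;
  for (std::size_t i = 0; i < slots_per_group; ++i) {
    if (metadata[i] == reduced_hash) mask |= std::uint16_t(1) << i;
  }
  return mask;
}
```

Because this match runs without locking, a set bit is only a candidate: the slot may have been emptied or overwritten by the time the group lock is taken, hence the occupancy double check described above.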
&lt;a name=&quot;insertion&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Insertion&lt;/b&gt;&lt;/span&gt;&lt;/div&gt;&lt;/a&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBF4Si7cQVms5W1QzREohqt632hKN8zSbvlUwvo-7mZuFNl7jbsdy526MnuX_WmZBlpQeouNtQveo7cTL9wj2S_I6IlAW-4UIZ36g4V6-A8nuN1PTFMU4wz5--FoVkCP9LBz3Hiq_DKVVqjySwomj6C8a01iGvv2-oS6Q8ia4l4yijILeTQDNXNUZYwzA/s1222/insertion.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;616&quot; data-original-width=&quot;1222&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjBF4Si7cQVms5W1QzREohqt632hKN8zSbvlUwvo-7mZuFNl7jbsdy526MnuX_WmZBlpQeouNtQveo7cTL9wj2S_I6IlAW-4UIZ36g4V6-A8nuN1PTFMU4wz5--FoVkCP9LBz3Hiq_DKVVqjySwomj6C8a01iGvv2-oS6Q8ia4l4yijILeTQDNXNUZYwzA/w600/insertion.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;The main challenge of any concurrent insertion algorithm is to prevent an
element &lt;i&gt;x&lt;/i&gt; from being inserted twice by different threads running at the
same time. As open-addressing probing starts at a position &lt;i&gt;p&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt;
univocally determined by the hash value of &lt;i&gt;x&lt;/i&gt;, a naïve (and flawed) approach
is to lock &lt;i&gt;p&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt; for the entire duration of the insertion
procedure: this leads to deadlocking if the probing sequences of two
different elements intersect.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;We have implemented the following transactional optimistic insertion algorithm:
At the beginning of insertion, the value of the insertion counter for
the group at position &lt;i&gt;p&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt; is saved locally and insertion
proceeds normally, first checking that an element equivalent to &lt;i&gt;x&lt;/i&gt; does
not exist and then looking for available slots starting at &lt;i&gt;p&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt;
and locking only one group of the probing sequence at a time; when
an available slot is found, the associated metadata is updated,
the insertion counter at &lt;i&gt;p&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt; is incremented, and:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;If no other thread got in the way (i.e. if the pre-increment value of the counter
coincides with the local value stored at the beginning), then the transaction
is successful and insertion can be finished by storing the element into
the slot before releasing the group lock.&lt;/li&gt;&lt;li&gt;Otherwise, metadata changes are rolled back and the entire insertion process is
started over.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;Our measurements indicate that, even under adversarial situations, the
ratio of start-overs to successful insertions ranges in the parts per million.&lt;/p&gt;
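The transactional check at the heart of this algorithm can be condensed as follows (illustrative sketch with hypothetical names, not Boost.Unordered's actual code):

```cpp
#include <atomic>
#include <cassert>

// Sketch of the transactional check: save the insertion counter of the group
// at position p0, attempt the insertion, then increment the counter and
// compare its pre-increment value with the saved one.
struct group {
  std::atomic<unsigned> insert_counter{0};
};

// Returns true if no other insertion committed on this group since `saved`
// was read, i.e. the transaction succeeds.
bool try_commit_insertion(group& g, unsigned saved) {
  return g.insert_counter.fetch_add(1, std::memory_order_acq_rel) == saved;
}

template<typename InsertFn>
void insert_with_retries(group& g, InsertFn do_insert) {
  for (;;) {
    unsigned saved = g.insert_counter.load(std::memory_order_acquire);
    do_insert();  // check for duplicates, find a slot, update metadata...
    if (try_commit_insertion(g, saved)) return;  // transaction succeeded
    // otherwise: roll back the metadata changes and start over
  }
}
```

The design bets on retries being rare, which the start-over measurements quoted above confirm: paying an occasional rolled-back insertion is far cheaper than holding a lock across the whole probing sequence.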
&lt;a name=&quot;visitation-api&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Visitation API&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;From an operational point of view, container iterators serve two main purposes:
combining lookup/insertion with further access to the relevant element:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt; it = m.find(k);
&lt;span class=&quot;pl-k&quot;&gt;if&lt;/span&gt; (it != m.end()) {
  it-&amp;gt;&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt; = &lt;span class=&quot;pl-c1&quot;&gt;0&lt;/span&gt;;
}&lt;/pre&gt;&lt;div class=&quot;zeroclipboard-container&quot;&gt;
    &lt;/div&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;and container traversal:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; iterators used internally by range-for&lt;/span&gt;
&lt;span class=&quot;pl-k&quot;&gt;for&lt;/span&gt;(&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt;&amp;amp; x: m) {
  x.&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt; = &lt;span class=&quot;pl-c1&quot;&gt;0&lt;/span&gt;;
}&lt;/pre&gt;&lt;div class=&quot;zeroclipboard-container&quot;&gt;
    &lt;/div&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;Having decided that &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; should not rely on iterators
due to their inherent concurrency problems, a design alternative is to move element
access into the container operations themselves, where it can be done in a
thread-safe manner. This is just a form of the familiar visitation pattern:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;m.visit(k, [](&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt;&amp;amp; x) {
  x.&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt; = &lt;span class=&quot;pl-c1&quot;&gt;0&lt;/span&gt;;
});

m.visit_all([](&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt;&amp;amp; x) {
  x.&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt; = &lt;span class=&quot;pl-c1&quot;&gt;0&lt;/span&gt;;
});&lt;/pre&gt;&lt;div class=&quot;zeroclipboard-container&quot;&gt;
    &lt;/div&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; provides visitation-enabled variations
of classical map operations wherever it makes sense:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;code&gt;visit&lt;/code&gt;, &lt;code&gt;cvisit&lt;/code&gt; (in place of &lt;code&gt;find&lt;/code&gt;)&lt;/li&gt;&lt;li&gt;&lt;code&gt;visit_all&lt;/code&gt;, &lt;code&gt;cvisit_all&lt;/code&gt; (as a substitute of container traversal)&lt;/li&gt;&lt;li&gt;&lt;code&gt;emplace_or_visit&lt;/code&gt;, &lt;code&gt;emplace_or_cvisit&lt;/code&gt;&lt;/li&gt;&lt;li&gt;&lt;code&gt;insert_or_visit&lt;/code&gt;, &lt;code&gt;insert_or_cvisit&lt;/code&gt;&lt;/li&gt;&lt;li&gt;&lt;code&gt;try_emplace_or_visit&lt;/code&gt;, &lt;code&gt;try_emplace_or_cvisit&lt;/code&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;&lt;code&gt;cvisit&lt;/code&gt; stands for constant visitation, that is, the visitation function
is granted read-only access to the element, which has less contention than
write access.&lt;/p&gt;
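The appeal of these combined operations is that lookup and element access happen under the container's own synchronization, so no other thread can observe an intermediate state. Their semantics can be emulated with a plain map and a single mutex (hypothetical helper for illustration; `boost::concurrent_flat_map` uses its internal fine-grained locks instead of one big mutex):

```cpp
#include <cassert>
#include <mutex>
#include <unordered_map>
#include <utility>

// Emulation of try_emplace_or_visit semantics: if the key is absent, insert
// (k, v); otherwise call the visitor on the existing element. Everything
// happens under one lock, so the check-then-act sequence is atomic from the
// point of view of other threads.
template<typename Map, typename T, typename F>
bool try_emplace_or_visit(Map& m, std::mutex& mtx,
                          const typename Map::key_type& k, T&& v, F visit) {
  std::lock_guard lock(mtx);
  auto [it, inserted] = m.try_emplace(k, std::forward<T>(v));
  if (!inserted) visit(*it);  // element already present: visit it instead
  return inserted;
}
```

With separate `find`-then-modify calls, another thread could erase or change the element in between; folding visitation into the operation removes that window.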
&lt;p dir=&quot;auto&quot;&gt;Traversal functions &lt;code&gt;[c]visit_all&lt;/code&gt; and &lt;code&gt;erase_if&lt;/code&gt; also have parallel versions:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;m.visit_all(std::execution::par, [](&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt;&amp;amp; x) {
  x.&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt; = &lt;span class=&quot;pl-c1&quot;&gt;0&lt;/span&gt;;
});&lt;/pre&gt;&lt;div class=&quot;zeroclipboard-container&quot;&gt;
    &lt;/div&gt;&lt;/div&gt;
&lt;a name=&quot;benchmarks&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Benchmarks&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;We&#39;ve tested &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; against
&lt;a href=&quot;https://spec.oneapi.io/versions/latest/elements/oneTBB/source/containers/concurrent_hash_map_cls.html&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;tbb::concurrent_hash_map&lt;/code&gt;&lt;/a&gt;
and &lt;a href=&quot;https://github.com/greg7mdp/gtl/blob/main/docs/phmap.md&quot;&gt;&lt;code&gt;gtl::parallel_flat_hash_map&lt;/code&gt;&lt;/a&gt;
for the following synthetic scenario:
&lt;i&gt;T&lt;/i&gt; threads concurrently perform &lt;i&gt;N&lt;/i&gt; operations &lt;b&gt;update&lt;/b&gt;, &lt;b&gt;successful lookup&lt;/b&gt;
and &lt;b&gt;unsuccessful lookup&lt;/b&gt;, randomly chosen with probabilities 10%, 45% and 45%, respectively,
on a concurrent map of (&lt;code&gt;int&lt;/code&gt;, &lt;code&gt;int&lt;/code&gt;) pairs.
The keys used by all operations are also random, where &lt;b&gt;update&lt;/b&gt; and &lt;b&gt;successful lookup&lt;/b&gt; follow a
&lt;a href=&quot;https://en.wikipedia.org/wiki/Zipf%27s_law#Formal_definition&quot; rel=&quot;nofollow&quot;&gt;Zipf distribution&lt;/a&gt; over [1, &lt;i&gt;N&lt;/i&gt;/10]
with skew exponent &lt;i&gt;s&lt;/i&gt;, and &lt;b&gt;unsuccessful lookup&lt;/b&gt; follows a Zipf distribution
with the same skew &lt;i&gt;s&lt;/i&gt; over an interval not overlapping with the former.&lt;/p&gt;
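For reference, a Zipf distribution with skew <i>s</i> over [1, <i>n</i>] assigns key <i>k</i> a probability proportional to 1/<i>k</i><sup><i>s</i></sup>; a naïve inverse-CDF sampler looks like this (our own sketch, not the generator used in the benchmark repository):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Build the cumulative distribution of a Zipf law over [1, n] with skew s:
// P(k) proportional to 1/k^s.
std::vector<double> zipf_cdf(int n, double s) {
  std::vector<double> cdf(n);
  double sum = 0.0;
  for (int k = 1; k <= n; ++k) sum += 1.0 / std::pow(k, s);
  double acc = 0.0;
  for (int k = 1; k <= n; ++k) {
    acc += 1.0 / std::pow(k, s) / sum;  // normalized weight of key k
    cdf[k - 1] = acc;
  }
  return cdf;
}

// Map a uniform u in [0, 1) to a key: the smallest k with cdf[k-1] >= u.
int zipf_sample(const std::vector<double>& cdf, double u) {
  return int(std::lower_bound(cdf.begin(), cdf.end(), u) - cdf.begin()) + 1;
}
```

Higher skew concentrates the probability mass on the first few keys, which is precisely what stresses lock granularity in the high-skew benchmark runs below.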
&lt;p dir=&quot;auto&quot;&gt;We provide the full benchmark code and results for different 64- and 32-bit architectures in a
&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_concurrent_flat_map&quot;&gt;dedicated repository&lt;/a&gt;;
here, we just show as an example the plots for Visual Studio 2022 in x64 mode on an
AMD Ryzen 5 3600 6-Core @ 3.60 GHz with hyperthreading disabled and 64 GB of RAM.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWafV9y6Ty-jtfoVHdezLwz6GVeyRq6xS5MlVJwdtnhOSgHUOsfYrPL-Rm8PRWD1glgeKX3CD0zTIlIiDScwkYxrm1R5EiqlepLdCBacSzQDRcqf1D4ZgQj9x8AL794xGCC0Wf0XVHHq0moqtSt89ap6eqOAqTgw1ELz2Lgsu1V9cJ0vKYJzbgvj8yCHQ/s698/Parallel%20workload.xlsx.500k,%200.01.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgWafV9y6Ty-jtfoVHdezLwz6GVeyRq6xS5MlVJwdtnhOSgHUOsfYrPL-Rm8PRWD1glgeKX3CD0zTIlIiDScwkYxrm1R5EiqlepLdCBacSzQDRcqf1D4ZgQj9x8AL794xGCC0Wf0XVHHq0moqtSt89ap6eqOAqTgw1ELz2Lgsu1V9cJ0vKYJzbgvj8yCHQ/w200-h129/Parallel%20workload.xlsx.500k,%200.01.png&quot; style=&quot;max-width: 100%;&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRgzObgcq-_Ji3rF2TKCd2kKqghItn9JDq011zt6TaGVkRnrA1b7n8tZENJ9aEMmdFJuQ7L3_il71JkcCSQr09l4T0aDsr4K_LsNTiWOVUB-nR5WYuq2HYzcOBdJzi0p7gsFRW1q-Xs8zOIlDt_mIs7W5FlyMxNBK8ywci0yXDOhlaXf3ItHbYyp6y4as/s698/Parallel%20workload.xlsx.500k,%200.5.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRgzObgcq-_Ji3rF2TKCd2kKqghItn9JDq011zt6TaGVkRnrA1b7n8tZENJ9aEMmdFJuQ7L3_il71JkcCSQr09l4T0aDsr4K_LsNTiWOVUB-nR5WYuq2HYzcOBdJzi0p7gsFRW1q-Xs8zOIlDt_mIs7W5FlyMxNBK8ywci0yXDOhlaXf3ItHbYyp6y4as/w200-h129/Parallel%20workload.xlsx.500k,%200.5.png&quot; style=&quot;max-width: 100%;&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZiMciXCLSSNoNiNMDwxnGETmO9E7nYYqqzy9i0tY5yAUSzaqwh_OaBWpTLqK6yzQ8SaQiU5ZbdskiKRLFDqgQuTG2KiOVJPIshDf0-ptvabsCQ6KT1i7ahHkZZWOrWSSeAoGSHvaFX-AaIr-N9QcsK0MnQeRqzu05wAn7uHv_hA9AcHArvHUbdQ9d3_0/s698/Parallel%20workload.xlsx.500k,%200.99.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiZiMciXCLSSNoNiNMDwxnGETmO9E7nYYqqzy9i0tY5yAUSzaqwh_OaBWpTLqK6yzQ8SaQiU5ZbdskiKRLFDqgQuTG2KiOVJPIshDf0-ptvabsCQ6KT1i7ahHkZZWOrWSSeAoGSHvaFX-AaIr-N9QcsK0MnQeRqzu05wAn7uHv_hA9AcHArvHUbdQ9d3_0/w200-h129/Parallel%20workload.xlsx.500k,%200.99.png&quot; style=&quot;max-width: 100%;&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;500k updates, 4.5M lookups&lt;br /&gt;skew=0.01&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;500k updates, 4.5M lookups&lt;br /&gt;skew=0.5&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;500k updates, 4.5M lookups&lt;br /&gt;skew=0.99&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWtuJ4mMtBuI8fl0MfuQFFGRsMsuobAvxilnZ_TiUYWMtL9WnMgVbpFyNm7Mp24AjjXbK0yhwJ4FoAq4WW7amKiEMW-ftzJPBsQ6v2GSsiMhbC1mHuwEUEDnBHJjZQXuC8aKCRIve6IpZWvp0UH9AfHfTb9ZrhKY_6Sj-3OnzWc-WpsGUFrYsuLQjfzuw/s698/Parallel%20workload.xlsx.5M,%200.01.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhWtuJ4mMtBuI8fl0MfuQFFGRsMsuobAvxilnZ_TiUYWMtL9WnMgVbpFyNm7Mp24AjjXbK0yhwJ4FoAq4WW7amKiEMW-ftzJPBsQ6v2GSsiMhbC1mHuwEUEDnBHJjZQXuC8aKCRIve6IpZWvp0UH9AfHfTb9ZrhKY_6Sj-3OnzWc-WpsGUFrYsuLQjfzuw/s320/Parallel%20workload.xlsx.5M,%200.01.png&quot; style=&quot;max-width: 100%;&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9hNnnSQsMyXv0VtgW9BFmmqQavv8m8ooiSmrUhhAwSDwvoWxb-dap4rR1Oj8ZDM7ODFBFB5ITHN84u9kD1929XTleQRY55LtdkhCziEA_p0p0i6VI0IFYlWnAZmzNHD8c_4nGPfYkPbnYauye-zc8fMT9HSqMKvaa28ezWiQ6mmQPUfaF7ttvj6rYYgQ/s698/Parallel%20workload.xlsx.5M,%200.5.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9hNnnSQsMyXv0VtgW9BFmmqQavv8m8ooiSmrUhhAwSDwvoWxb-dap4rR1Oj8ZDM7ODFBFB5ITHN84u9kD1929XTleQRY55LtdkhCziEA_p0p0i6VI0IFYlWnAZmzNHD8c_4nGPfYkPbnYauye-zc8fMT9HSqMKvaa28ezWiQ6mmQPUfaF7ttvj6rYYgQ/s320/Parallel%20workload.xlsx.5M,%200.5.png&quot; style=&quot;max-width: 100%;&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYsHan-Z-IYPffC6TEPPYFG-6C-taBuBo-UAEFOKFNrDoKj5L--STowUp9c7QeokmpeEj8eiQSTg_drKj-RckuVeuBhQM5B9E6Yc4pO5NlgCZAaNItENbi2p7iRkbEY-td8bzIwFPxSLZEnzJiqWMrFRyBgk1sVBPXPSI-gHbjTM3KDPt5IyjfDKih9a0/s698/Parallel%20workload.xlsx.5M,%200.99.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiYsHan-Z-IYPffC6TEPPYFG-6C-taBuBo-UAEFOKFNrDoKj5L--STowUp9c7QeokmpeEj8eiQSTg_drKj-RckuVeuBhQM5B9E6Yc4pO5NlgCZAaNItENbi2p7iRkbEY-td8bzIwFPxSLZEnzJiqWMrFRyBgk1sVBPXPSI-gHbjTM3KDPt5IyjfDKih9a0/s320/Parallel%20workload.xlsx.5M,%200.99.png&quot; style=&quot;max-width: 100%;&quot; width=&quot;180&quot; /&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;5M updates, 45M lookups&lt;br /&gt;skew=0.01&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;5M updates, 45M lookups&lt;br /&gt;skew=0.5&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;5M updates, 45M lookups&lt;br /&gt;skew=0.99&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;/p&gt;&lt;p dir=&quot;auto&quot;&gt;Note that, for the scenario with 500k updates, &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt;
continues to improve after the number of threads exceeds the number of cores (6),
a phenomenon for which we don&#39;t have a ready explanation: we could hypothesize
that execution is limited by memory latency, but the behavior does
not reproduce in the scenario with 5M updates, where the cache miss ratio is
necessarily higher. Note also that &lt;code&gt;gtl::parallel_flat_hash_map&lt;/code&gt; performs
comparatively worse for high-skew scenarios where the load is concentrated on
a very small number of keys: this may be due to &lt;code&gt;gtl::parallel_flat_hash_map&lt;/code&gt;
having a much coarser lock granularity (256 shards in the configuration used) than
the other two containers.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;In general, results are very dependent on the particular CPU and memory system used;
you are welcome to try out the benchmark in your architecture of interest and
report back.&lt;/p&gt;
&lt;a name=&quot;conclusions-and-next-steps&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Conclusions and next steps&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; is a new, general-purpose concurrent hashmap that
leverages the very performant open-addressing techniques of &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;
and provides a fully thread-safe, iterator-free API we hope future users will
find flexible and convenient.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;We are considering a number of new functionalities for upcoming releases:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;As &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; and &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; basically
share the same data layout, it&#39;s possible to efficiently implement move
construction from one to another by simply transferring the internal structure.
There are scenarios where this feature can lead to more performant
execution, like, for instance, multithreaded population of a
&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt; followed by single- or multithreaded
read-only lookup on a &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; move-constructed from
the former.&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;https://dl.acm.org/doi/pdf/10.1145/3552326.3587457&quot; rel=&quot;nofollow&quot;&gt;DRAMHiT&lt;/a&gt; shows
that pipelining/batching several map operations on the same thread
in combination with heavy memory prefetching can reduce or eliminate
waiting CPU cycles. We have conducted some preliminary experiments
using this idea for a feature we dubbed &lt;i&gt;bulk lookup&lt;/i&gt;
(providing an array of keys to look for at once), with promising results.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;We&#39;re launching this new container with trepidation: we cannot possibly
try the vast array of different CPU architectures and scenarios
where concurrent hashmaps are used, and we don&#39;t yet have field data on
the suitability of the novel API we&#39;re proposing for
&lt;code&gt;boost::concurrent_flat_map&lt;/code&gt;. For these reasons, your feedback
and proposals for improvement are most welcome.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/1771705204450027343/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2023/07/inside-boostconcurrentflatmap.html#comment-form' title='2 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/1771705204450027343'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/1771705204450027343'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2023/07/inside-boostconcurrentflatmap.html' title='Inside &lt;code&gt;boost::concurrent_flat_map&lt;/code&gt;'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhgqQsUWKZyjAuhnZkvVV6DvZQ6ytlHWwT-uo9jSonjlpvFeK0pFu8Lkn-Awvr2n0c8BSQqD_tyPxjTTW_ybWxbFMc-nyUF_MfehkQ4es7pVwyH_GIws29indL8Nzsjiu67qdJSROpd7Di4eVgx9reLqZBrRFFLf8CNjFk03nMVyH7Ze6mdiUAfvFzFN3E/s72-w600-c/data_structure.png" height="72" width="72"/><thr:total>2</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-6276904501422653074</id><published>2022-11-18T13:23:00.004+01:00</published><updated>2025-04-30T09:53:59.636+02:00</updated><title type='text'>Inside boost::unordered_flat_map</title><content type='html'>&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;a 
href=&quot;#introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#the-case-for-open-addressing&quot;&gt;The case for open addressing&lt;/a&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;a href=&quot;#simd-accelerated-lookup&quot;&gt;SIMD-accelerated lookup&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#boostunordered_flat_map-data-structure&quot;&gt;&lt;code&gt;boost::unordered_flat_map&lt;/code&gt; data structure&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#rehashing&quot;&gt;Rehashing&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#hash-post-mixing&quot;&gt;Hash post-mixing&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#statistical-properties-of-boostunordered_flat_map&quot;&gt;Statistical properties of &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#benchmarks&quot;&gt;Benchmarks&lt;/a&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;a href=&quot;#running-n-plots&quot;&gt;Running-&lt;i&gt;n&lt;/i&gt; plots&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#aggregate-performance&quot;&gt;Aggregate performance&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#deviations-from-the-standard&quot;&gt;Deviations from the standard&lt;/a&gt;&lt;/li&gt;&lt;li&gt;&lt;a href=&quot;#conclusions-and-next-steps&quot;&gt;Conclusions and next steps&lt;/a&gt;&lt;/li&gt;&lt;/ul&gt;
&lt;a name=&quot;introduction&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Introduction&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;Starting in Boost 1.81 (December 2022), &lt;a href=&quot;https://www.boost.org/doc/libs/release/libs/unordered/&quot; rel=&quot;nofollow&quot;&gt;Boost.Unordered&lt;/a&gt;
provides, in addition to its previous implementations of C++ unordered associative containers,
the new containers &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; and &lt;code&gt;boost::unordered_flat_set&lt;/code&gt; (for the sake
of brevity, we will refer only to the former in the remainder of this article).
If &lt;code&gt;boost::unordered_map&lt;/code&gt; strictly adheres to the C++ specification for &lt;code&gt;std::unordered_map&lt;/code&gt;,
&lt;code&gt;boost::unordered_flat_map&lt;/code&gt; deviates in a number of ways from the standard
to offer dramatic performance improvements in exchange; in fact, &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;
ranks amongst the fastest hash containers currently available to C++ users.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;We describe the internal structure of &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; and provide
theoretical analyses and benchmarking data to help readers gain insights into
the key design elements behind this container&#39;s excellent performance.
Interface and behavioral differences with the standard are also discussed.&lt;/p&gt;
&lt;a name=&quot;the-case-for-open-addressing&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;The case for open addressing&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;We have &lt;a href=&quot;https://bannalia.blogspot.com/2022/06/advancing-state-of-art-for.html&quot; rel=&quot;nofollow&quot;&gt;previously discussed&lt;/a&gt; why
&lt;i&gt;closed addressing&lt;/i&gt; was chosen back in 2003 as the implicit layout for &lt;code&gt;std::unordered_map&lt;/code&gt;.
Twenty years later, &lt;a href=&quot;https://en.wikipedia.org/wiki/Hash_table#Open_addressing&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;open addressing&lt;/i&gt;&lt;/a&gt;
techniques have taken the lead in terms of performance, and the fastest hash containers in the market
all rely on some variation of open addressing, even if that means that some deviations have to be introduced
from the baseline interface of &lt;code&gt;std::unordered_map&lt;/code&gt;.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;The defining aspect of open addressing is that elements are stored directly within the
bucket array (as opposed to closed addressing, where multiple elements can be held in the same
bucket, usually by means of a linked list of nodes). In modern CPU architectures, this layout
is extremely cache friendly:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;There&#39;s no indirection needed to go from the bucket position to the element contained.&lt;/li&gt;&lt;li&gt;Buckets are stored contiguously in memory, which improves cache locality.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;The main technical challenge introduced by open addressing is what to do when elements
are mapped into the same bucket, i.e. when a &lt;i&gt;collision&lt;/i&gt; happens: in fact, all open-addressing
variations are basically characterized by their collision management techniques.
We can divide these techniques into two broad classes:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;b&gt;Non-relocating:&lt;/b&gt; if an element is mapped to an occupied bucket, a &lt;i&gt;probing sequence&lt;/i&gt; is
started from that position until a vacant bucket is located, and the element is inserted
there &lt;i&gt;permanently&lt;/i&gt; (except, of course, if the element is deleted or if the bucket array is grown and elements &lt;i&gt;rehashed&lt;/i&gt;).
Popular probing mechanisms are &lt;i&gt;linear probing&lt;/i&gt; (buckets inspected at regular intervals),
&lt;i&gt;quadratic probing&lt;/i&gt; and &lt;a href=&quot;https://en.wikipedia.org/wiki/Double_hashing&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;double hashing&lt;/i&gt;&lt;/a&gt;.
There is a tradeoff between cache locality, which is better when the buckets probed are close
to each other, and &lt;i&gt;average probe length&lt;/i&gt; (the expected number of buckets probed until a
vacant one is located), which grows larger (worse) precisely when probed buckets
are close —elements tend to form clusters instead of spreading uniformly throughout the bucket
array.&lt;/li&gt;&lt;li&gt;&lt;b&gt;Relocating:&lt;/b&gt; as part of the search process for a vacant bucket, elements can be
moved from their position to make room for the new element. This is done in order
to improve cache locality by keeping elements close to their &quot;natural&quot; location
(that indicated by the hash → bucket mapping). Well known relocating algorithms are
&lt;a href=&quot;https://en.wikipedia.org/wiki/Cuckoo_hashing&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;cuckoo hashing&lt;/i&gt;&lt;/a&gt;,
&lt;a href=&quot;https://en.wikipedia.org/wiki/Hopscotch_hashing&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;hopscotch hashing&lt;/i&gt;&lt;/a&gt; and
&lt;a href=&quot;https://en.wikipedia.org/wiki/Hash_table#Robin_Hood_hashing&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;Robin Hood hashing&lt;/i&gt;&lt;/a&gt;.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;If we take it as an important consideration to stay reasonably close to the original behavior
of &lt;code&gt;std::unordered_map&lt;/code&gt;, relocating techniques pose the problem that &lt;code&gt;insert&lt;/code&gt; may invalidate
iterators to other elements (so, they work more like &lt;code&gt;std::vector::insert&lt;/code&gt;).&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;On the other hand, non-relocating open addressing faces issues on deletion: lookup
starts at the original hash → bucket position and then keeps probing till the element is found
&lt;i&gt;or probing terminates&lt;/i&gt;, which is signalled by the presence of a vacant bucket:&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCm7Mjoizsg_qsNOTa1nU3YIjPDdarUGBnuxC9eiMMXtR4zWnnWXLQLE3RGgXN203SLJcIJfGM7a25uOLapGYJtmcOIeU8yXkrhDM1bK1LmxYESoY_fPohrGtPQuggjHgivzVWDVfqND1ZvYv1MSBexyBMyZFFyN7VPsRt5opZfLlo1m2rZyLEpQ0_/s484/probe.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;62&quot; data-original-width=&quot;484&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCm7Mjoizsg_qsNOTa1nU3YIjPDdarUGBnuxC9eiMMXtR4zWnnWXLQLE3RGgXN203SLJcIJfGM7a25uOLapGYJtmcOIeU8yXkrhDM1bK1LmxYESoY_fPohrGtPQuggjHgivzVWDVfqND1ZvYv1MSBexyBMyZFFyN7VPsRt5opZfLlo1m2rZyLEpQ0_/s16000/probe.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;So, erasing an element can&#39;t just restore its holding bucket as vacant, since that would preclude
lookup from reaching elements further down the probe sequence:&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_UsPi-cCOsXgja14-LV4lFFY6tKVIq2XB_gB94qhDpZleeQh4fV7M02HgW72oGTFdjWxDbPSqgq65TkP9mY9KAz1OhQTcnDOE6k89eoHQucMK1vVmBoTi71pxyXm1GJlqGOzeTCUAcPllFUWdMvbXUkiT--s2drQO9NSJQITkhaI6aBkB5vhTvC89/s484/probe_interrupted.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;62&quot; data-original-width=&quot;484&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi_UsPi-cCOsXgja14-LV4lFFY6tKVIq2XB_gB94qhDpZleeQh4fV7M02HgW72oGTFdjWxDbPSqgq65TkP9mY9KAz1OhQTcnDOE6k89eoHQucMK1vVmBoTi71pxyXm1GJlqGOzeTCUAcPllFUWdMvbXUkiT--s2drQO9NSJQITkhaI6aBkB5vhTvC89/s16000/probe_interrupted.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;A common technique to deal with this problem is to label buckets previously containing an
element with a &lt;i&gt;tombstone&lt;/i&gt; marker: tombstones are good for inserting new elements but do not
stop probing on lookup:&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwq8kJr-DhGz29OqD_6PBYRQ5AIHa3x3_B3ZqWA09HRs6hKfp8pC46WjUAIbrMcMB5SU9hvfXW2G2h8ui8IcEY72fhNljgDgJx_VPKi3ZmWKQ7vDAzfD07sMrqF78XiV7MDH_LbAJDyVxq655gqA5Td1RJVQV1LWoUVbFkhIHB6GEwDCo0Cc5E3SMs/s484/probe_tombstone.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;62&quot; data-original-width=&quot;484&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgwq8kJr-DhGz29OqD_6PBYRQ5AIHa3x3_B3ZqWA09HRs6hKfp8pC46WjUAIbrMcMB5SU9hvfXW2G2h8ui8IcEY72fhNljgDgJx_VPKi3ZmWKQ7vDAzfD07sMrqF78XiV7MDH_LbAJDyVxq655gqA5Td1RJVQV1LWoUVbFkhIHB6GEwDCo0Cc5E3SMs/s16000/probe_tombstone.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;Note that the introduction of tombstones implies that the average lookup probe length of the
container won&#39;t decrease on deletion —again, special measures can be taken to counter this.&lt;/p&gt;
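&lt;p dir=&quot;auto&quot;&gt;To make the probing and tombstone mechanics concrete, here is a minimal C++ sketch of a non-relocating open-addressing set with linear probing and tombstones (an illustration of the general technique, not code from any of the libraries discussed; it has a fixed capacity and no rehashing):&lt;/p&gt;

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Minimal non-relocating open-addressing set with linear probing and
// tombstones. Illustrative only: fixed capacity, no rehashing, int keys.
class ProbingSet {
  enum class State { Empty, Occupied, Tombstone };
  struct Slot { State state = State::Empty; int value = 0; };
  std::vector<Slot> slots_;

  std::size_t bucket(int v) const { return std::hash<int>{}(v) % slots_.size(); }

public:
  explicit ProbingSet(std::size_t capacity) : slots_(capacity) {}

  bool insert(int v) {
    std::size_t pos = bucket(v);
    std::size_t first_free = slots_.size(); // sentinel: no free slot seen yet
    for (std::size_t i = 0; i < slots_.size(); ++i, pos = (pos + 1) % slots_.size()) {
      Slot& s = slots_[pos];
      if (s.state == State::Occupied) {
        if (s.value == v) return false;                    // already present
      } else {
        if (first_free == slots_.size()) first_free = pos; // may reuse a tombstone
        if (s.state == State::Empty) break;                // vacant bucket ends probing
      }
    }
    if (first_free == slots_.size()) return false;         // table full
    slots_[first_free].state = State::Occupied;
    slots_[first_free].value = v;
    return true;
  }

  bool contains(int v) const {
    std::size_t pos = bucket(v);
    for (std::size_t i = 0; i < slots_.size(); ++i, pos = (pos + 1) % slots_.size()) {
      const Slot& s = slots_[pos];
      if (s.state == State::Empty) return false;           // vacant bucket ends probing
      if (s.state == State::Occupied && s.value == v) return true;
      // tombstones do NOT stop probing
    }
    return false;
  }

  bool erase(int v) {
    std::size_t pos = bucket(v);
    for (std::size_t i = 0; i < slots_.size(); ++i, pos = (pos + 1) % slots_.size()) {
      Slot& s = slots_[pos];
      if (s.state == State::Empty) return false;
      if (s.state == State::Occupied && s.value == v) {
        s.state = State::Tombstone;                        // keep probe sequences intact
        return true;
      }
    }
    return false;
  }
};
```

&lt;p dir=&quot;auto&quot;&gt;Note how &lt;code&gt;erase&lt;/code&gt; only downgrades the slot to a tombstone: &lt;code&gt;contains&lt;/code&gt; keeps probing past it, so elements further down the probe sequence remain reachable, at the cost of probe lengths that never shrink.&lt;/p&gt;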
&lt;a name=&quot;simd-accelerated-lookup&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;SIMD-accelerated lookup&lt;/b&gt;&lt;/span&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;SIMD technologies, such as &lt;a href=&quot;https://en.wikipedia.org/wiki/SSE2&quot; rel=&quot;nofollow&quot;&gt;SSE2&lt;/a&gt; and
&lt;a href=&quot;https://en.wikipedia.org/wiki/ARM_architecture_family#Advanced_SIMD_(Neon)&quot; rel=&quot;nofollow&quot;&gt;Neon&lt;/a&gt;,
provide advanced CPU instructions for parallel arithmetic and logical operations
on groups of contiguous data values: for instance, SSE2 &lt;a href=&quot;https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_cmpeq_epi8&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;_mm_cmpeq_epi8&lt;/code&gt;&lt;/a&gt; takes two packs of 16 bytes and compares
them for equality &lt;i&gt;pointwise&lt;/i&gt;, returning the result as another pack of bytes. Although
SIMD was originally meant for acceleration of multimedia processing applications,
the implementors of some unordered containers, notably Google&#39;s
&lt;a href=&quot;https://abseil.io/about/design/swisstables&quot; rel=&quot;nofollow&quot;&gt;Abseil&#39;s Swiss tables&lt;/a&gt; and
Meta&#39;s &lt;a href=&quot;https://engineering.fb.com/2019/04/25/developer-tools/f14/&quot; rel=&quot;nofollow&quot;&gt;F14&lt;/a&gt;, realized
they could leverage this technology to improve lookup times in hash tables.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;The key idea is to maintain, in addition to the bucket array itself, a separate &lt;i&gt;metadata
array&lt;/i&gt; holding &lt;i&gt;reduced hash values&lt;/i&gt; (usually one byte in size)
obtained from the hash values of the elements stored in the corresponding buckets.
When looking up for an element, SIMD can be used on a pack of contiguous reduced
hash values to quickly discard non-matching buckets and move on to full comparison
for matching positions. This technique effectively checks a moderate number
of buckets (16 for Abseil, 14 for F14) in constant time. Another beneficial effect
of this approach is that special bucket markers (vacant, tombstone, etc.) can be
moved to the metadata array —otherwise, these markers would take up extra space in
the bucket itself, or else some representation values of the elements would have
to be restricted from user code and reserved for marking purposes.&lt;/p&gt;
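&lt;p dir=&quot;auto&quot;&gt;The core SIMD trick fits in a few lines (an illustration of the technique, not code from either library): compare 16 metadata bytes against a reduced hash value in one shot and read out a bitmask of candidate buckets.&lt;/p&gt;

```cpp
#include <cstdint>
#include <emmintrin.h> // SSE2 intrinsics (x86/x86-64 only)

// Returns a 16-bit mask with bit i set iff metadata[i] == reduced_hash.
// This is the essence of SIMD-accelerated lookup: 16 buckets screened
// with a couple of instructions instead of a byte-by-byte loop.
std::uint32_t match_bytes(const std::uint8_t metadata[16], std::uint8_t reduced_hash) {
  __m128i word    = _mm_loadu_si128(reinterpret_cast<const __m128i*>(metadata));
  __m128i pattern = _mm_set1_epi8(static_cast<char>(reduced_hash));
  __m128i eq      = _mm_cmpeq_epi8(word, pattern); // 0xFF where equal, 0x00 elsewhere
  return static_cast<std::uint32_t>(_mm_movemask_epi8(eq));
}
```

&lt;p dir=&quot;auto&quot;&gt;Matching against the empty marker with the same primitive is how a vacant bucket can be located for insertion.&lt;/p&gt;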
&lt;a name=&quot;boostunordered_flat_map-data-structure&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;&lt;code&gt;boost::unordered_flat_map&lt;/code&gt; data structure&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2oLcYKndxyhp0OW5b3xdoptzjKHjyLp_udDkmFb94SZzgpWPJqEUrad-unp_PNsrfKEkRQGapNWd3qxxzF8_s1bAEr4Rx4vKC2o9e-RxyBHwCeM7YIUALAHxMuOqr72kXWs-79J2lzc27B2op7-hawdTOrTfOoOB_c-TjEidw2pUvs3Es7btmNS29/s935/data_structure.png&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;124&quot; data-original-width=&quot;935&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2oLcYKndxyhp0OW5b3xdoptzjKHjyLp_udDkmFb94SZzgpWPJqEUrad-unp_PNsrfKEkRQGapNWd3qxxzF8_s1bAEr4Rx4vKC2o9e-RxyBHwCeM7YIUALAHxMuOqr72kXWs-79J2lzc27B2op7-hawdTOrTfOoOB_c-TjEidw2pUvs3Es7btmNS29/w600/data_structure.png&quot; width=&quot;600&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;br /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;&lt;code&gt;boost::unordered_flat_map&lt;/code&gt;&#39;s bucket array is logically split into 2&lt;sup&gt;&lt;i&gt;n&lt;/i&gt;&lt;/sup&gt;
groups of &lt;i&gt;N&lt;/i&gt; = 15 buckets, and has a companion metadata array consisting of
2&lt;sup&gt;&lt;i&gt;n&lt;/i&gt;&lt;/sup&gt; 16-byte words. Hash mapping is done at the group level rather than
on individual buckets: so, to insert an element with hash value &lt;i&gt;h&lt;/i&gt;, the group
at position &lt;i&gt;h&lt;/i&gt; / 2&lt;sup&gt;&lt;i&gt;W&lt;/i&gt; − &lt;i&gt;n&lt;/i&gt;&lt;/sup&gt; is selected and its first available bucket
used (&lt;i&gt;W&lt;/i&gt; is 64 or 32 depending on whether the CPU architecture is 64- or 32-bit,
respectively); if the group is full, further groups are checked using a quadratic
probing sequence.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;The associated metadata is organized as follows (least significant byte depicted
rightmost):&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRBJiNHDjc3rW42atOBgkc2HL8zJ7gif-HzCENLnxIdvU8Fi6-RuA-8dcj-ZtCM8H13CiKZH6e2xkq7EoEqNGiDTFGyXNldDOGJKxBQOk9X6G40DxKwQpzWypY0BCFBRPSDI3O0HZ1Gfya33hegTrJuHOyopSYlrWM3aAcNU8-JsJCyyRrCoDtGBm9/s550/metadata.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;188&quot; data-original-width=&quot;550&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhRBJiNHDjc3rW42atOBgkc2HL8zJ7gif-HzCENLnxIdvU8Fi6-RuA-8dcj-ZtCM8H13CiKZH6e2xkq7EoEqNGiDTFGyXNldDOGJKxBQOk9X6G40DxKwQpzWypY0BCFBRPSDI3O0HZ1Gfya33hegTrJuHOyopSYlrWM3aAcNU8-JsJCyyRrCoDtGBm9/s16000/metadata.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;&lt;i&gt;h&lt;/i&gt;&lt;sub&gt;&lt;i&gt;i&lt;/i&gt;&lt;/sub&gt; holds information about the &lt;i&gt;i&lt;/i&gt;-th bucket of the group:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;0 if the bucket is empty,&lt;/li&gt;&lt;li&gt;1 to signal a &lt;i&gt;sentinel&lt;/i&gt; (a special value at the end of the bucket array used to
finish container iteration),&lt;/li&gt;&lt;li&gt;otherwise, a reduced hash value in the range [2, 255] obtained from the least
significant byte of the element&#39;s hash value.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;When looking up within a group for an element with hash value &lt;i&gt;h&lt;/i&gt;, SIMD operations,
if available, are used to match the reduced value of &lt;i&gt;h&lt;/i&gt; against the pack of
values {&lt;i&gt;h&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt;, &lt;i&gt;h&lt;/i&gt;&lt;sub&gt;1&lt;/sub&gt;, ... , &lt;i&gt;h&lt;/i&gt;&lt;sub&gt;14&lt;/sub&gt;}. Locating
an empty bucket for insertion is equivalent to matching for 0.&lt;/p&gt;
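&lt;p dir=&quot;auto&quot;&gt;A minimal sketch of the hash → group mapping and the reduced hash computation just described (the particular remapping of the reserved values 0 and 1 is our own illustration and need not match Boost.Unordered&#39;s internals):&lt;/p&gt;

```cpp
#include <cstddef>
#include <cstdint>

// Group selection: with 2^n groups and W-bit hash values, the group for
// hash value h is h / 2^(W-n), i.e. the top n bits of h (here W = 64,
// and n >= 1 to keep the shift amount valid).
std::size_t group_index(std::uint64_t h, unsigned n) {
  constexpr unsigned W = 64;
  return static_cast<std::size_t>(h >> (W - n));
}

// Reduced hash value in [2, 255] from the least significant byte of h.
// 0 and 1 are reserved (empty / sentinel); this particular remapping of
// the two reserved values is illustrative only.
std::uint8_t reduced_hash(std::uint64_t h) {
  std::uint8_t b = static_cast<std::uint8_t>(h);
  return b < 2 ? static_cast<std::uint8_t>(b + 2) : b;
}
```

&lt;p dir=&quot;auto&quot;&gt;Using the top bits for group selection means the low byte reused for the reduced hash value carries information largely independent of the group position.&lt;/p&gt;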
&lt;p dir=&quot;auto&quot;&gt;&lt;i&gt;ofw&lt;/i&gt; is the so-called &lt;i&gt;overflow byte&lt;/i&gt;: when inserting an element with hash value &lt;i&gt;h&lt;/i&gt;,
if the group is full then the (&lt;i&gt;h&lt;/i&gt; mod 8)-th bit of &lt;i&gt;ofw&lt;/i&gt; is set to 1 before
moving to the next group in the probing sequence. Lookup probing can then terminate
when the corresponding overflow bit is 0. Note that this procedure removes the need
to use tombstones.&lt;/p&gt;
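&lt;p dir=&quot;auto&quot;&gt;The overflow-byte bookkeeping reduces to two bit operations, sketched here outside of any container (illustrative, not Boost.Unordered&#39;s actual code):&lt;/p&gt;

```cpp
#include <cstdint>

// When insertion finds the group full, set bit (h mod 8) of the group's
// overflow byte before moving to the next group in the probing sequence.
void mark_overflow(std::uint8_t& ofw, std::uint64_t h) {
  ofw |= static_cast<std::uint8_t>(1u << (h % 8));
}

// During lookup, probing can terminate at this group when the overflow
// bit for h is 0: no element with the same h mod 8 ever overflowed here.
bool may_have_overflowed(std::uint8_t ofw, std::uint64_t h) {
  return (ofw & (1u << (h % 8))) != 0;
}
```

&lt;p dir=&quot;auto&quot;&gt;Since only &lt;i&gt;h&lt;/i&gt; mod 8 is recorded, the byte can produce false positives (probing continues needlessly) but never false negatives, which is exactly the contract a probe-termination test needs.&lt;/p&gt;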
&lt;p dir=&quot;auto&quot;&gt;If neither SSE2 nor Neon is available on the target architecture, the logical
organization of metadata stays the same, but information is mapped to two physical
64-bit words using &lt;i&gt;bit interleaving&lt;/i&gt; as shown in the figure:&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJISrbaZdbPuhdt62zJsqEtrcpkKg_QvBgLgziJu9GWy7lm2G3P_RSMV2oExl90Fk_d9w-Lt_00Rce_0OL_3QWd1SQXqOdl4Flc8ODvGSB4WTL6GXAuXJcplFittqFRKhgi1NBhz5mO6o6VUgLSmKyMQRNb65QplSg6tvzdiTYdQmsMKrTfnNOS-rR/s560/metadata_interleaving.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;140&quot; data-original-width=&quot;560&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJISrbaZdbPuhdt62zJsqEtrcpkKg_QvBgLgziJu9GWy7lm2G3P_RSMV2oExl90Fk_d9w-Lt_00Rce_0OL_3QWd1SQXqOdl4Flc8ODvGSB4WTL6GXAuXJcplFittqFRKhgi1NBhz5mO6o6VUgLSmKyMQRNb65QplSg6tvzdiTYdQmsMKrTfnNOS-rR/s16000/metadata_interleaving.png&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;Bit interleaving allows for a reasonably fast implementation of matching operations
in the absence of SIMD.&lt;/p&gt;
&lt;a name=&quot;rehashing&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Rehashing&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;The maximum load factor of &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; is 0.875 and can&#39;t be changed
by the user. As discussed previously, non-relocating open addressing has the problem
that average probe length doesn&#39;t decrease on deletion when the erased elements
are in mid-sequence: so, continuously inserting and erasing elements without triggering
a rehash will slowly degrade the container&#39;s performance; we call this phenomenon
&lt;i&gt;drifting&lt;/i&gt;. &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; introduces the following anti-drift mechanism:
rehashing is controlled by the container&#39;s &lt;i&gt;maximum load&lt;/i&gt;, initially 0.875 times the
size of the bucket array; when erasing an element whose associated overflow bit is not
zero, the maximum load is decreased by one. Anti-drift guarantees that rehashing
will be eventually triggered in a scenario of repeated insertions and deletions.&lt;/p&gt;
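&lt;p dir=&quot;auto&quot;&gt;The anti-drift rule amounts to a small piece of bookkeeping in the erase path; here is a sketch under that reading, with member names that are ours rather than Boost.Unordered&#39;s:&lt;/p&gt;

```cpp
#include <cstddef>

// Sketch of anti-drift bookkeeping; names and exact trigger condition
// are illustrative assumptions, not Boost.Unordered internals.
struct AntiDrift {
  std::size_t size = 0; // current number of elements
  std::size_t max_load; // rehash threshold, initially 0.875 * bucket count

  explicit AntiDrift(std::size_t bucket_count)
    : max_load(bucket_count - bucket_count / 8) {} // 0.875 * bucket_count

  // Called when erasing an element whose associated overflow bit is set:
  // freeing that slot cannot shorten existing probe sequences, so the
  // rehash threshold is lowered by one instead.
  void on_erase(bool overflow_bit_set) {
    --size;
    if (overflow_bit_set && max_load > 0) --max_load;
  }

  // Insertion triggers a rehash once the size would exceed max_load;
  // under sustained insert/erase churn this is now guaranteed to happen.
  bool needs_rehash_on_insert() const { return size + 1 > max_load; }
};
```

&lt;p dir=&quot;auto&quot;&gt;The eventual rehash rebuilds the table from scratch, resetting both the accumulated overflow bits and the maximum load.&lt;/p&gt;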
&lt;a name=&quot;hash-post-mixing&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Hash post-mixing&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;It is well known that open-addressing containers require that the hash function be
of good quality, in the sense that close input values (for some natural notion of
closeness) are mapped to distant hash values. In particular, a hash function is
said to have the &lt;a href=&quot;https://en.wikipedia.org/wiki/Avalanche_effect&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;avalanching property&lt;/i&gt;&lt;/a&gt;
if flipping a bit in the physical representation of the input changes all bits of the
output value with probability 50%. Note that avalanching hash functions are extremely well
behaved, and less stringent behaviors are generally good enough in most open-addressing
scenarios.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;Being a general-purpose container, &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; does not impose any
condition on the user-provided hash function beyond what is required by the C++
standard for unordered associative containers. In order to cope with poor-quality
hash functions (such as the identity for integral types), an automatic bit-mixing stage
is added to hash values:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;64-bit architectures: we use the &lt;code&gt;xmx&lt;/code&gt; function defined in Jon Maiga&#39;s
&lt;a href=&quot;http://jonkagstrom.com/bit-mixer-construction/index.html&quot; rel=&quot;nofollow&quot;&gt;&quot;The construct of a bit mixer&quot;&lt;/a&gt;.&lt;/li&gt;&lt;li&gt;32-bit architectures: the chosen mixer has been automatically generated by
&lt;a href=&quot;https://github.com/skeeto/hash-prospector&quot;&gt;Hash Function Prospector&lt;/a&gt; and selected as the
best overall performer in internal benchmarks. Score assigned by Hash Prospector: 333.7934929677524.&lt;/li&gt;&lt;/ul&gt;
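&lt;p dir=&quot;auto&quot;&gt;Bit mixers of this kind are short xor-shift-multiply pipelines. The sketch below uses the well-known finalizer of &lt;code&gt;splitmix64&lt;/code&gt; to illustrate the shape of such a stage; the constants and shifts of the actual &lt;code&gt;xmx&lt;/code&gt; mixer differ:&lt;/p&gt;

```cpp
#include <cstdint>

// Post-mixing stage of the same xor-shift-multiply family as xmx.
// Constants and shifts below are those of the splitmix64 finalizer
// (Vigna), shown for illustration; Boost.Unordered's mixer differs.
std::uint64_t mix(std::uint64_t x) {
  x ^= x >> 30;
  x *= 0xbf58476d1ce4e5b9ull;
  x ^= x >> 27;
  x *= 0x94d049bb133111ebull;
  x ^= x >> 31;
  return x;
}
```

&lt;p dir=&quot;auto&quot;&gt;Applied after a poor hash function such as the identity on integers, a stage like this spreads consecutive inputs across distant probe start positions; since every step is invertible, the mixer is a bijection and introduces no new collisions.&lt;/p&gt;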
&lt;p dir=&quot;auto&quot;&gt;There&#39;s an opt-out mechanism available to end users so that avalanching hash functions
can be marked as such and thus be used without post-mixing. In particular,
the specializations of &lt;a href=&quot;https://www.boost.org/doc/libs/release/libs/container_hash/&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;boost::hash&lt;/code&gt;&lt;/a&gt;
for string types are marked as avalanching.&lt;/p&gt;
&lt;a name=&quot;statistical-properties-of-boostunordered_flat_map&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Statistical properties of &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;We have written a &lt;a href=&quot;https://github.com/joaquintides/boost_unordered_flat_map_stats&quot;&gt;simulation program&lt;/a&gt;
to calculate some statistical properties
of &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; as compared with Abseil&#39;s &lt;code&gt;absl::flat_hash_map&lt;/code&gt;,
which is generally regarded as one of the fastest hash containers available.
For the purposes of this analysis, the main design characteristics of
&lt;code&gt;absl::flat_hash_map&lt;/code&gt; are:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;Bucket array sizes are of the form 2&lt;i&gt;&lt;sup&gt;n&lt;/sup&gt;&lt;/i&gt;, &lt;i&gt;n&lt;/i&gt; ≥ 4.&lt;/li&gt;&lt;li&gt;Hash mapping is done at the bucket level (rather than at the group
level as in &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;).&lt;/li&gt;&lt;li&gt;Metadata consists of one byte per bucket, where the most significant bit is set to 1
if the bucket is empty, deleted (tombstone) or a sentinel. The remaining 7 bits
hold the reduced hash value for occupied buckets.&lt;/li&gt;&lt;li&gt;Lookup/insertion uses SIMD to inspect the 16 contiguous buckets beginning at the
hash-mapped position, and then continues with further 16-bucket groups using
quadratic probing. Probing ends when a non-full group is found.
Note that the start positions of these groups are not aligned modulo 16.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;The figure shows:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;the probability that a randomly selected group is full,&lt;/li&gt;&lt;li&gt;the average number of hops (i.e. the average probe length minus one) for
successful and unsuccessful lookup&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;as functions of the load factor, with perfectly random input and without intervening
deletions. Solid line is &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;, dashed line is
&lt;code&gt;absl::flat_hash_map&lt;/code&gt;.&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiclPJEg-PekZgcegdboCcZmMwYx4C6PrYqxwdqJVhfPcRj1Hf6ozHivEcKiVt3T6-jpfJu7ugSJQMrQfb6jVwElWdQueN9GJuwRnQq2i_TibmIL5mAD8x6UhA0Flru5rT7wvhRvWaOYO3VCoF5fFYoEr7f4KE1ZsHDjuGpnn1bq_DqOB9gv3dVLYQB/s804/stats1.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;536&quot; data-original-width=&quot;804&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiclPJEg-PekZgcegdboCcZmMwYx4C6PrYqxwdqJVhfPcRj1Hf6ozHivEcKiVt3T6-jpfJu7ugSJQMrQfb6jVwElWdQueN9GJuwRnQq2i_TibmIL5mAD8x6UhA0Flru5rT7wvhRvWaOYO3VCoF5fFYoEr7f4KE1ZsHDjuGpnn1bq_DqOB9gv3dVLYQB/w500/stats1.png&quot; width=&quot;500&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;Some observations:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;i&gt;Pr&lt;/i&gt;(group is full) is higher for &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;. This follows
from the fact that free buckets cluster at the end of 15-aligned groups,
whereas for &lt;code&gt;absl::flat_hash_map&lt;/code&gt; free buckets are uniformly distributed across
the array, which increases the probability that a contiguous 16-bucket chunk
contains at least one free position. Consequently, &lt;i&gt;E&lt;/i&gt;(num hops) for successful
lookup is also higher in &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;By contrast, &lt;i&gt;E&lt;/i&gt;(num hops) for &lt;i&gt;unsuccessful&lt;/i&gt; lookup is considerably lower
in &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;: &lt;code&gt;absl::flat_hash_map&lt;/code&gt; uses an all-or-nothing
condition for probe termination (group is non-full/full), whereas
&lt;code&gt;boost::unordered_flat_map&lt;/code&gt; uses the 8 bits of information in the overflow byte
to allow for more finely-grained termination —effectively, making probe termination
~1.75 times more likely. The overflow byte acts as a sort of
&lt;a href=&quot;https://en.wikipedia.org/wiki/Bloom_filter&quot; rel=&quot;nofollow&quot;&gt;Bloom filter&lt;/a&gt; to check for probe
termination based on reduced hash value.&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;The next figure shows the average number of actual comparisons (i.e. when
the reduced hash value matched) for successful and unsuccessful lookup.
Again, the solid line corresponds to &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; and
the dashed line to &lt;code&gt;absl::flat_hash_map&lt;/code&gt;.&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYNDl36h_80BxAs3kOussDg1fzg9iDmU7F5HH4JM0nqyY-lX29osMs2H-aqMOkOeirbLGiyjttIQVGeFRVQcJIQOXDInqHfo1zJlUBf5NI8sGrujOxdj3QIYZ60-5Pl9rnXiYjOpUXBvDtmnCvE_zzngPFW8Aq_gfDUfWzCvH7Qc_vzamP5F-ktpFD/s804/stats2.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;536&quot; data-original-width=&quot;804&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjYNDl36h_80BxAs3kOussDg1fzg9iDmU7F5HH4JM0nqyY-lX29osMs2H-aqMOkOeirbLGiyjttIQVGeFRVQcJIQOXDInqHfo1zJlUBf5NI8sGrujOxdj3QIYZ60-5Pl9rnXiYjOpUXBvDtmnCvE_zzngPFW8Aq_gfDUfWzCvH7Qc_vzamP5F-ktpFD/w500/stats2.png&quot; width=&quot;500&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p dir=&quot;auto&quot;&gt;&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;&lt;i&gt;E&lt;/i&gt;(num cmps) is a function of:&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;&lt;i&gt;E&lt;/i&gt;(num hops) (lower better),&lt;/li&gt;&lt;li&gt;the size of the group (lower better),&lt;/li&gt;&lt;li&gt;the number of bits of the reduced hash value (higher better).&lt;/li&gt;&lt;/ul&gt;
&lt;p dir=&quot;auto&quot;&gt;We see then that &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; approaches &lt;code&gt;absl::flat_hash_map&lt;/code&gt; on
&lt;i&gt;E&lt;/i&gt;(num cmps) for successful lookup (1% higher or less), despite its poorer
&lt;i&gt;E&lt;/i&gt;(num hops) figures: this is so because &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;
uses smaller groups (15 vs. 16) and, most importantly, because its reduced
hash values contain log&lt;sub&gt;2&lt;/sub&gt;(254) = 7.99 bits vs. 7 bits in &lt;code&gt;absl::flat_hash_map&lt;/code&gt;,
and each additional bit in the reduced hash value roughly halves the number of
negative comparisons. In the case of &lt;i&gt;E&lt;/i&gt;(num cmps) for unsuccessful lookup,
&lt;code&gt;boost::unordered_flat_map&lt;/code&gt; figures are up to 3.2 times lower under
high-load conditions.&lt;/p&gt;
&lt;a name=&quot;benchmarks&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Benchmarks&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;a name=&quot;running-n-plots&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;&lt;b&gt;Running-&lt;i&gt;n&lt;/i&gt; plots&lt;/b&gt;&lt;/span&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;We have measured the execution times of &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; against
&lt;code&gt;absl::flat_hash_map&lt;/code&gt; and &lt;code&gt;boost::unordered_map&lt;/code&gt; for basic operations
(insertion, erasure during iteration, successful lookup, unsuccessful lookup) with
container size &lt;i&gt;n&lt;/i&gt; ranging from 10,000 to 10M. We provide the full benchmark code and
results for different 64- and 32-bit architectures in a
&lt;a href=&quot;https://github.com/boostorg/boost_unordered_benchmarks/tree/boost_unordered_flat_map&quot;&gt;dedicated repository&lt;/a&gt;;
here, we just show the plots for GCC 11 in x64 mode on an
AMD EPYC Rome 7302P @ 3.0GHz.
Please note that each container uses its own default hash function, so a direct
comparison of execution times may be slightly biased.&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;  
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgba397_JysbdpRTRYxpr47OWLiQzZ6bYCprlnKynq0GDYBkZJfYnxuMeHnVgW8js_QbAhpll9NBUsxjiNji9oNLEhnYf3A33GpUGKaWeH_H-OR-vHfPM_6pheB_f2inwWY0vF5G8RhvhGIuErkUwkuVuXBXRkOLbP8Sift7bVXUXjVbEADAI3R9bi2/s698/Running%20insertion.xlsx.plot.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgba397_JysbdpRTRYxpr47OWLiQzZ6bYCprlnKynq0GDYBkZJfYnxuMeHnVgW8js_QbAhpll9NBUsxjiNji9oNLEhnYf3A33GpUGKaWeH_H-OR-vHfPM_6pheB_f2inwWY0vF5G8RhvhGIuErkUwkuVuXBXRkOLbP8Sift7bVXUXjVbEADAI3R9bi2/w275/Running%20insertion.xlsx.plot.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;
&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;    
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEif6tMoSlJUh0w5nP3YmBuQSIFmANcwqQfWihGKIea7Zu7F9YCIpGDWzuaqDvqR2ZYgOkRqy9a5H9qww3zCanRAR4D62voz7xHaLYsb6K6CYbsfUHZGfCHsdjXQ15HLo9Jox7QHHSYI621AT8ZyhBI4YtRVKSJ-bpStbCP37hnpypT0jbMogHV0Hqxf/s698/Running%20erasure.xlsx.plot.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEif6tMoSlJUh0w5nP3YmBuQSIFmANcwqQfWihGKIea7Zu7F9YCIpGDWzuaqDvqR2ZYgOkRqy9a5H9qww3zCanRAR4D62voz7xHaLYsb6K6CYbsfUHZGfCHsdjXQ15HLo9Jox7QHHSYI621AT8ZyhBI4YtRVKSJ-bpStbCP37hnpypT0jbMogHV0Hqxf/w275/Running%20erasure.xlsx.plot.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;
&lt;/th&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Running insertion&lt;/b&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Running erasure&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;br /&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th align=&quot;center&quot;&gt;     
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtH4Vayx5IYtLfGfVAwmvb8P6RIeHkmA5aXSSfdqbUHqycThNY-BAZrWhEZDCr4fRjB0pJtcrkTF4soDBomMJvkTHokHOzVNRoPVtMBcadBJrXUSRdPacjdoUHKmKSFkTPNPHc-FHgMFBQZIz80EnQ0lCOhhbx1mz5BLtWtyCCbDES_AncbtJCN-Kh/s698/Scattered%20successful%20looukp.xlsx.plot.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhtH4Vayx5IYtLfGfVAwmvb8P6RIeHkmA5aXSSfdqbUHqycThNY-BAZrWhEZDCr4fRjB0pJtcrkTF4soDBomMJvkTHokHOzVNRoPVtMBcadBJrXUSRdPacjdoUHKmKSFkTPNPHc-FHgMFBQZIz80EnQ0lCOhhbx1mz5BLtWtyCCbDES_AncbtJCN-Kh/w275/Scattered%20successful%20looukp.xlsx.plot.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;
&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6WPYYIje2lL47xId8ysqYk2KBYPQYGbO0ro4VXFdh1zqFr3ZNJQEzHmW2gBbjKhTCUon8vAb1l92_906ZwqWgWNpNZhAWJrlz4HWIgqCZFkFNqx0qDtuwBjbcdBkEx0ZeNs6mhW4yJ_1NVQepkFEegmwd-6vTf8xON5iIGQr-tH5jCb1jh6iHIaAc/s698/Scattered%20unsuccessful%20looukp.xlsx.plot.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; data-original-height=&quot;449&quot; data-original-width=&quot;698&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi6WPYYIje2lL47xId8ysqYk2KBYPQYGbO0ro4VXFdh1zqFr3ZNJQEzHmW2gBbjKhTCUon8vAb1l92_906ZwqWgWNpNZhAWJrlz4HWIgqCZFkFNqx0qDtuwBjbcdBkEx0ZeNs6mhW4yJ_1NVQepkFEegmwd-6vTf8xON5iIGQr-tH5jCb1jh6iHIaAc/w275/Scattered%20unsuccessful%20looukp.xlsx.plot.png&quot; width=&quot;275&quot; /&gt;&lt;/a&gt;&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Successful lookup&lt;/b&gt;&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;&lt;b&gt;Unsuccessful lookup&lt;/b&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p dir=&quot;auto&quot;&gt;As predicted by our statistical analysis, &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; is
considerably faster than &lt;code&gt;absl::flat_hash_map&lt;/code&gt; for unsuccessful lookup
because the average probe length and number of (negative) comparisons are
much lower; this effect translates also to insertion, since &lt;code&gt;insert&lt;/code&gt; needs
to first check that the element is not present, so it internally performs an
unsuccessful lookup. Note how performance is less impacted (stays flatter)
when the load factor increases.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;As for successful lookup, &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; is still faster, which may be
due to its better cache locality, particularly for low load factors: in this
situation, elements are clustered at the beginning portion of each group, while for
&lt;code&gt;absl::flat_hash_map&lt;/code&gt; they are uniformly distributed with more empty space
in between.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;&lt;code&gt;boost::unordered_flat_map&lt;/code&gt; is slower than &lt;code&gt;absl::flat_hash_map&lt;/code&gt; for running
erasure (erasure of some elements during container traversal). The actual culprit
here is iteration, which is particularly slow; this is a collateral effect
of having SIMD operations work only on 16-aligned metadata words, while
&lt;code&gt;absl::flat_hash_map&lt;/code&gt; iteration looks ahead 16 metadata bytes beyond the current
iterator position.&lt;/p&gt;
&lt;a name=&quot;aggregate-performance&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: medium;&quot;&gt;Aggregate performance&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;Boost.Unordered provides a series of &lt;a href=&quot;https://github.com/boostorg/unordered/tree/develop/benchmark&quot;&gt;benchmarks&lt;/a&gt;
emulating real-life scenarios combining several operations for a number of
hash containers and key types (&lt;code&gt;std::string&lt;/code&gt;, &lt;code&gt;std::string_view&lt;/code&gt;, &lt;code&gt;std::uint32_t&lt;/code&gt;,
&lt;code&gt;std::uint64_t&lt;/code&gt; and a UUID class of size 16). The interested reader can
build and run the benchmarks on her environment of choice; as an example, these are
the results for GCC 11 in x64 mode on an Intel Xeon E5-2683 @ 2.10GHz:&lt;/p&gt;
&lt;pre&gt;&lt;span style=&quot;font-size: small;&quot;&gt;&lt;b&gt;std::string&lt;/b&gt;
               std::unordered_map: 38021 ms, 175723032 bytes in 3999509 allocations
             boost::unordered_map: 30785 ms, 149465712 bytes in 3999510 allocations
        boost::unordered_flat_map: 14486 ms, 134217728 bytes in 1 allocations
                  multi_index_map: 30162 ms, 178316048 bytes in 3999510 allocations
              absl::node_hash_map: 15403 ms, 139489608 bytes in 3999509 allocations
              absl::flat_hash_map: 13018 ms, 142606336 bytes in 1 allocations
       std::unordered_map, FNV-1a: 43893 ms, 175723032 bytes in 3999509 allocations
     boost::unordered_map, FNV-1a: 33730 ms, 149465712 bytes in 3999510 allocations
boost::unordered_flat_map, FNV-1a: 15541 ms, 134217728 bytes in 1 allocations
          multi_index_map, FNV-1a: 33915 ms, 178316048 bytes in 3999510 allocations
      absl::node_hash_map, FNV-1a: 20701 ms, 139489608 bytes in 3999509 allocations
      absl::flat_hash_map, FNV-1a: 18234 ms, 142606336 bytes in 1 allocations
&lt;/span&gt;&lt;hr /&gt;&lt;span style=&quot;font-size: small;&quot;&gt;&lt;b&gt;std::string_view&lt;/b&gt;
               std::unordered_map: 38481 ms, 207719096 bytes in 3999509 allocations
             boost::unordered_map: 26066 ms, 181461776 bytes in 3999510 allocations
        boost::unordered_flat_map: 14923 ms, 197132280 bytes in 1 allocations
                  multi_index_map: 27582 ms, 210312120 bytes in 3999510 allocations
              absl::node_hash_map: 14670 ms, 171485672 bytes in 3999509 allocations
              absl::flat_hash_map: 12966 ms, 209715192 bytes in 1 allocations
       std::unordered_map, FNV-1a: 45070 ms, 207719096 bytes in 3999509 allocations
     boost::unordered_map, FNV-1a: 29148 ms, 181461776 bytes in 3999510 allocations
boost::unordered_flat_map, FNV-1a: 15397 ms, 197132280 bytes in 1 allocations
          multi_index_map, FNV-1a: 30371 ms, 210312120 bytes in 3999510 allocations
      absl::node_hash_map, FNV-1a: 19251 ms, 171485672 bytes in 3999509 allocations
      absl::flat_hash_map, FNV-1a: 17622 ms, 209715192 bytes in 1 allocations
&lt;/span&gt;&lt;hr /&gt;&lt;span style=&quot;font-size: small;&quot;&gt;&lt;b&gt;std::uint32_t&lt;/b&gt;
       std::unordered_map: 21297 ms, 192888392 bytes in 5996681 allocations
     boost::unordered_map:  9423 ms, 149424400 bytes in 5996682 allocations
boost::unordered_flat_map:  4974 ms,  71303176 bytes in 1 allocations
          multi_index_map: 10543 ms, 194252104 bytes in 5996682 allocations
      absl::node_hash_map: 10653 ms, 123470920 bytes in 5996681 allocations
      absl::flat_hash_map:  6400 ms,  75497480 bytes in 1 allocations
&lt;/span&gt;&lt;hr /&gt;&lt;span style=&quot;font-size: small;&quot;&gt;&lt;b&gt;std::uint64_t&lt;/b&gt;
       std::unordered_map: 21463 ms, 240941512 bytes in 6000001 allocations
     boost::unordered_map: 10320 ms, 197477520 bytes in 6000002 allocations
boost::unordered_flat_map:  5447 ms, 134217728 bytes in 1 allocations
          multi_index_map: 13267 ms, 242331792 bytes in 6000002 allocations
      absl::node_hash_map: 10260 ms, 171497480 bytes in 6000001 allocations
      absl::flat_hash_map:  6530 ms, 142606336 bytes in 1 allocations
&lt;/span&gt;&lt;hr /&gt;&lt;span style=&quot;font-size: small;&quot;&gt;&lt;b&gt;uuid&lt;/b&gt;
       std::unordered_map: 37338 ms, 288941512 bytes in 6000001 allocations
     boost::unordered_map: 24638 ms, 245477520 bytes in 6000002 allocations
boost::unordered_flat_map:  9223 ms, 197132280 bytes in 1 allocations
          multi_index_map: 25062 ms, 290331800 bytes in 6000002 allocations
      absl::node_hash_map: 14005 ms, 219497480 bytes in 6000001 allocations
      absl::flat_hash_map: 10559 ms, 209715192 bytes in 1 allocations&lt;/span&gt;
&lt;/pre&gt;
&lt;p dir=&quot;auto&quot;&gt;Each container uses its own default hash function, except the entries labeled
&lt;code&gt;FNV-1a&lt;/code&gt; in &lt;code&gt;std::string&lt;/code&gt; and &lt;code&gt;std::string_view&lt;/code&gt;, which use the same
implementation of
&lt;a href=&quot;https://en.wikipedia.org/wiki/Fowler%E2%80%93Noll%E2%80%93Vo_hash_function#FNV-1a_hash&quot; rel=&quot;nofollow&quot;&gt;Fowler–Noll–Vo hash, version 1a&lt;/a&gt;,
and the &lt;code&gt;uuid&lt;/code&gt; benchmark, where all containers use the same user-provided function based on
&lt;a href=&quot;https://www.boost.org/libs/container_hash/doc/html/hash.html#ref_hash_combine&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;boost::hash_combine&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
&lt;a name=&quot;deviations-from-the-standard&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Deviations from the standard&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;The adoption of open addressing imposes a number of deviations from the C++
standard for unordered associative containers. Users should keep them in mind
when migrating to &lt;code&gt;boost::unordered_flat_map&lt;/code&gt; from &lt;code&gt;boost::unordered_map&lt;/code&gt; (or
from any other implementation of &lt;code&gt;std::unordered_map&lt;/code&gt;):&lt;/p&gt;
&lt;ul dir=&quot;auto&quot;&gt;&lt;li&gt;Both &lt;code&gt;Key&lt;/code&gt; and &lt;code&gt;T&lt;/code&gt; in &lt;code&gt;boost::unordered_flat_map&amp;lt;Key,T&amp;gt;&lt;/code&gt; must be
&lt;a href=&quot;https://en.cppreference.com/w/cpp/named_req/MoveConstructible&quot; rel=&quot;nofollow&quot;&gt;MoveConstructible&lt;/a&gt;.
This is due to the fact that elements are stored directly into the bucket array and
have to be transferred to a new block of memory on rehashing; by contrast,
&lt;code&gt;boost::unordered_map&lt;/code&gt; is a &lt;i&gt;node-based&lt;/i&gt; container and elements are never moved
once constructed.&lt;/li&gt;&lt;li&gt;For the same reason, pointers and references to elements become invalid after
rehashing (&lt;code&gt;boost::unordered_map&lt;/code&gt; only invalidates iterators).&lt;/li&gt;&lt;li&gt;&lt;code&gt;begin()&lt;/code&gt; is not constant-time (the bucket array is traversed till the first
non-empty bucket is found).&lt;/li&gt;&lt;li&gt;&lt;code&gt;erase(iterator)&lt;/code&gt; returns &lt;code&gt;void&lt;/code&gt; rather than an iterator to the element
after the erased one. This is done to maximize performance, as locating the
next element requires traversing the bucket array; if that element is absolutely
required, the &lt;code&gt;erase(iterator++)&lt;/code&gt; idiom can be used. This performance issue
is not exclusive to open addressing, and has been
&lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2023.pdf&quot; rel=&quot;nofollow&quot;&gt;discussed&lt;/a&gt;
in the context of the C++ standard too. (&lt;b&gt;Update Oct 19, 2024:&lt;/b&gt; This limitation has been &lt;a href=&quot;https://www.boost.org/libs/unordered/doc/html/unordered/changes.html#changes_release_1_83_0_major_update&quot;&gt;partially solved&lt;/a&gt;.)&lt;br /&gt;&lt;/li&gt;&lt;li&gt;The maximum load factor can&#39;t be changed by the user (&lt;code&gt;max_load_factor(z)&lt;/code&gt; is
provided for backwards compatibility reasons, but does nothing). Rehashing
can occur &lt;i&gt;before&lt;/i&gt; the load reaches &lt;code&gt;max_load_factor() * bucket_count()&lt;/code&gt; due
to the anti-drift mechanism described previously.&lt;/li&gt;&lt;li&gt;There is no bucket API (&lt;code&gt;bucket_size&lt;/code&gt;, &lt;code&gt;begin(n)&lt;/code&gt;, etc.) save &lt;code&gt;bucket_count&lt;/code&gt;.&lt;/li&gt;&lt;li&gt;There are no node handling facilities (&lt;a href=&quot;https://en.cppreference.com/w/cpp/container/unordered_map/extract&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;extract&lt;/code&gt;&lt;/a&gt;, etc.)
Such functionality makes no sense here as open-addressing containers are precisely
&lt;i&gt;not&lt;/i&gt; node-based. &lt;a href=&quot;https://en.cppreference.com/w/cpp/container/unordered_map/merge&quot; rel=&quot;nofollow&quot;&gt;&lt;code&gt;merge&lt;/code&gt;&lt;/a&gt;
is provided, but the implementation relies on element movement rather than node
transferring.&lt;/li&gt;&lt;/ul&gt;
&lt;a name=&quot;conclusions-and-next-steps&quot;&gt;&lt;div style=&quot;text-align: left;&quot;&gt;&lt;b&gt;&lt;span style=&quot;font-size: large;&quot;&gt;Conclusions and next steps&lt;/span&gt;&lt;/b&gt;&lt;/div&gt;&lt;/a&gt;
&lt;p dir=&quot;auto&quot;&gt;&lt;code&gt;boost::unordered_flat_map&lt;/code&gt; and &lt;code&gt;boost::unordered_flat_set&lt;/code&gt; are the new
open-addressing containers in Boost.Unordered providing top speed
in exchange for some interface and behavioral deviations from the standards-compliant
&lt;code&gt;boost::unordered_map&lt;/code&gt; and &lt;code&gt;boost::unordered_set&lt;/code&gt;. We have analyzed their
internal data structure and provided some theoretical and practical evidence
for their excellent performance. As of this writing, we claim
&lt;code&gt;boost::unordered_flat_map&lt;/code&gt;/&lt;code&gt;boost::unordered_flat_set&lt;/code&gt;
to rank among the fastest hash containers available to C++ programmers.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;With this work, we have reached an important milestone in the ongoing
&lt;a href=&quot;https://pdimov.github.io/articles/unordered_dev_plan.html&quot; rel=&quot;nofollow&quot;&gt;Development Plan for Boost.Unordered&lt;/a&gt;.
After Boost 1.81, we will continue improving the functionality
and performance of existing containers and will possibly augment the available
container catalog to offer greater freedom of choice to Boost users.
Your feedback on our current and future work is much welcome.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/6276904501422653074/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2022/11/inside-boostunorderedflatmap.html#comment-form' title='6 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/6276904501422653074'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/6276904501422653074'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2022/11/inside-boostunorderedflatmap.html' title='Inside &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgCm7Mjoizsg_qsNOTa1nU3YIjPDdarUGBnuxC9eiMMXtR4zWnnWXLQLE3RGgXN203SLJcIJfGM7a25uOLapGYJtmcOIeU8yXkrhDM1bK1LmxYESoY_fPohrGtPQuggjHgivzVWDVfqND1ZvYv1MSBexyBMyZFFyN7VPsRt5opZfLlo1m2rZyLEpQ0_/s72-c/probe.png" height="72" width="72"/><thr:total>6</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-5681113432062027975</id><published>2022-10-02T18:14:00.007+02:00</published><updated>2022-10-04T17:59:08.209+02:00</updated><title type='text'>Deferred argument evaluation</title><content type='html'>&lt;p dir=&quot;auto&quot;&gt;Suppose our program deals with heavy entities of 
some type &lt;code&gt;object&lt;/code&gt; which are
uniquely identified by an integer ID. The following is a possible implementation
of a function that controls ID-constrained creation of such objects:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;object* &lt;span class=&quot;pl-en&quot;&gt;retrieve_or_create&lt;/span&gt;(&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt; id)
{
  &lt;span class=&quot;pl-k&quot;&gt;static&lt;/span&gt; std::unordered_map&amp;lt;&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt;, std::unique_ptr&amp;lt;object&amp;gt;&amp;gt; m;

  &lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; see if the object is already in the map&lt;br /&gt;  &lt;/span&gt;&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt; [it,b] = m.&lt;span class=&quot;pl-c1&quot;&gt;emplace&lt;/span&gt;(id, &lt;span class=&quot;pl-c1&quot;&gt;nullptr&lt;/span&gt;);&lt;br /&gt;  &lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; create it otherwise&lt;/span&gt;
  &lt;span class=&quot;pl-k&quot;&gt;if&lt;/span&gt;(b) it-&amp;gt;&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt; = std::make_unique&amp;lt;object&amp;gt;(id); 
  &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; it-&amp;gt;&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt;.&lt;span class=&quot;pl-c1&quot;&gt;get&lt;/span&gt;();
}&lt;/pre&gt;&lt;/div&gt;
&lt;p dir=&quot;auto&quot;&gt;Note that the code is careful not to create a spurious object if
an equivalent one already exists; but in doing so, we have introduced a
potential inconsistency in the internal map if object creation throws:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; fixed version&lt;/span&gt;

object* &lt;span class=&quot;pl-en&quot;&gt;retrieve_or_create&lt;/span&gt;(&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt; id)
{
  &lt;span class=&quot;pl-k&quot;&gt;static&lt;/span&gt; std::unordered_map&amp;lt;&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt;, std::unique_ptr&amp;lt;object&amp;gt;&amp;gt; m;

  &lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; see if the object is already in the map&lt;br /&gt;  &lt;/span&gt;&lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt; [it,b] = m.&lt;span class=&quot;pl-c1&quot;&gt;emplace&lt;/span&gt;(id, &lt;span class=&quot;pl-c1&quot;&gt;nullptr&lt;/span&gt;); 
  &lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; create it otherwise&lt;br /&gt;  &lt;/span&gt;&lt;span class=&quot;pl-k&quot;&gt;if&lt;/span&gt;(b){ 
    &lt;span class=&quot;pl-k&quot;&gt;try&lt;/span&gt;{
      it-&amp;gt;&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt; = std::make_unique&amp;lt;object&amp;gt;(id);
    }
    &lt;span class=&quot;pl-k&quot;&gt;catch&lt;/span&gt;(...){&lt;br /&gt;      &lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; we can get here when running out of memory, for instance&lt;/span&gt;
      m.&lt;span class=&quot;pl-c1&quot;&gt;erase&lt;/span&gt;(it);
      &lt;span class=&quot;pl-k&quot;&gt;throw&lt;/span&gt;;
    }
  }
  &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; it-&amp;gt;&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt;.&lt;span class=&quot;pl-c1&quot;&gt;get&lt;/span&gt;();
}&lt;/pre&gt;&lt;/div&gt;
&lt;p dir=&quot;auto&quot;&gt;This fixed version is a little cumbersome, to say the least. Starting in C++17,
we can use &lt;code&gt;try_emplace&lt;/code&gt; to rewrite &lt;code&gt;retrieve_or_create&lt;/code&gt; as follows:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;object* &lt;span class=&quot;pl-en&quot;&gt;retrieve_or_create&lt;/span&gt;(&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt; id)
{
  &lt;span class=&quot;pl-k&quot;&gt;static&lt;/span&gt; std::unordered_map&amp;lt;&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt;, std::unique_ptr&amp;lt;object&amp;gt;&amp;gt; m;

  &lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt; [it,b] = m.&lt;span class=&quot;pl-c1&quot;&gt;try_emplace&lt;/span&gt;(id, std::make_unique&amp;lt;object&amp;gt;(id));
  &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; it-&amp;gt;&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt;.&lt;span class=&quot;pl-c1&quot;&gt;get&lt;/span&gt;();
}&lt;/pre&gt;&lt;/div&gt;
&lt;p dir=&quot;auto&quot;&gt;But then we&#39;ve introduced the problem of spurious object creation we strived to
avoid. Ideally, we&#39;d like &lt;code&gt;try_emplace&lt;/code&gt; &lt;b&gt;not&lt;/b&gt; to create the object except when really needed.
What we&#39;re effectively asking for is some sort of technique for
&lt;i&gt;deferred argument evaluation&lt;/i&gt;. As it happens, it is very easy to devise our own:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;&lt;span class=&quot;pl-k&quot;&gt;template&lt;/span&gt;&amp;lt;&lt;span class=&quot;pl-k&quot;&gt;typename&lt;/span&gt; F&amp;gt;
&lt;span class=&quot;pl-k&quot;&gt;struct&lt;/span&gt; &lt;span class=&quot;pl-en&quot;&gt;deferred_call&lt;/span&gt;
{
  &lt;span class=&quot;pl-k&quot;&gt;using&lt;/span&gt; result_type=decltype(std::declval&amp;lt;&lt;span class=&quot;pl-k&quot;&gt;const&lt;/span&gt; F&amp;gt;()());
  &lt;span class=&quot;pl-k&quot;&gt;operator&lt;/span&gt; &lt;span class=&quot;pl-en&quot;&gt;result_type&lt;/span&gt;() &lt;span class=&quot;pl-k&quot;&gt;const&lt;/span&gt; { &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;pl-c1&quot;&gt;f&lt;/span&gt;(); }

  F f;
};

object* &lt;span class=&quot;pl-en&quot;&gt;retrieve_or_create&lt;/span&gt;(&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt; id)
{
  &lt;span class=&quot;pl-k&quot;&gt;static&lt;/span&gt; std::unordered_map&amp;lt;&lt;span class=&quot;pl-k&quot;&gt;int&lt;/span&gt;, std::unique_ptr&amp;lt;object&amp;gt;&amp;gt; m;

  &lt;span class=&quot;pl-k&quot;&gt;auto&lt;/span&gt; [it,b] = m.&lt;span class=&quot;pl-c1&quot;&gt;try_emplace&lt;/span&gt;(
    id,
    &lt;span class=&quot;pl-c1&quot;&gt;deferred_call&lt;/span&gt;([&amp;amp;]{ &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; std::make_unique&amp;lt;object&amp;gt;(id); }));
  &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; it-&amp;gt;&lt;span class=&quot;pl-smi&quot;&gt;second&lt;/span&gt;.&lt;span class=&quot;pl-c1&quot;&gt;get&lt;/span&gt;();
}&lt;/pre&gt;&lt;/div&gt;
&lt;p dir=&quot;auto&quot;&gt;&lt;code&gt;deferred_call&lt;/code&gt; is a small utility that computes a value only when a
conversion to &lt;code&gt;deferred_call::result_type&lt;/code&gt; is requested. In the example, such conversion will only happen if
&lt;code&gt;try_emplace&lt;/code&gt; really needs to create a &lt;code&gt;std::pair&amp;lt;const int, std::unique_ptr&amp;lt;object&amp;gt;&amp;gt;&lt;/code&gt;, that is,
if no equivalent object was already present in the map.&lt;/p&gt;
&lt;p dir=&quot;auto&quot;&gt;In a general setting, for &lt;code&gt;deferred_call&lt;/code&gt; to work as expected, that is, to delay producing
the value until the point of actual usage, the following conditions must be met:&lt;/p&gt;
&lt;ol dir=&quot;auto&quot;&gt;&lt;li&gt;The &lt;code&gt;deferred_call&lt;/code&gt; object is passed to a function/constructor template
accepting generic, unconstrained parameters.&lt;/li&gt;&lt;li&gt;All internal intermediate interfaces are also generic.&lt;/li&gt;&lt;li&gt;The final function/constructor where actual usage happens asks exactly for a
&lt;code&gt;deferred_call::result_type&lt;/code&gt; value or reference.&lt;/li&gt;&lt;/ol&gt;
&lt;p dir=&quot;auto&quot;&gt;It is the last condition that can be the most problematic:&lt;/p&gt;
&lt;div class=&quot;highlight highlight-source-c++ notranslate position-relative overflow-auto&quot; dir=&quot;auto&quot;&gt;&lt;pre class=&quot;prettyprint&quot;&gt;&lt;span class=&quot;pl-k&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;pl-en&quot;&gt;f&lt;/span&gt;(std::string);
    
&lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; error: deferred_call not convertible to std::string&lt;br /&gt;&lt;/span&gt;&lt;span class=&quot;pl-en&quot;&gt;f&lt;/span&gt;(deferred_call([]{ &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;pl-s&quot;&gt;&lt;span class=&quot;pl-pds&quot;&gt;&quot;&lt;/span&gt;hello&lt;span class=&quot;pl-pds&quot;&gt;&quot;&lt;/span&gt;&lt;/span&gt;; })); &lt;/pre&gt;&lt;/div&gt;
&lt;p dir=&quot;auto&quot;&gt;C++ conversion rules allow at most &lt;b&gt;one&lt;/b&gt; user-defined conversion to take place,
and here we are calling for the sequence &lt;code&gt;deferred_call&lt;/code&gt; → &lt;code&gt;const char*&lt;/code&gt;  → &lt;code&gt;std::string&lt;/code&gt;.
In this case, however, the fix is trivial:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;&lt;span class=&quot;pl-k&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;pl-en&quot;&gt;f&lt;/span&gt;(std::string);

&lt;span class=&quot;pl-en&quot;&gt;f&lt;/span&gt;(deferred_call([]{ &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;pl-c1&quot;&gt;std::string&lt;/span&gt;(&lt;span class=&quot;pl-s&quot;&gt;&lt;span class=&quot;pl-pds&quot;&gt;&quot;&lt;/span&gt;hello&lt;span class=&quot;pl-pds&quot;&gt;&quot;&lt;/span&gt;&lt;/span&gt;); }));&amp;nbsp;&lt;/pre&gt;&lt;p&gt;&lt;b&gt;Update Oct 4&lt;/b&gt;&lt;/p&gt;&lt;p&gt;Jessy De Lannoit &lt;a href=&quot;https://www.reddit.com/r/cpp/comments/xtsre3/deferred_argument_evaluation/iqvn8ag/&quot;&gt;proposes&lt;/a&gt; a variation on &lt;code&gt;deferred_call&lt;/code&gt; that solves the problem of producing a value that is one user-defined conversion away from the target type:&lt;br /&gt;&lt;/p&gt;

&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename F&amp;gt;&lt;br /&gt;struct deferred_call&lt;br /&gt;{&lt;br /&gt;  using result_type=decltype(std::declval&amp;lt;const F&amp;gt;()());&lt;br /&gt;  operator result_type() const { return f(); }&lt;br /&gt;&lt;br /&gt;  template&amp;lt;typename T&amp;gt;&lt;br /&gt;  requires (std::is_constructible_v&amp;lt;T, result_type&amp;gt;)&lt;br /&gt;  constexpr operator T() const { return {f()}; }&lt;br /&gt;  &lt;br /&gt;  F f;&lt;br /&gt;};&lt;br /&gt;&lt;br /&gt;&lt;span class=&quot;pl-k&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;pl-en&quot;&gt;f&lt;/span&gt;(std::string);
    
&lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; works ok: deferred_call converts to std::string&lt;br /&gt;&lt;/span&gt;&lt;span class=&quot;pl-en&quot;&gt;f&lt;/span&gt;(deferred_call([]{ &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;pl-s&quot;&gt;&lt;span class=&quot;pl-pds&quot;&gt;&quot;&lt;/span&gt;hello&lt;span class=&quot;pl-pds&quot;&gt;&quot;&lt;/span&gt;&lt;/span&gt;; })); 
&lt;/pre&gt;&lt;p&gt;

This version of &lt;code&gt;deferred_call&lt;/code&gt; has an eager conversion operator producing any requested value as long as it is constructible from &lt;code&gt;deferred_call::result_type&lt;/code&gt;. This solution comes with a different set of problems, though: &lt;/p&gt;&lt;pre class=&quot;prettyprint&quot;&gt;&lt;span class=&quot;pl-k&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;pl-en&quot;&gt;f&lt;/span&gt;(std::string);&lt;br /&gt;void f(const char*);
    
&lt;span class=&quot;pl-c&quot;&gt;&lt;span class=&quot;pl-c&quot;&gt;//&lt;/span&gt; ambiguous call to f&lt;br /&gt;&lt;/span&gt;&lt;span class=&quot;pl-en&quot;&gt;f&lt;/span&gt;(deferred_call([]{ &lt;span class=&quot;pl-k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;pl-s&quot;&gt;&lt;span class=&quot;pl-pds&quot;&gt;&quot;&lt;/span&gt;hello&lt;span class=&quot;pl-pds&quot;&gt;&quot;&lt;/span&gt;&lt;/span&gt;; })); 
&lt;/pre&gt;There is probably little more we can do without language support. One can imagine some sort of &quot;silent&quot; conversion operator that does not add to the cap on user-defined conversions allowed by the rules of C++:&lt;br /&gt;&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename F&amp;gt;&lt;br /&gt;struct deferred_call&lt;br /&gt;{&lt;br /&gt;  using result_type=decltype(std::declval&amp;lt;const F&amp;gt;()());&lt;br /&gt;  operator result_type() const { return f(); }&lt;br /&gt;&lt;br /&gt;  // &quot;silent&quot; conversion operator marked with ~explicit&lt;br /&gt;  // (not actual C++)&lt;br /&gt;  template&amp;lt;typename T&amp;gt;&lt;br /&gt;  requires (std::is_constructible_v&amp;lt;T, result_type&amp;gt;)&lt;br /&gt;  ~explicit constexpr operator T() const { return {f()}; }&lt;br /&gt;  &lt;br /&gt;  F f;&lt;br /&gt;};&lt;/pre&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/5681113432062027975/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2022/10/deferred-argument-evaluation.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5681113432062027975'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5681113432062027975'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2022/10/deferred-argument-evaluation.html' title='Deferred argument evaluation'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' 
src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-5329302977245474327</id><published>2022-06-18T21:16:00.009+02:00</published><updated>2022-06-20T13:43:47.618+02:00</updated><title type='text'>Advancing the state of the art for std::unordered_map implementations</title><content type='html'>&lt;h2 style=&quot;text-align: left;&quot;&gt;Introduction&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;Several Boost authors have embarked on a &lt;a href=&quot;https://pdimov.github.io/articles/unordered_dev_plan.html&quot; rel=&quot;nofollow&quot;&gt;project&lt;/a&gt;
to improve the performance of &lt;a href=&quot;https://www.boost.org/doc/libs/release/libs/unordered/&quot; rel=&quot;nofollow&quot;&gt;Boost.Unordered&lt;/a&gt;&#39;s
implementation of &lt;code&gt;std::unordered_map&lt;/code&gt; (and &lt;code&gt;multimap&lt;/code&gt;, &lt;code&gt;set&lt;/code&gt; and &lt;code&gt;multiset&lt;/code&gt; variants),
and to extend its portfolio of available containers to offer faster, non-standard
alternatives based on open addressing.&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;The first goal of the project has been completed in time for Boost 1.80 (due August 2022). We
describe here the technical innovations introduced in &lt;code&gt;boost::unordered_map&lt;/code&gt;
that make it the fastest implementation of &lt;code&gt;std::unordered_map&lt;/code&gt; on the market.&lt;/p&gt;
&lt;h2 style=&quot;text-align: left;&quot;&gt;Closed vs. open addressing&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;To a first approximation, hash table implementations fall into one of two general classes:&lt;/p&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;i&gt;Closed addressing&lt;/i&gt; (also known as &lt;a href=&quot;https://en.wikipedia.org/wiki/Hash_table#Separate_chaining&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;separate chaining&lt;/i&gt;&lt;/a&gt;)
relies on an array of &lt;i&gt;buckets&lt;/i&gt;, each of which points to a list of elements belonging to it.
When a new element goes to an already occupied bucket, it is simply linked to the
associated element list.
The figure depicts what we call the &lt;i&gt;textbook implementation&lt;/i&gt; of closed addressing, arguably
the simplest layout, and among the fastest, for this type of hash tables.&lt;/li&gt;&lt;/ul&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDSQQoqH5c7LuFi_SzM70cCioPxDCUmtuUyb0aPkcjDCfMg_65498faqsJtxZ2mBlzpmdFwowHxXdTUzjqEtbK-fIeaZR9y26CXD8zXE4V89VJDUjZG9cRRhGyxrWiEKYa29qU78_zVa9wKD60tyGUIwImqKxkkYLXfjuio87uU3-fJwxXN9WuFuiH/s713/bucket-groups.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img alt=&quot;textbook layout&quot; border=&quot;0&quot; data-original-height=&quot;221&quot; data-original-width=&quot;713&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDSQQoqH5c7LuFi_SzM70cCioPxDCUmtuUyb0aPkcjDCfMg_65498faqsJtxZ2mBlzpmdFwowHxXdTUzjqEtbK-fIeaZR9y26CXD8zXE4V89VJDUjZG9cRRhGyxrWiEKYa29qU78_zVa9wKD60tyGUIwImqKxkkYLXfjuio87uU3-fJwxXN9WuFuiH/s713/bucket-groups.png&quot; width=&quot;100%&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;a href=&quot;https://en.wikipedia.org/wiki/Hash_table#Open_addressing&quot; rel=&quot;nofollow&quot;&gt;&lt;i&gt;Open addressing&lt;/i&gt;&lt;/a&gt;
(or &lt;i&gt;closed hashing&lt;/i&gt;) stores at most one element in each bucket (sometimes called a &lt;i&gt;slot&lt;/i&gt;).
When an element goes to an already occupied slot, some
&lt;i&gt;probing&lt;/i&gt; mechanism is used to locate an available slot, preferably close to the original one.&lt;/li&gt;&lt;/ul&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;Recent, high-performance hash tables use open addressing and leverage
its inherently better cache locality as well as widely available
&lt;a href=&quot;https://en.wikipedia.org/wiki/Single_instruction,_multiple_data&quot; rel=&quot;nofollow&quot;&gt;SIMD&lt;/a&gt; operations.
Closed addressing provides some functional advantages, though, and
remains relevant as the required foundation for the implementation
of &lt;code&gt;std::unordered_map&lt;/code&gt;.&lt;/p&gt;
&lt;h2 style=&quot;text-align: left;&quot;&gt;Restrictions on the implementation of &lt;code&gt;std::unordered_map&lt;/code&gt;&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;The standardization of C++ unordered associative containers is based on Matt Austern&#39;s 2003
&lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2003/n1456.html&quot; rel=&quot;nofollow&quot;&gt;N1456&lt;/a&gt; paper.
Back in the day, open-addressing approaches were not regarded as sufficiently mature,
so closed addressing was taken as the safe implementation of choice. Even though the
C++ standard does not explicitly require that closed addressing must be used, the
assumption that this is the case leaks through the public interface of &lt;code&gt;std::unordered_map&lt;/code&gt;:&lt;/p&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;A bucket API is provided.&lt;/li&gt;&lt;li&gt;Pointer stability implies that the container is node-based. In C++17, this implication
was made explicit with the introduction of &lt;code&gt;extract&lt;/code&gt; capabilities.&lt;/li&gt;&lt;li&gt;Users can control the container load factor.&lt;/li&gt;&lt;li&gt;Requirements on the hash function are very lax (open addressing depends on high-quality
hash functions with the ability to spread keys widely across the space of &lt;code&gt;std::size_t&lt;/code&gt;
values.)&lt;/li&gt;&lt;/ul&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;As a result, all standard library implementations use some form of closed addressing
for the internal structure of their &lt;code&gt;std::unordered_map&lt;/code&gt; (and related containers).&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;As an additional difficulty, there are two complexity requirements:&lt;/p&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;iterator increment must be (amortized) constant time,&lt;/li&gt;&lt;li&gt;&lt;code&gt;erase&lt;/code&gt; must be constant time on average,&lt;/li&gt;&lt;/ul&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;that rule out the textbook implementation of closed addressing (see &lt;a href=&quot;https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2006/n2023.pdf&quot; rel=&quot;nofollow&quot;&gt;N2023&lt;/a&gt;
for details). To cope with this problem,
standard libraries depart from the textbook layout in ways that introduce speed and memory
penalties: this is, for instance, what the libstdc++-v3 and libc++ layouts look like:&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhLPOMW-s6sNwyOo0lHoDCEEicJ78kiN5ZvR82GSYL8HiRwYgtXYd_nAjLpM4DZo16EH2WRP1ApbrYAU5C0LsmJfGL2beVtoTnMLjxyfv8erOGWY6lUDfUPwc93ie9gW6s4VPST-Tk3wkkZSMcYpIVToxrbhe1-F5LsXLet2h14KfvCKWLpzikS6Po/s481/singly-linked.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img alt=&quot;libstdc++-v3/libc++ layout&quot; border=&quot;0&quot; data-original-height=&quot;252&quot; data-original-width=&quot;481&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjhLPOMW-s6sNwyOo0lHoDCEEicJ78kiN5ZvR82GSYL8HiRwYgtXYd_nAjLpM4DZo16EH2WRP1ApbrYAU5C0LsmJfGL2beVtoTnMLjxyfv8erOGWY6lUDfUPwc93ie9gW6s4VPST-Tk3wkkZSMcYpIVToxrbhe1-F5LsXLet2h14KfvCKWLpzikS6Po/s481/singly-linked.png&quot; width=&quot;70%&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;To provide constant iterator increment, all nodes are linked together, which in its turn
forces two adjustments to the data structure:&lt;/p&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Buckets point to the node &lt;i&gt;before&lt;/i&gt; the first one
in the bucket so as to preserve constant-time erasure.&lt;/li&gt;&lt;li&gt;To detect the end of a bucket, the element hash value is added as a data member of
the node itself (libstdc++-v3 opts for on-the-fly hash calculation under some
circumstances).&lt;/li&gt;&lt;/ul&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;The Visual Studio standard library (formerly from Dinkumware) uses an entirely different
approach to circumvent the problem, but the general outcome is that the resulting data
structures perform significantly worse than the textbook layout in terms of speed,
memory consumption, or both.&lt;/p&gt;
&lt;h2 style=&quot;text-align: left;&quot;&gt;Boost.Unordered 1.80 data layout&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;The new data layout used by Boost.Unordered goes back to the textbook approach:&lt;/p&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdpstbRFaPO9Xe_kWyiVK_cRRiAnOPwkb5t-WlgV8lqP86CBDBvedNogygLsojv_5rERgE1YZ31wmTjp9tJKy3oXcmN9AVTZwrhg1wPg4EEkTt51KF0CJ7bvXeaSFufSQkuQcX-N_3byNVIhXgnXnP1wmAXCY71FOlkRuJGQbfdSYN1QsfuJTdpMrw/s771/fca.png&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img alt=&quot;Boost.Unordered layout&quot; border=&quot;0&quot; data-original-height=&quot;326&quot; data-original-width=&quot;771&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdpstbRFaPO9Xe_kWyiVK_cRRiAnOPwkb5t-WlgV8lqP86CBDBvedNogygLsojv_5rERgE1YZ31wmTjp9tJKy3oXcmN9AVTZwrhg1wPg4EEkTt51KF0CJ7bvXeaSFufSQkuQcX-N_3byNVIhXgnXnP1wmAXCY71FOlkRuJGQbfdSYN1QsfuJTdpMrw/s771/fca.png&quot; width=&quot;100%&quot; /&gt;&lt;/a&gt;&lt;/div&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;Unlike the rest of standard library implementations, nodes are not linked across the
container but only within each bucket. This makes constant-time &lt;code&gt;erase&lt;/code&gt; trivially
implementable, but leaves unsolved the problem of constant-time iterator increment: to
achieve it, we introduce so-called &lt;i&gt;bucket groups&lt;/i&gt; (top of the diagram). Each bucket
group consists of a 32/64-bit bucket occupancy mask plus &lt;code&gt;next&lt;/code&gt; and &lt;code&gt;prev&lt;/code&gt; pointers linking non-empty
bucket groups together. Iteration across buckets resorts to a
combination of bit manipulation operations on the bitmasks plus group traversal through
&lt;code&gt;next&lt;/code&gt; pointers, which is not only constant time but also very lightweight in terms
of execution time and of memory overhead (4 bits per bucket).&lt;/p&gt;
&lt;h2 style=&quot;text-align: left;&quot;&gt;Fast modulo&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;When inserting or looking for an element, hash table implementations need to map the element hash
value into the array of buckets (or slots in the open-addressing case). There
are two general approaches in common use:&lt;/p&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Bucket array sizes follow a sequence of prime numbers &lt;i&gt;p&lt;/i&gt;, and mapping is of the form
&lt;i&gt;h&lt;/i&gt; → &lt;i&gt;h&lt;/i&gt; mod &lt;i&gt;p&lt;/i&gt;.&lt;/li&gt;&lt;li&gt;Bucket array sizes follow a power-of-two sequence 2&lt;i&gt;&lt;sup&gt;n&lt;/sup&gt;&lt;/i&gt;, and mapping takes
&lt;i&gt;n&lt;/i&gt; bits from &lt;i&gt;h&lt;/i&gt;. Typically it is the &lt;i&gt;n&lt;/i&gt; least significant bits that are used,
but in some cases, like when &lt;i&gt;h&lt;/i&gt; is postprocessed to improve its uniformity
via multiplication by a well-chosen constant &lt;i&gt;m&lt;/i&gt; (such as defined by
&lt;a href=&quot;https://en.wikipedia.org/wiki/Hash_function#Fibonacci_hashing&quot; rel=&quot;nofollow&quot;&gt;Fibonacci hashing&lt;/a&gt;),
it is best to take the &lt;i&gt;n&lt;/i&gt; &lt;i&gt;most&lt;/i&gt; significant bits, that is,
&lt;i&gt;h&lt;/i&gt; → (&lt;i&gt;h&lt;/i&gt; × &lt;i&gt;m&lt;/i&gt;)  &amp;gt;&amp;gt; (&lt;i&gt;N&lt;/i&gt; − &lt;i&gt;n&lt;/i&gt;), where &lt;i&gt;N&lt;/i&gt; is the bitwidth of &lt;code&gt;std::size_t&lt;/code&gt;
and &amp;gt;&amp;gt; is the usual C++ right shift operation.&lt;/li&gt;&lt;/ul&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;We use the modulo by a prime approach because it produces very good spreading even if
hash values are not uniformly distributed. In modern CPUs, however, modulo is an expensive
operation involving integer division; compilers, on the other hand, know how to perform
modulo &lt;i&gt;by a constant&lt;/i&gt; much more efficiently, so one possible optimization is to keep a
table of pointers to functions &lt;i&gt;f&lt;/i&gt;&lt;sub&gt;&lt;i&gt;p&lt;/i&gt;&lt;/sub&gt; : &lt;i&gt;h&lt;/i&gt; →  &lt;i&gt;h&lt;/i&gt; mod &lt;i&gt;p&lt;/i&gt;. This technique
replaces expensive modulo calculation with a table jump plus a modulo-by-a-constant operation.&lt;/p&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;In Boost.Unordered 1.80, we have gone a step further.
&lt;a href=&quot;https://arxiv.org/abs/1902.01961&quot; rel=&quot;nofollow&quot;&gt;Daniel Lemire et al.&lt;/a&gt; show how to calculate
&lt;i&gt;h&lt;/i&gt; mod &lt;i&gt;p&lt;/i&gt; as an operation involving some shifts and multiplications by &lt;i&gt;p&lt;/i&gt; and
a pre-computed &lt;i&gt;c&lt;/i&gt; value acting as a sort of reciprocal of &lt;i&gt;p&lt;/i&gt;. We have used this work
to implement hash mapping as &lt;i&gt;h&lt;/i&gt; → fastmod(&lt;i&gt;h&lt;/i&gt;, &lt;i&gt;p&lt;/i&gt;, &lt;i&gt;c&lt;/i&gt;) (some details omitted).
Note that, even though fastmod is generally faster than modulo by a constant,
most performance gains actually come from the fact that we are eliminating the
table jump needed to select &lt;i&gt;f&lt;/i&gt;&lt;sub&gt;&lt;i&gt;p&lt;/i&gt;&lt;/sub&gt;, which prevented code inlining.&lt;/p&gt;
&lt;h2 style=&quot;text-align: left;&quot;&gt;Time and memory performance of Boost 1.80 &lt;code&gt;boost::unordered_map&lt;/code&gt;&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;We are providing some &lt;a href=&quot;https://www.boost.org/doc/libs/develop/libs/unordered/doc/html/unordered.html#benchmarks&quot; rel=&quot;nofollow&quot;&gt;benchmark results&lt;/a&gt;
of the &lt;code&gt;boost::unordered_map&lt;/code&gt; against libstdc++-v3, libc++ and Visual Studio standard library
for insertion, lookup and erasure scenarios. &lt;code&gt;boost::unordered_map&lt;/code&gt; is mostly
faster across the board, and in some cases significantly so. There are three factors
contributing to this performance advantage:&lt;/p&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;the very reduced memory footprint improves cache utilization,&lt;/li&gt;&lt;li&gt;fast modulo is used,&lt;/li&gt;&lt;li&gt;the new layout incurs one less pointer indirection than libstdc++-v3 and libc++
to access the elements of a bucket.&lt;/li&gt;&lt;/ul&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;As for memory consumption, let &lt;i&gt;N&lt;/i&gt; be the number of elements in a container with
&lt;i&gt;B&lt;/i&gt; buckets: the memory overheads (that is, memory allocated minus memory used
strictly for the elements themselves) of the different implementations on 64-bit
architectures are:&lt;/p&gt;
&lt;table style=&quot;margin-left: auto; margin-right: auto; text-align: left;&quot;&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Implementation&lt;/th&gt;
&lt;th align=&quot;center&quot;&gt;Memory overhead (bytes)&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;libstdc++-v3&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;16 &lt;i&gt;N&lt;/i&gt; + 8 &lt;i&gt;B&lt;/i&gt; (&lt;a href=&quot;https://gcc.gnu.org/onlinedocs/libstdc++/manual/unordered_associative.html#containers.unordered.cache&quot; rel=&quot;nofollow&quot;&gt;hash caching&lt;/a&gt;)&lt;br /&gt;8 &lt;i&gt;N&lt;/i&gt; + 8 &lt;i&gt;B&lt;/i&gt;  (no hash caching)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;libc++&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;16 &lt;i&gt;N&lt;/i&gt; + 8 &lt;i&gt;B&lt;/i&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Visual Studio (Dinkumware)&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;16 &lt;i&gt;N&lt;/i&gt; + 16 &lt;i&gt;B&lt;/i&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Boost.Unordered&lt;/td&gt;
&lt;td align=&quot;center&quot;&gt;8 &lt;i&gt;N&lt;/i&gt; + 8.5 &lt;i&gt;B&lt;/i&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;&amp;nbsp;&lt;/div&gt;&lt;h2 style=&quot;text-align: left;&quot;&gt;Which hash container to choose&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;Opting for closed addressing (which, in the realm of C++, is almost
synonymous with using an implementation of &lt;code&gt;std::unordered_map&lt;/code&gt;) or choosing a
speed-oriented, open-addressing container is in practice not a clear-cut decision.
Some factors favoring each option are listed below:&lt;/p&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;&lt;code&gt;std::unordered_map&lt;/code&gt;
&lt;ul&gt;&lt;li&gt;The code uses some specific parts of its API like node extraction, the bucket interface
or the ability to set the maximum load factor, which are generally not available
in open-addressing containers.&lt;/li&gt;&lt;li&gt;Pointer stability and/or non-moveability of values required (though some open-addressing alternatives
support these at the expense of reduced performance).&lt;/li&gt;&lt;li&gt;Constant-time iterator increment required.&lt;/li&gt;&lt;li&gt;Hash functions used are only mid-quality (open addressing requires that the hash
function have very good key-spreading properties).&lt;/li&gt;&lt;li&gt;Equivalent key support (i.e. &lt;code&gt;unordered_multimap&lt;/code&gt;/&lt;code&gt;unordered_multiset&lt;/code&gt;) is required.
We do not know of any open-addressing container supporting equivalent keys.&lt;/li&gt;&lt;/ul&gt;
&lt;/li&gt;&lt;li&gt;Open-addressing containers
&lt;ul&gt;&lt;li&gt;Performance is the main concern.&lt;/li&gt;&lt;li&gt;Existing code can be adapted to a somewhat more stringent API and to more demanding requirements
on the element type (such as moveability).&lt;/li&gt;&lt;li&gt;Hash functions are of good quality (or the default ones from the container provider are used).&lt;/li&gt;&lt;/ul&gt;
&lt;/li&gt;&lt;/ul&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;If you decide to use &lt;code&gt;std::unordered_map&lt;/code&gt;, Boost.Unordered 1.80 now gives you the fastest,
fully-conformant implementation on the market.&lt;/p&gt;
&lt;h2 style=&quot;text-align: left;&quot;&gt;Next steps&lt;/h2&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;There are some further areas of improvement to &lt;code&gt;boost::unordered_map&lt;/code&gt; that we will
investigate post Boost 1.80:&lt;/p&gt;
&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Reduce the memory overhead of the new layout from 4 bits to 3 bits per bucket.&lt;/li&gt;&lt;li&gt;Speed up performance for equivalent key variants (&lt;code&gt;unordered_multimap&lt;/code&gt;/&lt;code&gt;unordered_multiset&lt;/code&gt;).&lt;/li&gt;&lt;/ul&gt;
&lt;p style=&quot;text-align: left;&quot;&gt;In parallel, we are working on the future &lt;code&gt;boost::unordered_flat_map&lt;/code&gt;, our proposal for
a top-speed, open-addressing container beyond the limitations imposed by &lt;code&gt;std::unordered_map&lt;/code&gt;
interface. Your feedback on our current and future work is much welcome.&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/5329302977245474327/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2022/06/advancing-state-of-art-for.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5329302977245474327'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5329302977245474327'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2022/06/advancing-state-of-art-for.html' title='Advancing the state of the art for &lt;code&gt;std::unordered_map&lt;/code&gt; implementations'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhDSQQoqH5c7LuFi_SzM70cCioPxDCUmtuUyb0aPkcjDCfMg_65498faqsJtxZ2mBlzpmdFwowHxXdTUzjqEtbK-fIeaZR9y26CXD8zXE4V89VJDUjZG9cRRhGyxrWiEKYa29qU78_zVa9wKD60tyGUIwImqKxkkYLXfjuio87uU3-fJwxXN9WuFuiH/s72-c/bucket-groups.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-8409773756729875423</id><published>2022-03-10T11:30:00.004+01:00</published><updated>2023-01-18T10:32:44.748+01:00</updated><title type='text'>Emulating template named arguments in C++20</title><content type='html'>&lt;p 
style=&quot;text-align: justify;&quot;&gt;&lt;code&gt;std::unordered_map&lt;/code&gt; is a highly configurable class template with five parameters:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;
    class Key,
    class Value,
    class Hash = std::hash&amp;lt;Key&amp;gt;,
    class KeyEqual = std::equal_to&amp;lt;Key&amp;gt;,
    class Allocator = std::allocator&amp;lt; std::pair&amp;lt;const Key, Value&amp;gt; &amp;gt;
&amp;gt; class unordered_map;
&lt;/pre&gt;
&lt;p style=&quot;text-align: justify;&quot;&gt;Typical usage depends on default values for most of these parameters:&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;using my_map=std::unordered_map&amp;lt;int,std::string&amp;gt;;
&lt;/pre&gt;
&lt;p style=&quot;text-align: justify;&quot;&gt;but things get cumbersome when we want to specify one of the usually defaulted types:&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename T&amp;gt; class my_allocator{ ... };&lt;br /&gt;using my_map=std::unordered_map&amp;lt;&lt;br /&gt;  int, std::string,&lt;br /&gt;  std::hash&amp;lt;int&amp;gt;, std::equal_to&amp;lt;int&amp;gt;,&lt;br /&gt;  my_allocator&amp;lt; std::pair&amp;lt;const int, std::string&amp;gt; &amp;gt;&lt;br /&gt;&amp;gt;;
&lt;/pre&gt;
&lt;p&gt;In the example, we are forced to spell out the hash and equality predicates with their default types just to get to the allocator, which is the parameter we really wanted to specify. Ideally, we would like a syntax like this:&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;// this is not actual C++&lt;br /&gt;using my_map = std::unordered_map&amp;lt;&lt;br /&gt;  Key=int, Value=std::string,&lt;br /&gt;  Allocator=my_allocator&amp;lt; std::pair&amp;lt;const int, std::string&amp;gt; &amp;gt;&lt;br /&gt;&amp;gt;;
&lt;/pre&gt;
&lt;p&gt;Turns out we can emulate this by resorting to &lt;a href=&quot;https://en.cppreference.com/w/cpp/language/aggregate_initialization#Designated_initializers&quot;&gt;&lt;i&gt;designated initializers&lt;/i&gt;&lt;/a&gt;, introduced in C++20:&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;&lt;br /&gt;  typename Key, typename Value,&lt;br /&gt;  typename Hash = std::hash&amp;lt;Key&amp;gt;,&lt;br /&gt;  typename Equal = std::equal_to&amp;lt;Key&amp;gt;,&lt;br /&gt;  typename Allocator = std::allocator&amp;lt; std::pair&amp;lt;const Key,Value&amp;gt; &amp;gt;&lt;br /&gt;&amp;gt;&lt;br /&gt;struct unordered_map_config&lt;br /&gt;{&lt;br /&gt;  Key       *key = nullptr;&lt;br /&gt;  Value     *value = nullptr;&lt;br /&gt;  Hash      *hash = nullptr;&lt;br /&gt;  Equal     *equal = nullptr;&lt;br /&gt;  Allocator *allocator = nullptr;&lt;br /&gt;&lt;br /&gt;  using type = std::unordered_map&amp;lt;Key,Value,Hash,Equal,Allocator&amp;gt;;&lt;br /&gt;};&lt;br /&gt;&lt;br /&gt;template&amp;lt;typename T&amp;gt;&lt;br /&gt;constexpr T *type = nullptr;&lt;br /&gt;&lt;br /&gt;template&amp;lt;unordered_map_config Cfg&amp;gt;&lt;br /&gt;using unordered_map = typename decltype(Cfg)::type;&lt;br /&gt;&lt;br /&gt;...&lt;br /&gt;&lt;br /&gt;using my_map = unordered_map&amp;lt;{&lt;br /&gt;  .key = type&amp;lt;int&amp;gt;, .value = type&amp;lt;std::string&amp;gt;,&lt;br /&gt;  .allocator = type&amp;lt; my_allocator&amp;lt; std::pair&amp;lt;const int, std::string &amp;gt; &amp;gt; &amp;gt;&lt;br /&gt;}&amp;gt;;
&lt;/pre&gt;
&lt;p&gt;The approach taken by the simulation is to use designated initializers to create an aggregate object consisting of dummy null pointers: the values of the pointers do not matter, but their types are captured via &lt;a href=&quot;https://en.cppreference.com/w/cpp/language/class_template_argument_deduction&quot;&gt;CTAD&lt;/a&gt; and used to synthesize the associated &lt;code&gt;std::unordered_map&lt;/code&gt; instantiation. Two more C++20 features this technique depends on are:&lt;/p&gt;&lt;ul style=&quot;text-align: left;&quot;&gt;&lt;li&gt;Non-type template parameters have been extended to accept &lt;a href=&quot;https://en.cppreference.com/w/cpp/named_req/LiteralType&quot;&gt;&lt;i&gt;literal types&lt;/i&gt;&lt;/a&gt; (which include aggregate types such as &lt;code&gt;unordered_map_config&lt;/code&gt;   instantiations).&lt;br /&gt;&lt;/li&gt;&lt;li&gt;The class template &lt;code&gt;unordered_map_config&lt;/code&gt;  can be specified as a non-type template parameter of &lt;code&gt;unordered_map&lt;/code&gt;. In C++17, we would have had to define &lt;code&gt;unordered_map&lt;/code&gt; as &lt;br /&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;auto Cfg&amp;gt;&lt;br /&gt;using unordered_map = typename decltype(Cfg)::type;
&lt;/pre&gt;which would force the user to explicitly name &lt;code&gt;unordered_map_config&lt;/code&gt; in&lt;br /&gt;&lt;pre class=&quot;prettyprint&quot;&gt;using my_map = unordered_map&amp;lt;&lt;code&gt;unordered_map_config&lt;/code&gt;{...}&amp;gt;;&lt;/pre&gt;&lt;/li&gt;&lt;/ul&gt;&lt;p&gt;There is still the unavoidable noise of having to use the &lt;code&gt;type&lt;/code&gt; template alias since, of course, aggregate initialization is about values rather than types.&lt;/p&gt;&lt;p&gt;Another limitation of this simulation is that we cannot mix named and unnamed parameters:&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;// compiler error: either all initializer clauses should be designated&lt;br /&gt;// or none of them should be&lt;br /&gt;using my_map = unordered_map&amp;lt;{&lt;br /&gt;  type&amp;lt;int&amp;gt;, type&amp;lt;std::string&amp;gt;,&lt;br /&gt;  .allocator = type&amp;lt; my_allocator&amp;lt; std::pair&amp;lt;const int, std::string &amp;gt; &amp;gt; &amp;gt;&lt;br /&gt;}&amp;gt;;
&lt;/pre&gt;
&lt;p&gt;C++20 designated parameters are more restrictive than their C99 counterpart; some of the constraints (initializers cannot be specified out of order) are totally valid in the context of C++, but I personally fail to see why mixing named and unnamed parameters would pose any problem.&lt;br /&gt;&lt;/p&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/8409773756729875423/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2022/03/emulating-template-named-arguments-in.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/8409773756729875423'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/8409773756729875423'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2022/03/emulating-template-named-arguments-in.html' title='Emulating template named arguments in C++20'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-8276673301208999191</id><published>2022-01-17T14:34:00.001+01:00</published><updated>2022-01-17T14:57:36.137+01:00</updated><title type='text'>Start Wordle with TARES</title><content type='html'>&lt;p&gt;There have been some discussions on what the best first guess is for the game &lt;a href=&quot;https://www.powerlanguage.co.uk/wordle/&quot; 
target=&quot;_blank&quot;&gt;Wordle&lt;/a&gt;, but none, to the best of my knowledge, has used the following approach. After each guess, the game answers back with a matching result like these:&lt;br /&gt;&lt;/p&gt;&lt;p style=&quot;margin-left: 40px; text-align: left;&quot;&gt;&lt;span style=&quot;font-size: x-large;&quot;&gt;&lt;span style=&quot;color: #666666;&quot;&gt;■■■■■&lt;/span&gt;&lt;/span&gt; (all letters wrong),&lt;span style=&quot;font-size: x-large;&quot;&gt;&lt;span style=&quot;color: #666666;&quot;&gt;&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;/p&gt;&lt;p style=&quot;margin-left: 40px; text-align: left;&quot;&gt;&lt;span style=&quot;font-size: x-large;&quot;&gt;&lt;span style=&quot;color: #666666;&quot;&gt;■&lt;span style=&quot;color: #04ff00;&quot;&gt;■■&lt;/span&gt;&lt;span style=&quot;color: #bf9000;&quot;&gt;■&lt;/span&gt;■&lt;/span&gt;&lt;/span&gt; (two letters right, one mispositioned),&lt;/p&gt;&lt;p style=&quot;margin-left: 40px; text-align: left;&quot;&gt;&lt;span style=&quot;font-size: x-large;&quot;&gt;&lt;span style=&quot;color: #04ff00;&quot;&gt;■■■■■&lt;/span&gt;&lt;/span&gt; (all letters right).&lt;/p&gt;There are 3&lt;sup&gt;5&lt;/sup&gt;=243 possible answers. From an information-theoretic point of view, the word we are trying to guess is a random variable (selected from a predefined dictionary), and the information we are obtaining by submitting our query is measured by the &lt;a href=&quot;https://mathworld.wolfram.com/Entropy.html&quot; target=&quot;_blank&quot;&gt;entropy&lt;/a&gt; formula&lt;br /&gt;&lt;div&gt;&lt;p style=&quot;text-align: center;&quot;&gt;&lt;i&gt;H&lt;/i&gt;(guess) = &lt;span&gt;− &lt;/span&gt; ∑ &lt;i&gt;p&lt;sub&gt;i&lt;/sub&gt;&lt;/i&gt; log&lt;sub&gt;2&lt;/sub&gt; &lt;i&gt;p&lt;sub&gt;i&lt;/sub&gt;&lt;/i&gt; bits,&lt;br /&gt;&lt;/p&gt;&lt;p&gt;where &lt;i&gt;p&lt;sub&gt;i&lt;/sub&gt;&lt;/i&gt; is the probability that the game returns the &lt;i&gt;i&lt;/i&gt;-th answer (&lt;i&gt;i&lt;/i&gt; = 1, ... 
, 243) for our particular guess. So, the best first guess is the one for which we get the most information, that is, the associated entropy is maximum. Intuitively speaking, we are going for the guess that yields the most balanced partition of the dictionary words as grouped by their matching result: entropy is maximum when all &lt;i&gt;p&lt;sub&gt;i&lt;/sub&gt;&lt;/i&gt; are equal (this is impossible for our problem, but gives an upper bound on the attainable entropy of log&lt;sub&gt;2&lt;/sub&gt;(243) = 7.92 bits).&lt;br /&gt;&lt;/p&gt;&lt;p&gt;Let&#39;s then compute the best guesses. Wordle uses a dictionary of 2,315 entries which is, unfortunately, not disclosed; in its place we will resort to the &lt;a href=&quot;https://www-cs-faculty.stanford.edu/%7Eknuth/sgb.html&quot; target=&quot;_blank&quot;&gt;Stanford GraphBase list&lt;/a&gt;. I wrote a trivial &lt;a href=&quot;https://github.com/joaquintides/bannalia/blob/master/wordle.cpp&quot; target=&quot;_blank&quot;&gt;C++17 program&lt;/a&gt; that goes through each of the 5,757 words of Stanford&#39;s list and computes its associated entropy as a first guess (see it &lt;a href=&quot;http://coliru.stacked-crooked.com/a/8867de8adef60b13&quot; target=&quot;_blank&quot;&gt;running online&lt;/a&gt;). 
The resulting top 10 best words, along with their entropies are:&lt;/p&gt;&lt;p style=&quot;text-align: center;&quot;&gt;TARES &amp;nbsp;&amp;nbsp; 6.20918&lt;br /&gt;RATES&amp;nbsp;&amp;nbsp;&amp;nbsp; 6.11622&lt;br /&gt;TALES &amp;nbsp;&amp;nbsp; 6.09823&lt;br /&gt;TEARS &amp;nbsp;&amp;nbsp; 6.05801&lt;br /&gt;NARES &amp;nbsp;&amp;nbsp; 6.01579&lt;br /&gt;TIRES&amp;nbsp;&amp;nbsp;&amp;nbsp; 6.01493&lt;br /&gt;REALS &amp;nbsp;&amp;nbsp; 6.00117&lt;br /&gt;DARES &amp;nbsp;&amp;nbsp; 5.99343&lt;br /&gt;LORES &amp;nbsp;&amp;nbsp; 5.99031&lt;br /&gt;TRIES&amp;nbsp;&amp;nbsp;&amp;nbsp; 5.98875&lt;br /&gt;&lt;/p&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;&lt;/div&gt;</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/8276673301208999191/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2022/01/start-wordle-with-tares.html#comment-form' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/8276673301208999191'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/8276673301208999191'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2022/01/start-wordle-with-tares.html' title='Start Wordle with TARES'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' 
src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-5086479727356436454</id><published>2016-09-08T00:01:00.000+02:00</published><updated>2016-09-08T18:19:35.407+02:00</updated><title type='text'>Global warming as falling into the Sun</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
This summer in Spain has been so particularly hot that people came up with graphical jokes like this:&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKGoF-5KOCpt0D76PnwpvuHTd5FkpSr8QO1CfFOrKHyCzhUZDnkMfRkFjqubcfXVNb1pmmKPfylbIZ4Zf1lE8Iq51-bmbsuTkmRxiLW9IZn3TacO6YVspV2nA_SHlqOCntQXv_gySLwpk/s1600/c%25C3%25A1ceres.jpg&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;250&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKGoF-5KOCpt0D76PnwpvuHTd5FkpSr8QO1CfFOrKHyCzhUZDnkMfRkFjqubcfXVNb1pmmKPfylbIZ4Zf1lE8Iq51-bmbsuTkmRxiLW9IZn3TacO6YVspV2nA_SHlqOCntQXv_gySLwpk/s1600/c%25C3%25A1ceres.jpg&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
(&lt;a href=&quot;https://en.wikipedia.org/wiki/C%C3%A1ceres,_Spain&quot;&gt;Cáceres&lt;/a&gt; is my hometown; versions of this picture for many other Spanish populations swarm the net.) Pursuing this idea half-seriously, one can reason that an increase in global temperatures due to climate change might be journalistically equated with the Earth getting closer to the Sun and thus receiving more radiation, which analogy conjures up doomy visions of our planet falling into the blazing hell of the star: let us do the calculations.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;a href=&quot;https://en.wikipedia.org/wiki/Climate_sensitivity&quot;&gt;&lt;i&gt;Climate sensitivity&lt;/i&gt;&lt;/a&gt;, usually denoted by &lt;i&gt;λ&lt;/i&gt;, links changes in global surface temperature with variations of received radiative power&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
Δ&lt;i&gt;T&lt;/i&gt; = &lt;i&gt;λ&lt;/i&gt; Δ&lt;i&gt;W&lt;/i&gt;.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The mechanism by which radiative power changes (increased albedo, greenhouse effect) determines the associated &lt;i&gt;λ&lt;/i&gt; parameter. For the case of power variations due to changes in solar activity, &lt;a href=&quot;http://www.calpoly.edu/~camp/Publications/Tung_etal_GRL_2008.pdf&quot;&gt;Tung et al.&lt;/a&gt; have calculated &lt;i&gt;λ&lt;/i&gt;&lt;sub&gt;&lt;i&gt;s&lt;/i&gt;&lt;/sub&gt; to be in the range of 0.69 to 0.97 K/(W/m&lt;sup&gt;2&lt;/sup&gt;) using data from observations of 11-year solar cycles, and estimate that the stationary sensitivity (i.e. if the change in power were permanent) would be 1.5 times higher, thus in the range of 1.03 to 1.45 K/(W/m&lt;sup&gt;2&lt;/sup&gt;).&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Now, the Earth is &lt;i&gt;D&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt; =  &lt;a href=&quot;https://en.wikipedia.org/wiki/Earth%27s_orbit&quot;&gt;1.496 × 10&lt;sup&gt;8&lt;/sup&gt; km&lt;/a&gt; away from the Sun, and receives an average radiation of &lt;i&gt;W&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt; = &lt;a href=&quot;https://en.wikipedia.org/wiki/Solar_irradiance#Earth&quot;&gt;1366 W/m&lt;sup&gt;2&lt;/sup&gt;&lt;/a&gt;. Assuming &lt;a href=&quot;https://en.wikipedia.org/wiki/Near_and_far_field&quot;&gt;far-field conditions&lt;/a&gt;, the radiative power received at the Earth as a function of the distance &lt;i&gt;D&lt;/i&gt; to the Sun is then&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;i&gt;W&lt;/i&gt; = &lt;i&gt;w&lt;/i&gt; / &lt;i&gt;D&lt;/i&gt;&lt;sup&gt;2&lt;/sup&gt;,&lt;br /&gt;
&lt;i&gt;w&lt;/i&gt; = 3.057 × 10&lt;sup&gt;25&lt;/sup&gt; W/sr,&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
which allows us to calculate Δ&lt;i&gt;T&lt;/i&gt; = &lt;i&gt;λ&lt;/i&gt;&lt;sub&gt;&lt;i&gt;s&lt;/i&gt;&lt;/sub&gt; Δ&lt;i&gt;W&lt;/i&gt; from Δ&lt;i&gt;D&lt;/i&gt; = &lt;i&gt;D&lt;/i&gt;&lt;sub&gt;0&lt;/sub&gt; − &lt;i&gt;D&lt;/i&gt;, as shown in the graph for the minimum and maximum estimated values of &lt;i&gt;λ&lt;/i&gt;&lt;sub&gt;&lt;i&gt;s&lt;/i&gt;&lt;/sub&gt;. Although this cannot be checked visually, the lines are not straight but include a negligible (in these distance ranges) quadratic component.&lt;i&gt; &lt;/i&gt;&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVz_wMqnKG26xlDie5T-b7shYk1KgQmFWsHj7NFcemg2qbgksHC7lHjNXCbOgL9IBQrwiDvwe8gUwobQiFl2cgEETGLUrT1KXUy_abJ-i7ys3yNpXKWbvwOTyqkEE42gnnKqPWy33keMU/s1600/deltat_deltad.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;296&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgVz_wMqnKG26xlDie5T-b7shYk1KgQmFWsHj7NFcemg2qbgksHC7lHjNXCbOgL9IBQrwiDvwe8gUwobQiFl2cgEETGLUrT1KXUy_abJ-i7ys3yNpXKWbvwOTyqkEE42gnnKqPWy33keMU/s1600/deltat_deltad.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
So, the estimated increase of &lt;a href=&quot;https://en.wikipedia.org/wiki/Global_warming#Observed_temperature_changes&quot;&gt;0.75 °C in global temperature during the 20th century&lt;/a&gt; is equivalent to pushing the Earth between 30 and 40 thousand kilometers towards the Sun. Each extra °C brings us 38,000-54,000 km closer to the star. For those stuck with &lt;a href=&quot;https://en.wikipedia.org/wiki/United_States_customary_units&quot;&gt;USCS&lt;/a&gt;, each °F is equivalent to 13,000-18,000 miles.&lt;br /&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
As an alarmist meme, the figure works poorly since no amount of global warming will translate to anything resembling &quot;falling into the Sun&quot;: relative changes in distance are measured in &lt;a href=&quot;https://en.wikipedia.org/wiki/Basis_point#Permyriad&quot;&gt;permyriads&lt;/a&gt;. And, yes, the joke at the beginning of this article is definitely a gross exaggeration.&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/5086479727356436454/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2016/09/global-warming-as-falling-into-sun.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5086479727356436454'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5086479727356436454'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2016/09/global-warming-as-falling-into-sun.html' title='Global warming as falling into the Sun'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhKGoF-5KOCpt0D76PnwpvuHTd5FkpSr8QO1CfFOrKHyCzhUZDnkMfRkFjqubcfXVNb1pmmKPfylbIZ4Zf1lE8Iq51-bmbsuTkmRxiLW9IZn3TacO6YVspV2nA_SHlqOCntQXv_gySLwpk/s72-c/c%25C3%25A1ceres.jpg" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-4186541128035191485</id><published>2016-09-06T21:24:00.000+02:00</published><updated>2016-09-07T12:44:59.047+02:00</updated><title type='text'>Compile-time checking the existence of a class template</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
(Updated after a suggestion from &lt;a href=&quot;https://www.reddit.com/r/cpp/comments/51gxb3/compiletime_checking_the_existence_of_a_class/d7bvgij&quot;&gt;bluescarni&lt;/a&gt;.) I recently had to use C++14&#39;s &lt;a href=&quot;http://en.cppreference.com/w/cpp/types/is_final&quot;&gt;&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::is_final&lt;/span&gt;&lt;/a&gt; but wanted to downgrade to &lt;a href=&quot;http://www.boost.org/libs/type_traits/doc/html/boost_typetraits/reference/is_final.html&quot;&gt;&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;boost::is_final&lt;/span&gt;&lt;/a&gt; if the former was not available. Trusting &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;__cplusplus&lt;/span&gt; implies overlooking the fact that compilers never provide 100% support for any version of the language, and&amp;nbsp; &lt;a href=&quot;http://www.boost.org/libs/config/doc/html/index.html&quot;&gt;Boost.Config&lt;/a&gt; is usually helpful with these matters, but, as of this writing, it does not provide any macro to check for the existence of &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::is_final&lt;/span&gt;. It turns out the matter can be investigated with some compile-time manipulations. We first set up some helping machinery in a namespace of our own:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;namespace std_is_final_exists_detail{
    
template&amp;lt;typename&amp;gt; struct is_final{};

struct helper{};

}&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std_is_final_exists_detail::is_final&lt;/span&gt; has the same signature as the (possibly existing) &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::is_final&lt;/span&gt; homonym, but need not implement any of the functionality since it will be used for detection only. The class &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;helper&lt;/span&gt; is now used to write code directly into &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;namespace std&lt;/span&gt;, as the rules of the language allow (and, in some cases, encourage) us to specialize standard class templates for our own types, like for instance with &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::hash&lt;/span&gt;:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;namespace std{

template&amp;lt;&amp;gt;
struct hash&amp;lt;std_is_final_exists_detail::helper&amp;gt;
{
  std::size_t operator()(
    const std_is_final_exists_detail::helper&amp;amp;)const{return 0;}
      
  static constexpr bool check()
  {
    using helper=std_is_final_exists_detail::helper;
    using namespace std_is_final_exists_detail;
    
    return
      !std::is_same&amp;lt;
        is_final&amp;lt;helper&amp;gt;,
        std_is_final_exists_detail::is_final&amp;lt;helper&amp;gt;&amp;gt;::value;
  }
};

}
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;operator()&lt;/span&gt; is defined to nominally comply with the expected semantics of &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::hash&lt;/span&gt; specialization; it is in &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;check&lt;/span&gt; that the interesting work happens. By a non-obvious but totally sensible C++ rule, the directive&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;using namespace std_is_final_exists_detail;
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
makes all the symbols of the namespace (including &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;is_final&lt;/span&gt;) visible as if they were declared in the &lt;i&gt;nearest namespace containing both &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std_is_final_exists_detail&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std&lt;/span&gt;&lt;/i&gt;, that is, at global namespace level. This means that the unqualified use of &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;is_final&lt;/span&gt; in&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;!std::is_same&amp;lt;
  is_final&amp;lt;helper&amp;gt;,...
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
resolves to &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::is_final&lt;/span&gt; if it exists (as it is within namespace &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std&lt;/span&gt;, i.e. closer than the global level), and to &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std_is_final_exists_detail::is_final&lt;/span&gt; otherwise. We can wrap everything up in a utility class:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;using std_is_final_exists=std::integral_constant&amp;lt;
  bool,
  std::hash&amp;lt;std_is_final_exists_detail::helper&amp;gt;::check()
&amp;gt;;
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
and check with a program&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;#include &amp;lt;iostream&amp;gt;

int main()
{
  std::cout&amp;lt;&amp;lt;&quot;std_is_final_exists: &quot;
           &amp;lt;&amp;lt;std_is_final_exists::value&amp;lt;&amp;lt;&quot;\n&quot;;
}
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
that dutifully outputs&lt;/div&gt;
&lt;pre&gt;std_is_final_exists: 0
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
with &lt;a href=&quot;http://coliru.stacked-crooked.com/a/697c9c7ead1c4711&quot;&gt;GCC in &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;-std=c++11&lt;/span&gt;&lt;/a&gt; mode and&lt;/div&gt;
&lt;pre&gt;std_is_final_exists: 1
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
with &lt;a href=&quot;http://coliru.stacked-crooked.com/a/22416d0e7347a51c&quot;&gt;&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;-std=c++14&lt;/span&gt;&lt;/a&gt;. Clang and Visual Studio also handle this code properly.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
(Updated Sep 7, 2016.) The same technique can be used to walk the last mile and implement an &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;is_final&lt;/span&gt; type trait class relying on &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::is_final&lt;/span&gt; but falling back to &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;boost::is_final&lt;/span&gt; if the former is not present. I&#39;ve slightly changed naming and used &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::is_void&lt;/span&gt; for the specialization trick as it involves a little less typing.&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;#include &amp;lt;boost/type_traits/is_final.hpp&amp;gt;
#include &amp;lt;type_traits&amp;gt;

namespace my_lib{
namespace is_final_fallback{

template&amp;lt;typename T&amp;gt; using is_final=boost::is_final&amp;lt;T&amp;gt;;

struct hook{};

}}

namespace std{

template&amp;lt;&amp;gt;
struct is_void&amp;lt;::my_lib::is_final_fallback::hook&amp;gt;:
  std::false_type
{      
  template&amp;lt;typename T&amp;gt;
  static constexpr bool is_final_f()
  {
    using namespace ::my_lib::is_final_fallback;
    return is_final&amp;lt;T&amp;gt;::value;
  }
};

} /* namespace std */

namespace my_lib{

template&amp;lt;typename T&amp;gt;
struct is_final:std::integral_constant&amp;lt;
  bool,
  std::is_void&amp;lt;is_final_fallback::hook&amp;gt;::template is_final_f&amp;lt;T&amp;gt;()
&amp;gt;{};

} /* namespace mylib */
&lt;/pre&gt;
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/4186541128035191485/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2016/09/compile-time-checking-existence-of.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/4186541128035191485'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/4186541128035191485'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2016/09/compile-time-checking-existence-of.html' title='Compile-time checking the existence of a class template'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-4027023004490328417</id><published>2016-07-28T13:17:00.000+02:00</published><updated>2018-01-26T10:29:42.481+01:00</updated><title type='text'>Passing capturing C++ lambda functions as function pointers</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
Suppose we have a function accepting a C-style callback function like this:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;void do_something(void (*callback)())
{
  ...
  callback();
}
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
As &lt;i&gt;captureless&lt;/i&gt; &lt;a href=&quot;http://en.cppreference.com/w/cpp/language/lambda&quot;&gt;C++ lambda functions&lt;/a&gt; can be cast to regular function pointers, the following works as expected:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;auto callback=[](){std::cout&amp;lt;&amp;lt;&quot;callback called\n&quot;;};
do_something(callback);

&lt;span class=&quot;nocode&quot;&gt;&lt;b&gt;output: callback called&lt;/b&gt;&lt;/span&gt;
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Unfortunately, if our callback code captures some variable from the context, we are out of luck:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;int num_callbacks=0;
...
auto callback=[&amp;amp;](){
  std::cout&amp;lt;&amp;lt;&quot;callback called &quot;&amp;lt;&amp;lt;++num_callbacks&amp;lt;&amp;lt;&quot; times \n&quot;;
};
do_something(callback);

&lt;span class=&quot;nocode&quot;&gt;&lt;b&gt;error: cannot convert &#39;main()::&amp;lt;lambda&amp;gt;&#39; to &#39;void (*)()&#39;&lt;/b&gt;&lt;/span&gt;
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
because capturing lambda functions create a &lt;i&gt;closure&lt;/i&gt; of the used context that needs to be carried around to the point of invocation. If we are allowed to modify &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;do_something&lt;/span&gt; we can easily circumvent the problem by accepting a more powerful &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::function&lt;/span&gt;-based callback: &lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;void do_something(std::function&amp;lt;void()&amp;gt; callback)
{
  ...
  callback();
}

int num_callbacks=0;
...
auto callback=[&amp;amp;](){
  std::cout&amp;lt;&amp;lt;&quot;callback called &quot;&amp;lt;&amp;lt;++num_callbacks&amp;lt;&amp;lt;&quot; times \n&quot;;
};
do_something(callback);

&lt;span class=&quot;nocode&quot;&gt;&lt;b&gt;output: callback called 1 times&lt;/b&gt;&lt;/span&gt;
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
but we want to explore the challenge when this is not available (maybe because &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;do_something&lt;/span&gt; is legacy C code, or because we do not want to incur the runtime penalty associated with &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::function&lt;/span&gt;&#39;s usage of dynamic memory). Typically, C-style callback APIs accept an additional callback argument through a type-erased &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;void*&lt;/span&gt;:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;void do_something(void(*callback)(void*),void* callback_arg)
{
  ...
  callback(callback_arg);
}
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
and this is actually the only bit we need to force our capturing lambda function through &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;do_something&lt;/span&gt;. The gist of the trick is passing the lambda function as the callback argument and providing a captureless &lt;a href=&quot;https://en.wikipedia.org/wiki/Thunk&quot;&gt;&lt;i&gt;thunk&lt;/i&gt;&lt;/a&gt; as the callback function pointer:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;int num_callbacks=0;
...
auto callback=[&amp;amp;](){
  std::cout&amp;lt;&amp;lt;&quot;callback called &quot;&amp;lt;&amp;lt;++num_callbacks&amp;lt;&amp;lt;&quot; times \n&quot;;
};
auto thunk=[](void* arg){ // note thunk is captureless
  (*static_cast&amp;lt;decltype(callback)*&amp;gt;(arg))();
};
do_something(thunk,&amp;amp;callback);

&lt;span class=&quot;nocode&quot;&gt;&lt;b&gt;output: callback called 1 times&lt;/b&gt;&lt;/span&gt;
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Note that we are not using dynamic memory nor doing any extra copying of the captured data, since &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;callback&lt;/span&gt; is accessed at the point of invocation through a pointer; so, this technique can be advantageous even if modern &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::function&lt;/span&gt;s could be used instead. The caveat is that the user code must make sure that captured data is alive when the callback is invoked (which is not the case when execution happens after scope exit if, for instance, it is carried out in a different thread).&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;b&gt;Postscript&lt;/b&gt; &lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;a href=&quot;https://www.reddit.com/r/cpp/comments/4v06wu/passing_capturing_c_lambda_functions_as_function/d5uce2g&quot;&gt;Tcbrindle poses the issue&lt;/a&gt; of lambda functions casting to function pointers with C++ linkage, where C linkage may be needed. Although this is rarely a problem in practice, it can be solved through another layer of indirection:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;extern &quot;C&quot; void do_something(
  void(*callback)(void*),void* callback_arg)
{
  ...
  callback(callback_arg);
}

...

using callback_pair=std::pair&amp;lt;void(*)(void*),void*&amp;gt;;

extern &quot;C&quot; void call_thunk(void * arg)
{
  callback_pair* p=static_cast&amp;lt;callback_pair*&amp;gt;(arg);
  p-&amp;gt;first(p-&amp;gt;second);
}
...
int num_callbacks=0;
...
auto callback=[&amp;amp;](){
  std::cout&amp;lt;&amp;lt;&quot;callback called &quot;&amp;lt;&amp;lt;++num_callbacks&amp;lt;&amp;lt;&quot; times \n&quot;;
};
auto thunk=[](void* arg){ // note thunk is captureless
  (*static_cast&amp;lt;decltype(callback)*&amp;gt;(arg))();
};
callback_pair p{thunk,&amp;amp;callback};
do_something(call_thunk,&amp;amp;p);

&lt;span class=&quot;nocode&quot;&gt;&lt;b&gt;output: callback called 1 times&lt;/b&gt;&lt;/span&gt;
&lt;/pre&gt;
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/4027023004490328417/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2016/07/passing-capturing-c-lambda-functions-as.html#comment-form' title='8 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/4027023004490328417'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/4027023004490328417'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2016/07/passing-capturing-c-lambda-functions-as.html' title='Passing capturing C++ lambda functions as function pointers'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>8</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-4097669839203364290</id><published>2016-02-21T20:21:00.000+01:00</published><updated>2016-02-22T21:26:31.615+01:00</updated><title type='text'>A formal definition of mutation independence</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
Louis Dionne poses the problem of &lt;a href=&quot;http://ldionne.com//2016/02/17/a-tentative-notion-of-move-independence/&quot;&gt;&lt;i&gt;move independence&lt;/i&gt;&lt;/a&gt; in the context of C++, that is, under which conditions a sequence of operations&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;f(std::move(x));
g(std::move(x));
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
is sound in the sense that the first does not interfere with the second. We give here a functional definition for this property that can be applied to the case Louis discusses.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Let &lt;i&gt;X&lt;/i&gt; be some type, and consider functions &lt;i&gt;f&lt;/i&gt;: &lt;i&gt;X&lt;/i&gt; &lt;span class=&quot;st&quot; data-hveid=&quot;49&quot;&gt;→ &lt;i&gt;T&lt;/i&gt;&lt;/span&gt;&lt;span class=&quot;_Tgc&quot;&gt;×&lt;i&gt;X&lt;/i&gt; and &lt;i&gt;g&lt;/i&gt;&lt;/span&gt;&lt;span class=&quot;_Tgc&quot;&gt;: &lt;i&gt;X&lt;/i&gt; &lt;span class=&quot;st&quot; data-hveid=&quot;49&quot;&gt;→ &lt;i&gt;Q&lt;/i&gt;&lt;/span&gt;&lt;span class=&quot;_Tgc&quot;&gt;×&lt;i&gt;X&lt;/i&gt;&lt;/span&gt;. The impurity of a non-functional construct in an imperative language such as C++ is captured in this functional setting by the fact that these functions return, besides the output value itself, a new, possibly changed, value of &lt;i&gt;X&lt;/i&gt;. We denote by &lt;i&gt;f&lt;sub&gt;T&lt;/sub&gt;&lt;/i&gt; and &lt;i&gt;f&lt;sub&gt;X&lt;/sub&gt;&lt;/i&gt; the projections of &lt;i&gt;f&lt;/i&gt;&amp;nbsp;onto &lt;i&gt;T&lt;/i&gt; and &lt;i&gt;X&lt;/i&gt;, respectively, and similarly for &lt;i&gt;g&lt;/i&gt;. We say that &lt;i&gt;f does not affect g&lt;/i&gt; if &lt;/span&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&amp;nbsp;&lt;i&gt;g&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;Q&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;i&gt;x&lt;/i&gt;) = &lt;i&gt;g&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;Q&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;f&lt;sub&gt;X&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;i&gt;x&lt;/i&gt;)) ∀&lt;i&gt;x&lt;/i&gt;∈&lt;i&gt;X&lt;/i&gt;.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
If we define the equivalence relation &lt;span class=&quot;_Tgc&quot;&gt;~&lt;i&gt;&lt;sub&gt;g&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt; on &lt;i&gt;X&lt;/i&gt; as&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;i&gt;x&lt;/i&gt; &lt;span class=&quot;_Tgc&quot;&gt;~&lt;i&gt;&lt;sub&gt;g&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt; &lt;i&gt;y&lt;/i&gt; iff &lt;i&gt;g&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;Q&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;i&gt;x&lt;/i&gt;) = &lt;i&gt;g&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;Q&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;i&gt;y&lt;/i&gt;),&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
then &lt;i&gt;f&lt;/i&gt; does not affect &lt;i&gt;g&lt;/i&gt; iff&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;f&lt;sub&gt;X&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;i&gt;x&lt;/i&gt;) &lt;span class=&quot;_Tgc&quot;&gt;~&lt;i&gt;&lt;sub&gt;g&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt; &lt;i&gt;x&lt;/i&gt; ∀&lt;i&gt;x&lt;/i&gt;∈&lt;i&gt;X&lt;/i&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
or&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;f&lt;sub&gt;X&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;([&lt;i&gt;x&lt;/i&gt;]&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;g&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;) ⊆ [&lt;i&gt;x&lt;/i&gt;]&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;g&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt; ∀&lt;i&gt;x&lt;/i&gt;∈&lt;i&gt;X&lt;/i&gt;,&lt;/div&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
where [&lt;i&gt;x&lt;/i&gt;]&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;g&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt; is the equivalence class of &lt;i&gt;x&lt;/i&gt; under &lt;span class=&quot;_Tgc&quot;&gt;~&lt;i&gt;&lt;sub&gt;g&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
We say that &lt;i&gt;f&lt;/i&gt; and &lt;i&gt;g&lt;/i&gt; are &lt;i&gt;mutation-independent&lt;/i&gt; if &lt;i&gt;f&lt;/i&gt; does not affect &lt;i&gt;g&lt;/i&gt; and &lt;i&gt;g&lt;/i&gt; does not affect &lt;i&gt;f&lt;/i&gt;, that is,&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;f&lt;sub&gt;X&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;([&lt;i&gt;x&lt;/i&gt;]&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;g&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;) ⊆ [&lt;i&gt;x&lt;/i&gt;]&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;g&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt; and &lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;g&lt;sub&gt;X&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;([&lt;i&gt;x&lt;/i&gt;]&lt;sub&gt;&lt;i&gt;f&lt;/i&gt;&lt;/sub&gt;) ⊆ [&lt;i&gt;x&lt;/i&gt;]&lt;sub&gt;&lt;i&gt;f&lt;/i&gt;&lt;/sub&gt; ∀&lt;i&gt;x&lt;/i&gt;∈&lt;i&gt;X&lt;/i&gt;.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The following considers the case of &lt;i&gt;f&lt;/i&gt; and &lt;i&gt;g&lt;/i&gt; acting on separate components of a tuple: suppose that &lt;i&gt;X&lt;/i&gt; = &lt;i&gt;X&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;1&lt;/sub&gt;×&lt;i&gt;X&lt;/i&gt;&lt;/span&gt;&lt;sub&gt;&lt;i&gt;2&lt;/i&gt;&lt;/sub&gt; and &lt;i&gt;f&lt;/i&gt; and &lt;i&gt;g&lt;/i&gt; depend on and mutate&amp;nbsp;&lt;i&gt;X&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;1&lt;/sub&gt;&lt;/span&gt; and &lt;i&gt;X&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;2&lt;/sub&gt;&lt;/span&gt; alone, respectively, or put more formally:&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;f&lt;sub&gt;T&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;1&lt;/sub&gt;&lt;/span&gt;,&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;2&lt;/sub&gt;&lt;/span&gt;) = &lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;f&lt;sub&gt;T&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;1&lt;/sub&gt;&lt;/span&gt;,&lt;i&gt;x&#39;&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;2&lt;/sub&gt;&lt;/span&gt;),&lt;br /&gt;
&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;f&lt;/i&gt;&lt;sub&gt;&lt;i&gt;X&lt;/i&gt;&lt;sub&gt;2&lt;/sub&gt;&lt;/sub&gt;&lt;/span&gt;(&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;1&lt;/sub&gt;&lt;/span&gt;,&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;2&lt;/sub&gt;&lt;/span&gt;) =&amp;nbsp;&lt;i&gt;x&lt;/i&gt;&lt;sub&gt;2&lt;/sub&gt;,&lt;br /&gt;
&lt;i&gt;g&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;Q&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;1&lt;/sub&gt;&lt;/span&gt;,&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;2&lt;/sub&gt;&lt;/span&gt;) = &lt;i&gt;g&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;&lt;sub&gt;Q&lt;/sub&gt;&lt;/i&gt;&lt;/span&gt;(&lt;i&gt;x&#39;&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;1&lt;/sub&gt;&lt;/span&gt;,&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;2&lt;/sub&gt;&lt;/span&gt;),&lt;br /&gt;
&lt;span class=&quot;_Tgc&quot;&gt;&lt;i&gt;g&lt;/i&gt;&lt;sub&gt;&lt;i&gt;X&lt;/i&gt;&lt;sub&gt;1&lt;/sub&gt;&lt;/sub&gt;&lt;/span&gt;(&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;1&lt;/sub&gt;&lt;/span&gt;,&lt;i&gt;x&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;2&lt;/sub&gt;&lt;/span&gt;) = &lt;i&gt;&lt;i&gt;x&lt;/i&gt;&lt;/i&gt;&lt;sub&gt;1&lt;/sub&gt; &lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
for all &lt;i&gt;x&lt;/i&gt;&lt;sub&gt;1&lt;/sub&gt;, &lt;i&gt;x&#39;&lt;/i&gt;&lt;sub&gt;1&lt;/sub&gt; ∈ &lt;i&gt;X&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;1&lt;/sub&gt;&lt;/span&gt;, &lt;i&gt;x&lt;/i&gt;&lt;sub&gt;2&lt;/sub&gt;, &lt;i&gt;x&#39;&lt;/i&gt;&lt;sub&gt;2&lt;/sub&gt; ∈ &lt;i&gt;X&lt;/i&gt;&lt;span class=&quot;_Tgc&quot;&gt;&lt;sub&gt;2&lt;/sub&gt;&lt;/span&gt;. Then &lt;i&gt;f&lt;/i&gt; and &lt;i&gt;g&lt;/i&gt; are mutation-independent (the proof is trivial). Getting back to C++, given a tuple &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;x&lt;/span&gt;, two operations of the form:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;f(std::get&amp;lt;i&amp;gt;(std::move(x)));
g(std::get&amp;lt;j&amp;gt;(std::move(x)));
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
are mutation-independent if &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;i!=j&lt;/span&gt;; this can be extended to the case where &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;f&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;g&lt;/span&gt; read from (but not write to) any component of &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;x&lt;/span&gt; except the &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;j&lt;/span&gt;-th and &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;i&lt;/span&gt;-th, respectively.&lt;/div&gt;
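The tuple case can be sketched concretely. Below is an illustrative mock-up (in Python for brevity; all names and values are invented for the example, not taken from the post): f mutates only the first component of the state and g only the second, so neither can change the other's output.

```python
# Sketch of the tuple case: f acts on component 0, g on component 1.
# All names and values here are illustrative, not from the original post.

def f(x):
    # f returns (output in T, new state); it mutates only x[0]
    x1, x2 = x
    return ("f saw " + str(x1), (x1 + 1, x2))

def g(x):
    # g returns (output in Q, new state); it mutates only x[1]
    x1, x2 = x
    return (x2 * 2, (x1, x2 - 1))

# f does not affect g: g_Q(x) == g_Q(f_X(x)) for this sample x
x = (3, 10)
_, fx = f(x)          # f_X(x)
gq_x, _ = g(x)        # g_Q(x)
gq_fx, _ = g(fx)      # g_Q(f_X(x))
assert gq_x == gq_fx

# and symmetrically, g does not affect f
_, gx = g(x)          # g_X(x)
ft_x, _ = f(x)        # f_T(x)
ft_gx, _ = f(gx)      # f_T(g_X(x))
assert ft_x == ft_gx
```

Since f's state change never touches the component g reads (and vice versa), the mutation-independence condition holds for every x, not just the sample checked here.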
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/4097669839203364290/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2016/02/a-formal-definition-of-mutation.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/4097669839203364290'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/4097669839203364290'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2016/02/a-formal-definition-of-mutation.html' title='A formal definition of mutation independence'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-1760849184030187902</id><published>2016-01-11T22:58:00.002+01:00</published><updated>2016-01-13T11:47:38.515+01:00</updated><title type='text'>(Oil+tax)-free Spanish gas prices 2014-15</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
We use the data gathered in our hysteresis analyses of Spanish gas prices for &lt;a href=&quot;http://bannalia.blogspot.com/2015/01/gas-price-hysteresis-spain-2014.html&quot;&gt;2014&lt;/a&gt; and &lt;a href=&quot;http://bannalia.blogspot.com/2016/01/gas-price-hysteresis-spain-2015.html&quot;&gt;2015&lt;/a&gt; to gain further insight into their dynamics. This is a simple breakdown of the gas (or gasoil) price:&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
Price = oil cost + other costs + taxes + margin.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
A barrel of crude oil is refined into several final products whose combined volume is approximately that of the original crude; that is, it takes roughly one liter of crude oil to produce one liter of gas (or gasoil). The simplest allocation model is to use the market Brent price as the oil cost for fuel production (we will see more realistic models later). If we remove taxes and oil cost, what remains in the fuel price is other costs plus margin. We plot this number for 95 octane gas and gasoil against the Brent oil price, all in c€/l, for the period 2014-2015:&lt;/div&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW-J7tZMD2YCiUSqzzdRYKR1ANSw_GhV9taYr0KZLODTOVBYnXXmDRz6Hz4YbAzhK2rUwO5lV5MEBfMc9hkePpknwp1dq0RIL95XO0PA-Ak9v-4kDSv-wZLH2krFEJfHvOS8TvXIRgplM/s1600/oil_tax_free_prices.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;265&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW-J7tZMD2YCiUSqzzdRYKR1ANSw_GhV9taYr0KZLODTOVBYnXXmDRz6Hz4YbAzhK2rUwO5lV5MEBfMc9hkePpknwp1dq0RIL95XO0PA-Ak9v-4kDSv-wZLH2krFEJfHvOS8TvXIRgplM/s1600/oil_tax_free_prices.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;b&gt;(Oil+tax)-free fuel price, simple cost allocation model [c€/l]&lt;br /&gt;
Brent oil cost [c€/l]&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
When we factor out crude oil cost, the remaining parts of the price increase moderately (~25% for gasoline, ~15% for gasoil). In a scenario of falling oil prices, direct oil costs as a percentage of tax-free fuel prices have consequently dropped from 70% to 50%:&lt;/div&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzDhbb7fqZV7AAENClAPQFapvynIyB-2YxVGPKiRrQMjiqZF7rHLopYZDueDQ-LkhyphenhyphenI6lmARPfGvM1B0xAWji4iOe_mWChZ8indnNhbEFixi9pEXEBL1GJu1yC0Kq4eEXhx0WN4syHZUA/s1600/oil_cost_contribution.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;265&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgzDhbb7fqZV7AAENClAPQFapvynIyB-2YxVGPKiRrQMjiqZF7rHLopYZDueDQ-LkhyphenhyphenI6lmARPfGvM1B0xAWji4iOe_mWChZ8indnNhbEFixi9pEXEBL1GJu1yC0Kq4eEXhx0WN4syHZUA/s1600/oil_cost_contribution.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Oil direct cost / tax-free fuel price,&lt;/b&gt;&lt;b&gt;simple cost allocation model&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;b&gt;Value-based cost allocation&lt;/b&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Crude oil is refined into several final products, ranging from high-quality fuels to asphalt, plastics, etc. The &lt;a href=&quot;http://www.eia.gov/&quot;&gt;EIA&lt;/a&gt; provides typical &lt;a href=&quot;http://www.eia.gov/dnav/pet/pet_pnp_pct_dc_nus_pct_a.htm&quot;&gt;yield data for US refineries&lt;/a&gt; that we can use as a reasonable approximation to the Spanish case. The volume breakdown we are interested in is roughly:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Gas: 45%&lt;/li&gt;
&lt;li&gt;Gasoil: 30%&lt;/li&gt;
&lt;li&gt;Other products: 37% &lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
(Note that the sum is greater than 100% because additional components are mixed in the process). Now, as these products have very different prices in the market, it is natural to allocate oil costs proportionally to end-user value:&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
price&lt;sub&gt;total&lt;/sub&gt; = 45% price&lt;sub&gt;gasoline&lt;/sub&gt; + 30% price&lt;sub&gt;gasoil&lt;/sub&gt; + 37% price&lt;sub&gt;other&lt;/sub&gt; ,&lt;br /&gt;
cost&lt;sub&gt;gasoline&lt;/sub&gt; = cost&lt;sub&gt;oil&lt;/sub&gt; × price&lt;sub&gt;gasoline&lt;/sub&gt; / price&lt;sub&gt;total&lt;/sub&gt; ,&lt;br /&gt;
cost&lt;sub&gt;gasoil&lt;/sub&gt; = cost&lt;sub&gt;oil&lt;/sub&gt; × price&lt;sub&gt;gasoil&lt;/sub&gt; / price&lt;sub&gt;total&lt;/sub&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
(prices without taxes). Since it is difficult to obtain accurate data on prices for the remaining products, we consider two conventional scenarios where these products are valued at 50% and 25% of the average fuel price, respectively:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul&gt;
&lt;li&gt;A: price&lt;sub&gt;other&lt;/sub&gt; = 50% (price&lt;sub&gt;gasoline&lt;/sub&gt; + price&lt;sub&gt;gasoil&lt;/sub&gt;)/2&lt;/li&gt;
&lt;li&gt;B: price&lt;sub&gt;other&lt;/sub&gt; = 25% (price&lt;sub&gt;gasoline&lt;/sub&gt; + price&lt;sub&gt;gasoil&lt;/sub&gt;)/2&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
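To make the allocation arithmetic concrete, here is a small sketch (in Python) applying the formulas above; the prices and the oil cost are invented placeholders, not the actual Oil Bulletin or Brent data used in the analysis.

```python
# Value-based allocation of crude oil cost (sketch with invented prices).
# Yields taken from the EIA breakdown quoted above; prices are hypothetical, in c€/l.
price_gasoline = 55.0   # tax-free gasoline price (assumed)
price_gasoil   = 50.0   # tax-free gasoil price (assumed)
cost_oil       = 40.0   # Brent cost per liter of crude (assumed)

def allocated_costs(other_fraction):
    # other_fraction: 0.50 for scenario A, 0.25 for scenario B
    price_other = other_fraction * (price_gasoline + price_gasoil) / 2
    price_total = 0.45 * price_gasoline + 0.30 * price_gasoil + 0.37 * price_other
    cost_gasoline = cost_oil * price_gasoline / price_total
    cost_gasoil   = cost_oil * price_gasoil / price_total
    return cost_gasoline, cost_gasoil

a = allocated_costs(0.50)   # scenario A
b = allocated_costs(0.25)   # scenario B
```

Note that valuing the other products lower (scenario B) shrinks price_total and therefore shifts a larger share of the oil cost onto gasoline and gasoil.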
&lt;div style=&quot;text-align: left;&quot;&gt;
The figure depicts the resulting prices without oil costs or taxes (i.e., other costs plus margin):&lt;/div&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5Vh9hkbHtpNcDFHF5paeA_GRoMUsCrf5XKbVkAKlr_roaUI_vxCwbJkSr-cEuaidxyco-wHFQV1nyTWZ57GN6pGHm6EsU2l2zVa86dLl_vy81d2zAUxteJwdr10r1vh8icFfq_9dktfU/s1600/oil_tax_free_prices_value_cost_allocation.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;260&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5Vh9hkbHtpNcDFHF5paeA_GRoMUsCrf5XKbVkAKlr_roaUI_vxCwbJkSr-cEuaidxyco-wHFQV1nyTWZ57GN6pGHm6EsU2l2zVa86dLl_vy81d2zAUxteJwdr10r1vh8icFfq_9dktfU/s1600/oil_tax_free_prices_value_cost_allocation.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;b&gt;(Oil+tax)-free fuel price, value-based cost allocation [c€/l]&lt;br /&gt;
Brent oil cost [c€/l]&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Unlike our previous, naïve allocation model, here we see, in both scenarios A and B, that margins for gasoline and gasoil match very closely almost all the time: this can be taken as further indication that value-based cost allocation is indeed the model used by the gas companies themselves. Visual inspection reveals two insights:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul&gt;
&lt;li&gt;In the short term, margin fluctuations are countercyclical with respect to oil price. This might be due to an effort by companies to stabilize prices.&lt;/li&gt;
&lt;li&gt;Over the two-year period studied, margins grow &lt;i&gt;considerably&lt;/i&gt;, by around 30% in scenario A and 60% in scenario B. This trend was somewhat corrected in the second half of 2015, though.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The percentage contribution of oil costs to fuel prices (which, by virtue of the cost allocation model, is exactly the same for gasoline and gasoil) drops in 2014-15 from 75% to 55% (scenario A) and from 85% to 60% (scenario B).&lt;/div&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj1GVV0o5P2G2Y2AzaQ1PU3d9Wn0mu9Xh_DQvbmuqoDWRJ2-KANgxphsvQxIejuzZCmOKJ_rDRusQC1BE-8z0JmPfw9Ql8ZsEDrG302-YA11aMX2eGH_HLQvPtlezQRqwNeQG-jooPZkY/s1600/oil_cost_contribution_value_cost_allocation.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;256&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhj1GVV0o5P2G2Y2AzaQ1PU3d9Wn0mu9Xh_DQvbmuqoDWRJ2-KANgxphsvQxIejuzZCmOKJ_rDRusQC1BE-8z0JmPfw9Ql8ZsEDrG302-YA11aMX2eGH_HLQvPtlezQRqwNeQG-jooPZkY/s1600/oil_cost_contribution_value_cost_allocation.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Oil direct cost / tax-free fuel price, &lt;/b&gt;&lt;b&gt;value-based cost allocation&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/1760849184030187902/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2016/01/oiltax-free-spanish-gas-prices-2014-15.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/1760849184030187902'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/1760849184030187902'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2016/01/oiltax-free-spanish-gas-prices-2014-15.html' title='(Oil+tax)-free Spanish gas prices 2014-15'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgW-J7tZMD2YCiUSqzzdRYKR1ANSw_GhV9taYr0KZLODTOVBYnXXmDRz6Hz4YbAzhK2rUwO5lV5MEBfMc9hkePpknwp1dq0RIL95XO0PA-Ak9v-4kDSv-wZLH2krFEJfHvOS8TvXIRgplM/s72-c/oil_tax_free_prices.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-5955401195404938553</id><published>2016-01-11T20:53:00.000+01:00</published><updated>2016-01-11T20:53:58.985+01:00</updated><title type='text'>Gas price hysteresis, Spain 2015</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
We begin the new year redoing our &lt;a href=&quot;http://bannalia.blogspot.com/2015/01/gas-price-hysteresis-spain-2014.html&quot;&gt;hysteresis analysis&lt;/a&gt; for Spanish gas prices with data from 2015, obtained from the usual sources:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Retail gasoline and gasoil prices from the &lt;a href=&quot;http://ec.europa.eu/energy/observatory/oil/bulletin_en.htm&quot;&gt;European Commission Oil Bulletin&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Brent oil spot prices from the &lt;a href=&quot;http://www.eia.gov/dnav/pet/hist/LeafHandler.ashx?n=PET&amp;amp;s=RBRTE&amp;amp;f=D&quot;&gt;US Energy Information Administration&lt;/a&gt;.&lt;/li&gt;
&lt;li&gt;Euro to dollar exchange rates from &lt;a href=&quot;http://www.oanda.com/convert/fxhistory&quot;&gt;FXHistory&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The figure shows the weekly evolution during 2015 of Brent oil prices and the average tax-free retail prices of 95 octane gas and gasoil in Spain, all in c€ per liter.&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFkjKWadZ6HKHfqI9ceE5-Vl_VuJQ9kXjXpSpKq01zoS4divV0yShUXpN5R0aZCxeee2n9GQunF9elHAz3YbpDDrr9WuSnkZ1WPQBGybgOo502OH9PIm8D28Lq8hXoXaGGpOuYxJZbXR4/s1600/gas_prices.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;270&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFkjKWadZ6HKHfqI9ceE5-Vl_VuJQ9kXjXpSpKq01zoS4divV0yShUXpN5R0aZCxeee2n9GQunF9elHAz3YbpDDrr9WuSnkZ1WPQBGybgOo502OH9PIm8D28Lq8hXoXaGGpOuYxJZbXR4/s1600/gas_prices.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
For gasoline, the corresponding scatter plot of  Δ(gasoline price before taxes) against Δ(Brent price) is&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0185t0JI7RfJxe-5gbIxBKC0xqyqfVZ_N5eJwd-AwX345nuk7q0IIqZ_cWwljePTLdBoI5_fJLeBP6FolXrS2IlTI7u1q0Y9FRjbs3ntyN3gES4U3AkcUYoLrXRyjbtPV8RnHA9V9InE/s1600/dgasoline_dbrent.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;400&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi0185t0JI7RfJxe-5gbIxBKC0xqyqfVZ_N5eJwd-AwX345nuk7q0IIqZ_cWwljePTLdBoI5_fJLeBP6FolXrS2IlTI7u1q0Y9FRjbs3ntyN3gES4U3AkcUYoLrXRyjbtPV8RnHA9V9InE/s1600/dgasoline_dbrent.png&quot; width=&quot;393&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
with linear regressions for the entire graph and for the half-planes Δ(Brent price) ≥ 0 and ≤ 0, given by&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
overall → &lt;span style=&quot;font-style: italic;&quot;&gt;y&lt;/span&gt; = &lt;span style=&quot;font-style: italic;&quot;&gt;f&lt;/span&gt;&lt;sub&gt;&lt;/sub&gt;(&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;) = &lt;span style=&quot;font-style: italic;&quot;&gt;b&lt;/span&gt;&lt;sub&gt;&lt;/sub&gt; + &lt;span style=&quot;font-style: italic;&quot;&gt;m&lt;/span&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt; = −0.1210 + 0.2554&lt;span style=&quot;font-style: italic;&quot;&gt;x,&lt;/span&gt;&lt;br /&gt;
ΔBrent ≥ 0 → &lt;span style=&quot;font-style: italic;&quot;&gt;y&lt;/span&gt; = &lt;span style=&quot;font-style: italic;&quot;&gt;f&lt;/span&gt;&lt;sub&gt;+&lt;/sub&gt;(&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;) = &lt;span style=&quot;font-style: italic;&quot;&gt;b&lt;/span&gt;&lt;sub&gt;+&lt;/sub&gt; + &lt;span style=&quot;font-style: italic;&quot;&gt;m&lt;/span&gt;&lt;sub&gt;+&lt;/sub&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt; = 0.2866 − 0.0824&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;,&lt;br /&gt;
ΔBrent ≤ 0 → &lt;span style=&quot;font-style: italic;&quot;&gt;y&lt;/span&gt; = &lt;span style=&quot;font-style: italic;&quot;&gt;f&lt;/span&gt;&lt;sub&gt;−&lt;/sub&gt;(&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;) = &lt;span style=&quot;font-style: italic;&quot;&gt;b&lt;/span&gt;&lt;sub&gt;−&lt;/sub&gt; + &lt;span style=&quot;font-style: italic;&quot;&gt;m&lt;/span&gt;&lt;sub&gt;−&lt;/sub&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt; = 0.3552 + 0.4040&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Due to the outlier in the lower right corner (dated August 31), positive variations in oil price don&#39;t translate, on average, into positive increments in the price of gasoline. The most worrisome aspect is the fact that &lt;span style=&quot;font-style: italic;&quot;&gt;b&lt;/span&gt;&lt;sub&gt;+&lt;/sub&gt; and &lt;span style=&quot;font-style: italic;&quot;&gt;b&lt;/span&gt;&lt;sub&gt;−&lt;/sub&gt; are positive, which suggests an underlying trend to increase prices when oil is stable.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
For gasoil we have&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_V_DqVXcJuhpkAy9chupceKCS3H2voRlX-jLbd3qtPAdTUUZwVpgrbBMYxpS_QqXhb5enlExYLPRN84BucEuhlGOzlT_nSsVWqLVpZm9HJOUJsmm0gVBuxjt6j__-Ht5TdvlL57s6Uv4/s1600/dgasoil_dbrent.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;400&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj_V_DqVXcJuhpkAy9chupceKCS3H2voRlX-jLbd3qtPAdTUUZwVpgrbBMYxpS_QqXhb5enlExYLPRN84BucEuhlGOzlT_nSsVWqLVpZm9HJOUJsmm0gVBuxjt6j__-Ht5TdvlL57s6Uv4/s1600/dgasoil_dbrent.png&quot; width=&quot;393&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
with regressions&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
overall → &lt;span style=&quot;font-style: italic;&quot;&gt;y&lt;/span&gt; = &lt;span style=&quot;font-style: italic;&quot;&gt;f&lt;/span&gt;(&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;) = &lt;span style=&quot;font-style: italic;&quot;&gt;b&lt;/span&gt; + &lt;span style=&quot;font-style: italic;&quot;&gt;m&lt;/span&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt; = −0.0672 + 0.3538&lt;span style=&quot;font-style: italic;&quot;&gt;x,&lt;/span&gt;&lt;br /&gt;
ΔBrent ≥ 0 → &lt;span style=&quot;font-style: italic;&quot;&gt;y&lt;/span&gt; = &lt;span style=&quot;font-style: italic;&quot;&gt;f&lt;/span&gt;&lt;sub&gt;+&lt;/sub&gt;(&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;) = &lt;span style=&quot;font-style: italic;&quot;&gt;b&lt;/span&gt;&lt;sub&gt;+&lt;/sub&gt; + &lt;span style=&quot;font-style: italic;&quot;&gt;m&lt;/span&gt;&lt;sub&gt;+&lt;/sub&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt; = −0.2457 + 0.2013&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;,&lt;br /&gt;
ΔBrent ≤ 0 → &lt;span style=&quot;font-style: italic;&quot;&gt;y&lt;/span&gt; = &lt;span style=&quot;font-style: italic;&quot;&gt;f&lt;/span&gt;&lt;sub&gt;−&lt;/sub&gt;(&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;) = &lt;span style=&quot;font-style: italic;&quot;&gt;b&lt;/span&gt;&lt;sub&gt;−&lt;/sub&gt; + &lt;span style=&quot;font-style: italic;&quot;&gt;m&lt;/span&gt;&lt;sub&gt;−&lt;/sub&gt;&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt; = 0.2468 + 0.3956&lt;span style=&quot;font-style: italic;&quot;&gt;x&lt;/span&gt;.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Again, no &quot;rocket and feather&quot; effect here (in fact, &lt;span style=&quot;font-style: italic;&quot;&gt;m&lt;/span&gt;&lt;sub&gt;+&lt;/sub&gt; is slightly smaller than &lt;span style=&quot;font-style: italic;&quot;&gt;m&lt;/span&gt;&lt;sub&gt;−&lt;/sub&gt;). Variations around ΔBrent = 0 are fairly symmetrical and, seemingly, fair.&lt;/div&gt;
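The split regressions used throughout this analysis can be reproduced by fitting ordinary least squares separately on each half of the scatter plot. A minimal sketch (in Python, with made-up (ΔBrent, Δprice) pairs rather than the actual Oil Bulletin data):

```python
# Least-squares lines for dBrent >= 0 and for dBrent below 0 (illustrative data).

def fit_line(points):
    # ordinary least squares fit y = b + m*x over (x, y) pairs
    n = len(points)
    sx = sum(p[0] for p in points)
    sy = sum(p[1] for p in points)
    sxx = sum(p[0] * p[0] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return b, m

# made-up weekly (dBrent, dPrice) pairs, both in c€/l
data = [(-2.0, -0.5), (-1.0, 0.0), (0.5, 0.4), (1.5, 0.9), (2.0, 1.0)]
pos = [p for p in data if p[0] >= 0]
neg = [p for p in data if not p[0] >= 0]
b_pos, m_pos = fit_line(pos)
b_neg, m_neg = fit_line(neg)
```

A "rocket and feather" effect would show up here as m_pos noticeably larger than m_neg; the regressions quoted above show the opposite for 2015.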
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/5955401195404938553/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2016/01/gas-price-hysteresis-spain-2015.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5955401195404938553'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5955401195404938553'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2016/01/gas-price-hysteresis-spain-2015.html' title='Gas price hysteresis, Spain 2015'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFkjKWadZ6HKHfqI9ceE5-Vl_VuJQ9kXjXpSpKq01zoS4divV0yShUXpN5R0aZCxeee2n9GQunF9elHAz3YbpDDrr9WuSnkZ1WPQBGybgOo502OH9PIm8D28Lq8hXoXaGGpOuYxJZbXR4/s72-c/gas_prices.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-5192203284085864546</id><published>2015-12-28T19:54:00.000+01:00</published><updated>2015-12-29T10:12:39.306+01:00</updated><title type='text'>How likely?</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
Yesterday, the &lt;a href=&quot;http://cup.cat/&quot;&gt;CUP&lt;/a&gt; political party held a general assembly to determine whether or not to support &lt;a href=&quot;https://en.wikipedia.org/wiki/Artur_Mas_i_Gavarr%C3%B3&quot;&gt;Artur Mas&#39;s&lt;/a&gt; candidacy for President of the Catalonian regional government. The final voting round among 3,030 representatives ended in an &lt;a href=&quot;http://www.catalannewsagency.com/politics/item/cup-s-base-fails-to-reach-decision-on-mas-investiture&quot;&gt;exact 1,515/1,515 tie&lt;/a&gt;, leaving the question unresolved for the time being. Such an unexpected result has prompted a flurry of Internet activity about the mathematical probability of its occurrence.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The question &quot;how likely was this result to happen?&quot; is of course unanswerable without specifying the context (i.e. the probability space) in which we choose to frame the event. A plausible formulation is:&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
If a proportion &lt;i&gt;p&lt;/i&gt; of CUP voters are pro-Mas, how likely is it that a random sample of 3,030 individuals yields a 50/50 tie?&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The simple answer (assuming the number of CUP voters is much larger than 3,030) is &lt;i&gt;P&lt;sub&gt;p&lt;/sub&gt;&lt;/i&gt;(1,515 | 3,030), where &lt;i&gt;P&lt;sub&gt;p&lt;/sub&gt;&lt;/i&gt;(&lt;i&gt;n&lt;/i&gt; | &lt;i&gt;N&lt;/i&gt;) is the &lt;a href=&quot;http://mathworld.wolfram.com/BinomialDistribution.html&quot;&gt;binomial distribution&lt;/a&gt; of &lt;i&gt;N&lt;/i&gt; Bernoulli trials with probability &lt;i&gt;p&lt;/i&gt; resulting in exactly &lt;i&gt;n&lt;/i&gt; successes.&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghyVEbaFVN9DbRqSDTYwW4QvaTu0kV4PKny5zYdSg7TNVCxmQiUvrPaeSMYx0yt_L6GnRoA93qhquCq5DuZetifXFMbzrOk9OhSrN1q-QGBUiZC2LASqDQC-fzXoUiR1zIYUK3U-pXbxU/s1600/prob_tie.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;293&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghyVEbaFVN9DbRqSDTYwW4QvaTu0kV4PKny5zYdSg7TNVCxmQiUvrPaeSMYx0yt_L6GnRoA93qhquCq5DuZetifXFMbzrOk9OhSrN1q-QGBUiZC2LASqDQC-fzXoUiR1zIYUK3U-pXbxU/s1600/prob_tie.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The figure shows this value for 40% ≤ &lt;i&gt;p&lt;/i&gt; ≤ 60%. At &lt;i&gt;p&lt;/i&gt; = 50%, which without further information is our best estimate of the proportion of pro-Mas supporters among CUP voters, the probability of a tie is 1.45%. A deviation in &lt;i&gt;p&lt;/i&gt; of ±4% would have made this result virtually impossible.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
A slightly more interesting question is the following:&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
If a proportion &lt;i&gt;p&lt;/i&gt; of CUP voters are pro-Mas, how likely is a random sample of 3,030 individuals to misestimate the majority opinion? &lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
When &lt;i&gt;p&lt;/i&gt; is in the vicinity of 50%, there is a non-negligible probability that the assembly vote comes up with the wrong (i.e. against voters&#39; wishes) result. This probability is&lt;/div&gt;
&lt;div style=&quot;text-align: center;&quot;&gt;
&lt;i&gt;I&lt;sub&gt;p&lt;/sub&gt;&lt;/i&gt;(1,516, 1,515) if &lt;i&gt;p&lt;/i&gt; &amp;lt; 50%,&lt;br /&gt;
1 − &lt;i&gt;P&lt;sub&gt;p&lt;/sub&gt;&lt;/i&gt;(1,515 | 3,030) if &lt;i&gt;p&lt;/i&gt; = 50%, &lt;br /&gt;
&lt;i&gt;I&lt;/i&gt;&lt;sub&gt;1−&lt;/sub&gt;&lt;i&gt;&lt;sub&gt;p&lt;/sub&gt;&lt;/i&gt;(1,516, 1,515) if &lt;i&gt;p&lt;/i&gt; &amp;gt; 50%,&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
where &lt;i&gt;I&lt;sub&gt;p&lt;/sub&gt;&lt;/i&gt;(&lt;i&gt;a&lt;/i&gt;,&lt;i&gt;b&lt;/i&gt;) is the &lt;a href=&quot;http://mathworld.wolfram.com/RegularizedBetaFunction.html&quot;&gt;regularized beta function&lt;/a&gt;. The figure shows the corresponding graph for 3,030 representatives and 40% ≤ &lt;i&gt;p&lt;/i&gt; ≤ 60%.&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhk8NMCt-NPdO16rXpz-ptO4eGsSEAEI-In0aagXUu60VWrPPPZlP9OXGcBlLBqLw8g-F9KGGDjXr5tmDev0jti-kdcESqCBS-q2EV-Z-6EfPimzG_pAEl5q9cWvUOwht1KfcMS2x_A_cI/s1600/prob_misestimation.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;293&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhk8NMCt-NPdO16rXpz-ptO4eGsSEAEI-In0aagXUu60VWrPPPZlP9OXGcBlLBqLw8g-F9KGGDjXr5tmDev0jti-kdcESqCBS-q2EV-Z-6EfPimzG_pAEl5q9cWvUOwht1KfcMS2x_A_cI/s1600/prob_misestimation.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The function shows a discontinuity at the singular (and zero-probability) event &lt;i&gt;p&lt;/i&gt; = 50%, in which case the assembly will always yield the wrong result except in the previously studied situation of an exact tie (so, the probability of misestimation is 1 − 1.45% = 98.55%). Other than this, the likelihood of misestimation approaches 49%+ as &lt;i&gt;p&lt;/i&gt; tends to 50%. We have learnt that CUP voters are almost evenly divided between pro- and anti-Mas positions: if the difference between them is 0.7% or less, an assembly of 3,030 representatives such as the one held yesterday will fail to reflect the party&#39;s global position in more than 1 out of 5 cases.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/5192203284085864546/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2015/12/how-likely.html#comment-form' title='0 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5192203284085864546'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/5192203284085864546'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2015/12/how-likely.html' title='How likely?'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghyVEbaFVN9DbRqSDTYwW4QvaTu0kV4PKny5zYdSg7TNVCxmQiUvrPaeSMYx0yt_L6GnRoA93qhquCq5DuZetifXFMbzrOk9OhSrN1q-QGBUiZC2LASqDQC-fzXoUiR1zIYUK3U-pXbxU/s72-c/prob_tie.png" height="72" width="72"/><thr:total>0</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-2775924357848556949</id><published>2015-11-14T22:02:00.001+01:00</published><updated>2015-11-15T14:36:44.153+01:00</updated><title type='text'>SOA container for encapsulated C++ DOD</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
In a &lt;a href=&quot;http://bannalia.blogspot.com/2015/09/c-encapsulation-for-data-oriented.html&quot;&gt;previous entry&lt;/a&gt; we saw how to decouple the logic of a class from the access to its member data so that the latter can be laid out in a &lt;a href=&quot;https://www.youtube.com/watch?v=rX0ItVEVjHc&quot;&gt;DOD&lt;/a&gt;-friendly fashion for faster sequential processing. Instead of having a &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::vector&lt;/span&gt; of, say, particles, we can now store the different particle members (position, velocity, etc.) in separate containers. This unfortunately results in more cumbersome initialization code: whereas with the traditional OOP approach particle creation and access are compact and nicely localized:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;std::vector&amp;lt;plain_particle&amp;gt; pp_;
...
for(std::size_t i=0;i&amp;lt;n;++i){
  pp_.push_back(plain_particle(...));
}
...
render(pp_.begin(),pp_.end());
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
when using DOD, in contrast, the equivalent code grows linearly with the number of members, even if most of it is boilerplate: &lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;std::vector&amp;lt;char&amp;gt; color_;
std::vector&amp;lt;int&amp;gt;  x_,y_,dx_,dy_;
...
for(std::size_t i=0;i&amp;lt;n;++i){
  color_.push_back(...);
  x_.push_back(...);
  y_.push_back(...);
  dx_.push_back(...);
  dy_.push_back(...);  
}
...
auto beg_=make_pointer&amp;lt;particle&amp;gt;(
  access&amp;lt;color,x,y,dx,dy&amp;gt;(&amp;amp;color_[0],&amp;amp;x_[0],&amp;amp;y_[0],&amp;amp;dx_[0],&amp;amp;dy_[0]));
auto end_=beg_+n;
render(beg_,end_);
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
We would like to rely on a container using SOA (&lt;i&gt;structure of arrays&lt;/i&gt;) for its storage that allows us to retain our original OOP syntax:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;using access=dod::access&amp;lt;color,x,y,dx,dy&amp;gt;;
dod::vector&amp;lt;particle&amp;lt;access&amp;gt;&amp;gt; p_;
...
for(std::size_t i=0;i&amp;lt;n;++i){
  p_.emplace_back(...);
}
...
render(p_.begin(),p_.end());
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Note that particles are inserted into the container using &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;emplace_back&lt;/span&gt; rather than &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;push_back&lt;/span&gt;: this is because a &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;particle&lt;/span&gt; object (which &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;push_back&lt;/span&gt; accepts as its argument) cannot be created out of the blue without its constituent members being previously stored somewhere; &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;emplace_back&lt;/span&gt;, on the other hand, does not suffer from this chicken-and-egg problem.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The implementation of such a container class is fairly straightforward (limited here to the operations required to make the previous code work):&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;namespace dod{

template&amp;lt;typename Access&amp;gt;
class vector_base;

template&amp;lt;&amp;gt;
class vector_base&amp;lt;access&amp;lt;&amp;gt;&amp;gt;
{
protected:
  access&amp;lt;&amp;gt; data(){return {};}
  void emplace_back(){}
};

template&amp;lt;typename Member0,typename... Members&amp;gt;
class vector_base&amp;lt;access&amp;lt;Member0,Members...&amp;gt;&amp;gt;:
  protected vector_base&amp;lt;access&amp;lt;Members...&amp;gt;&amp;gt;
{
  using super=vector_base&amp;lt;access&amp;lt;Members...&amp;gt;&amp;gt;;
  using type=typename Member0::type;
  using impl=std::vector&amp;lt;type&amp;gt;;
  using size_type=typename impl::size_type;
  impl v;
  
protected:
  access&amp;lt;Member0,Members...&amp;gt; data()
  {
    return {v.data(),super::data()};
  }

  size_type size()const{return v.size();}

  template&amp;lt;typename Arg0,typename... Args&amp;gt;
  void emplace_back(Arg0&amp;amp;&amp;amp; arg0,Args&amp;amp;&amp;amp;... args){
    v.emplace_back(std::forward&amp;lt;Arg0&amp;gt;(arg0));
    try{
      super::emplace_back(std::forward&amp;lt;Args&amp;gt;(args)...);
    }
    catch(...){
      v.pop_back();
      throw;
    }
  }
};
  
template&amp;lt;typename T&amp;gt; class vector;
 
template&amp;lt;template &amp;lt;typename&amp;gt; class Class,typename Access&amp;gt; 
class vector&amp;lt;Class&amp;lt;Access&amp;gt;&amp;gt;:protected vector_base&amp;lt;Access&amp;gt;
{
  using super=vector_base&amp;lt;Access&amp;gt;;
  
public:
  using iterator=pointer&amp;lt;Class&amp;lt;Access&amp;gt;&amp;gt;;
  
  iterator begin(){return super::data();}
  iterator end(){return this-&amp;gt;begin()+super::size();}
  using super::emplace_back;
};

} // namespace dod
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dod::vector&amp;lt;Class&amp;lt;Members...&amp;gt;&amp;gt;&lt;/span&gt; derives from an implementation class that holds a &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::vector&lt;/span&gt; for each of the &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;Members&lt;/span&gt; declared. Inserting elements is just a simple matter of multiplexing to the vectors, and &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;begin&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;end&lt;/span&gt; return &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dod::pointer&lt;/span&gt;s to this structure of arrays. From the point of view of the user all the necessary magic is hidden by the framework and DOD processing becomes nearly identical in syntax to OOP.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
We provide a &lt;a href=&quot;https://www.dropbox.com/s/q4zbrvtxpymi8sk/dod_vector.cpp?dl=0&quot;&gt;test program&lt;/a&gt; that exercises &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dod::vector&lt;/span&gt; against the classical OOP approach based on a &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;std::vector&lt;/span&gt; of plain (i.e., non-DOD) particles. Results are the same as &lt;a href=&quot;http://bannalia.blogspot.com/2015/09/c-encapsulation-for-data-oriented.html&quot;&gt;previously discussed&lt;/a&gt; when we used DOD with manual initialization, that is, there is no abstraction penalty associated with using &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dod::vector&lt;/span&gt;, so we won&#39;t present any additional figures here. &lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The framework we have constructed so far provides the bare minimum needed to test the ideas presented. In order to be fully usable there are various aspects that should be expanded upon:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul&gt;
&lt;li&gt;&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;access&amp;lt;Members...&amp;gt;&lt;/span&gt; just considers the case where each member is stored separately. Sometimes the most efficient layout will call for mixed scenarios where some of the members are grouped together. This can be modelled, for instance, by having &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;member&lt;/span&gt; accept multiple pieces of data in its declaration.&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dod::pointer&lt;/span&gt; does not properly implement const access, that is, &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;pointer&amp;lt;const particle&amp;lt;...&amp;gt;&amp;gt;&lt;/span&gt; does not compile.&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dod::vector&lt;/span&gt; should be implemented to provide the full interface of a proper vector class. &lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
All of this can, in principle, be tackled without serious design difficulties.&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/2775924357848556949/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2015/11/soa-container-for-encapsulated-c-dod.html#comment-form' title='3 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/2775924357848556949'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/2775924357848556949'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2015/11/soa-container-for-encapsulated-c-dod.html' title='SOA container for encapsulated C++ DOD'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><thr:total>3</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-7164304846261577453</id><published>2015-09-06T12:13:00.000+02:00</published><updated>2015-09-09T08:27:20.819+02:00</updated><title type='text'>C++ encapsulation for Data-Oriented Design: performance</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
(Many thanks to &lt;a href=&quot;https://plus.google.com/+ManuS%C3%A1nchezManu343726/posts&quot;&gt;Manu Sánchez&lt;/a&gt; for his help with running tests and analyzing results.)&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
In a &lt;a href=&quot;http://bannalia.blogspot.com/2015/08/c-encapsulation-for-data-oriented-design.html&quot;&gt;past entry&lt;/a&gt;, we implemented a little C++ framework that allows us to do &lt;a href=&quot;https://www.youtube.com/watch?v=rX0ItVEVjHc&quot;&gt;DOD&lt;/a&gt; while retaining some of the encapsulation benefits and the general look and feel of traditional object-based programming. We complete here the framework by adding a critical piece from the point of view of usability, namely the ability to process sequences of DOD entities with as terse a syntax as we would have in OOP.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
To enable DOD for a particular class (like the &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;particle&lt;/span&gt; we used in the previous entry), i.e., to distribute its different data members in separate memory locations, we change the class source code to turn it into a class template &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;particle&amp;lt;Access&amp;gt;&lt;/span&gt; where &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;Access&lt;/span&gt; is a framework-provided entity in charge of granting access to the external data members with a similar syntax as if they were an integral part of the class itself. Now, &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;particle&amp;lt;Access&amp;gt;&lt;/span&gt; is no longer a regular class with &lt;a href=&quot;https://en.wikipedia.org/wiki/Value_semantics&quot;&gt;&lt;i&gt;value semantics&lt;/i&gt;&lt;/a&gt;, but a mere proxy to the external data without ownership of it. Importantly, it is the members and not the &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;particle&lt;/span&gt; objects that are stored: &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;particle&lt;/span&gt;s are constructed on the fly whenever their interface is needed to process the data. So, code like&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;for(const auto&amp;amp; p:particle_)p.render();
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
cannot possibly work because the application does not have any &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;particle_&lt;/span&gt; container to begin with: instead, the information is stored in separate locations:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;std::vector&amp;lt;char&amp;gt; color_;
std::vector&amp;lt;int&amp;gt;  x_,y_,dx_,dy_;
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
and &quot;traversing&quot; the particles requires that we go through the associated containers in parallel and invoke &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;render&lt;/span&gt; on a temporary &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;particle&lt;/span&gt; object constructed out of them:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;auto itc=&amp;amp;color_[0],ec=itc+color_.size();
auto itx=&amp;amp;x_[0];
auto ity=&amp;amp;y_[0];
auto itdx=&amp;amp;dx_[0];
auto itdy=&amp;amp;dy_[0];
  
for(;itc!=ec;++itc,++itx,++ity,++itdx,++itdy){
  auto p=make_particle(
    access&amp;lt;color,x,y,dx,dy&amp;gt;(itc,itx,ity,itdx,itdy));
  p.render();
}
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Fortunately, this boilerplate code can be hidden by the framework by using these auxiliary constructs:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename T&amp;gt; class pointer;

template&amp;lt;template &amp;lt;typename&amp;gt; class Class,typename Access&amp;gt;
class pointer&amp;lt;Class&amp;lt;Access&amp;gt;&amp;gt;
{
  // behaves as Class&amp;lt;Access&amp;gt;*
};

template&amp;lt;template &amp;lt;typename&amp;gt; class Class,typename Access&amp;gt;
pointer&amp;lt;Class&amp;lt;Access&amp;gt;&amp;gt; make_pointer(const Access&amp;amp; a)
{
  return pointer&amp;lt;Class&amp;lt;Access&amp;gt;&amp;gt;(a);
}
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
We won&#39;t delve into the implementation details of &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;pointer&lt;/span&gt; (the interested reader can see the actual code in the test program given below): from the point of view of the user, this utility class accepts an access entity (a collection of pointers to the data members plus an offset member, newly added to the previous version of the framework), keeps everything in sync when doing pointer arithmetic, and dereferences to a temporary &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;particle&lt;/span&gt; object. The resulting user code is as simple as it gets:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;auto n=color_.size();
auto beg_=make_pointer&amp;lt;particle&amp;gt;(access&amp;lt;color,x,y,dx,dy&amp;gt;(
  &amp;amp;color_[0],&amp;amp;x_[0],&amp;amp;y_[0],&amp;amp;dx_[0],&amp;amp;dy_[0]));
auto end_=beg_+n;
  
for(auto it=beg_;it!=end_;++it)it-&amp;gt;render();
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Index-based traversal is also possible:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;for(std::size_t i=0;i&amp;lt;n;++i)beg_[i].render();
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Once the containers are populated and &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;beg_&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;end_&lt;/span&gt; defined, user code can handle &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;particle&lt;/span&gt;s as if they were stored in [&lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;beg_&lt;/span&gt;, &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;end_&lt;/span&gt;), thus effectively isolated from the fact that the actual data is scattered around different containers for maximum processing performance.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Are we paying an abstraction penalty for the convenience this framework affords? There are two sources of concern:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Even though traversal code is &lt;i&gt;in principle&lt;/i&gt; equivalent to hand-written DOD code, compilers might not be able to optimize all the template scaffolding away.&lt;/li&gt;
&lt;li&gt;Traversing with &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;access&amp;lt;color,x,y,dx,dy&amp;gt;&lt;/span&gt; for rendering when only &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;color&lt;/span&gt;, &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;x&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;y&lt;/span&gt; are needed (because &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;render&lt;/span&gt; does not access &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;dx&lt;/span&gt; or &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;dy&lt;/span&gt;) involves iterating over &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;dx_&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;dy_&lt;/span&gt; without actually accessing either one: again, the compiler might or might not optimize this extra code.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
We provide a &lt;a href=&quot;https://www.dropbox.com/s/o3arzt9kbyhor4t/dod_perf.cpp?dl=0&quot;&gt;test program&lt;/a&gt; (Boost required) that measures the performance of this framework against some alternatives. The looped-over &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;render&lt;/span&gt; procedure simply updates a global variable so that resulting execution times are basically those of the produced iteration code. The different options compared are:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul style=&quot;list-style-type: none;&quot;&gt;
&lt;li&gt;&lt;span style=&quot;color: #010101;&quot;&gt;⬛&lt;/span&gt; &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;oop&lt;/span&gt;: iteration over a traditional object-based structure&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #ed2e2d;&quot;&gt;⬛&lt;/span&gt; &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;raw&lt;/span&gt;: hand-written data-processing loop&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #008c47;&quot;&gt;⬛&lt;/span&gt; &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;dod&lt;/span&gt;: DOD framework with &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;access&amp;lt;color,x,y,dx,dy&amp;gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #1859a9;&quot;&gt;⬛&lt;/span&gt; &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;render_dod&lt;/span&gt;: DOD framework with&amp;nbsp; &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;access&amp;lt;color,x,y&amp;gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #f37d22;&quot;&gt;⬛&lt;/span&gt; &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;oop[i]&lt;/span&gt;: index-based access instead of iterator traversal &lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #662c91;&quot;&gt;⬛&lt;/span&gt; &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;raw[i]&lt;/span&gt;: hand-written index-based loop&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #a11d20;&quot;&gt;⬛&lt;/span&gt; &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;dod[i]&lt;/span&gt;: index-based with &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;access&amp;lt;color,x,y,dx,dy&amp;gt;&lt;/span&gt;&lt;/li&gt;
&lt;li&gt;&lt;span style=&quot;color: #b33893;&quot;&gt;⬛&lt;/span&gt; &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;render_dod[i]&lt;/span&gt;: index-based with&amp;nbsp;&lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;access&amp;lt;color,x,y&amp;gt;&lt;/span&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The difference between &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;dod&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;render_dod&lt;/span&gt; (and the same applies to their index-based variants) is that the latter keeps access only to the data members strictly required by &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;render&lt;/span&gt;: if the compiler were not able to optimize unnecessary pointer manipulations in &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;dod&lt;/span&gt;, &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;render_dod&lt;/span&gt; would be expected to be faster; the drawback is that this would require fine tuning the access entity for each member function. &lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;a href=&quot;https://plus.google.com/+ManuS%C3%A1nchezManu343726/posts&quot;&gt;Manu Sánchez&lt;/a&gt; has set up an extensive testing environment to build and run the program using different compilers and machines:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://github.com/Manu343726/cpp-dod-tests&quot;&gt;GitHub repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The figures show the release-mode execution times of the eight options described above when traversing sequences of &lt;i&gt;n&lt;/i&gt; = 10&lt;sup&gt;4&lt;/sup&gt;, 10&lt;sup&gt;5&lt;/sup&gt;, 10&lt;sup&gt;6&lt;/sup&gt; and 10&lt;sup&gt;7&lt;/sup&gt; particles.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;b&gt;GCC 5.1, MinGW, Intel Core i7-4790k @4.5GHz&lt;/b&gt;&lt;/div&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggB6UIXA9mqqHHrbvjMELQrCxZLvBO45I1SMvEaHaIYEPwtMEiLbs48hxYxvCv7BY3FYIoueFBTUmj-ZKCbSw9DwaMJNyl62-UVxEZJ0rXdFzOeF5uiz0N3n8Z7WeaeglLieM-JH7XgAk/s1600/gcc5.1-i7.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;258&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggB6UIXA9mqqHHrbvjMELQrCxZLvBO45I1SMvEaHaIYEPwtMEiLbs48hxYxvCv7BY3FYIoueFBTUmj-ZKCbSw9DwaMJNyl62-UVxEZJ0rXdFzOeF5uiz0N3n8Z7WeaeglLieM-JH7XgAk/s1600/gcc5.1-i7.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Execution times / number of elements.&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
As expected, OOP is the slowest due to cache effects. The rest of the options are basically equivalent, which shows that GCC is able to entirely optimize away the syntactic niceties brought in by our DOD framework.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;b&gt;MSVC 14.0, Windows, Intel Core i7-4790k @4.5GHz&lt;/b&gt;&lt;/div&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdQFhMBQGjyHdHYpP6lZPl2QTbyKlkJWspCegAOj18JYaqSqyqiB_gtspMg39QucaWzSzqT-TH5vozGpWD3WYW2lEwTY_7rjXcWVi0DPRSwZkkCwtcTmVFJQi9r_7ma8rRtKJvb_HDf5s/s1600/msvc14-i7.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;258&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdQFhMBQGjyHdHYpP6lZPl2QTbyKlkJWspCegAOj18JYaqSqyqiB_gtspMg39QucaWzSzqT-TH5vozGpWD3WYW2lEwTY_7rjXcWVi0DPRSwZkkCwtcTmVFJQi9r_7ma8rRtKJvb_HDf5s/s1600/msvc14-i7.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Execution times / number of elements.&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Here, again, all DOD options are roughly equivalent, although &lt;span style=&quot;font-family: &amp;quot;Courier New&amp;quot;,Courier,monospace;&quot;&gt;raw&lt;/span&gt; (pointer-based hand-written loop) is slightly slower. Curiously enough, MSVC is much worse at optimizing DOD with respect to OOP than GCC is, with execution times up to 4 times higher for &lt;i&gt;n&lt;/i&gt; = 10&lt;sup&gt;4&lt;/sup&gt; and 1.3 times higher for &lt;i&gt;n&lt;/i&gt; = 10&lt;sup&gt;7&lt;/sup&gt;, the latter scenario being presumably dominated by cache effects. &lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;b&gt;GCC 5.2, Linux, AMD A6-1450 APU @1.0 GHz&lt;/b&gt;&lt;/div&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPIi8qAiM2NwdHfrOGJ6gbuomW4XAFgFT2hD7dP-ICaHBiZKj0eO7kMbDZ_dRKR986BxKONXIxjRO9IXoF8WWcPGgbeQf4guYa3v_YpifTr0fwyKIC_Dj86hSZdeNijWyQ5NTfiIq4FWU/s1600/gcc5.2-a6.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;258&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgPIi8qAiM2NwdHfrOGJ6gbuomW4XAFgFT2hD7dP-ICaHBiZKj0eO7kMbDZ_dRKR986BxKONXIxjRO9IXoF8WWcPGgbeQf4guYa3v_YpifTr0fwyKIC_Dj86hSZdeNijWyQ5NTfiIq4FWU/s1600/gcc5.2-a6.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Execution times / number of elements.&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
From a qualitative point of view, these results are in line with those obtained for GCC 5.1 under an Intel Core i7, although, as the AMD A6 is a much less powerful processor, execution times are higher (×8-10 for &lt;i&gt;n&lt;/i&gt; = 10&lt;sup&gt;4&lt;/sup&gt;, ×4-5.5 for &lt;i&gt;n&lt;/i&gt; = 10&lt;sup&gt;7&lt;/sup&gt;).&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;b&gt;Clang 3.6, Linux, AMD A6-1450 APU @1.0 GHz&lt;/b&gt;&lt;/div&gt;
&lt;table align=&quot;center&quot; cellpadding=&quot;0&quot; cellspacing=&quot;0&quot; class=&quot;tr-caption-container&quot; style=&quot;margin-left: auto; margin-right: auto; text-align: center;&quot;&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style=&quot;text-align: center;&quot;&gt;&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFnSY-Dnyo96rG2LYHZlfg68d7RIzvo1VU_9QNG4cVEFUBCaMew0TBS0CeMpG-Iqz1NTl-CoGtHTkbIcAwezLmrxZMBbEVuIsnrzWYjeqzR2pQozf3rNC3Mng3cDNdyPTsfKWrBktm7qs/s1600/clang3.6-a6.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: auto; margin-right: auto;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;258&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgFnSY-Dnyo96rG2LYHZlfg68d7RIzvo1VU_9QNG4cVEFUBCaMew0TBS0CeMpG-Iqz1NTl-CoGtHTkbIcAwezLmrxZMBbEVuIsnrzWYjeqzR2pQozf3rNC3Mng3cDNdyPTsfKWrBktm7qs/s1600/clang3.6-a6.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;tr-caption&quot; style=&quot;text-align: center;&quot;&gt;&lt;b&gt;Execution times / number of elements.&lt;/b&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
As with the rest of the compilers, the DOD options (both manual and framework-supported) perform equally well. However, the comparison with GCC 5.2 on the same machine shows important differences: iterator-based OOP is faster (×1.1-1.4) in Clang, index-based OOP yields the same results for both compilers, and the DOD options in Clang are consistently slower (×2.3-3.4) than in GCC, to the point that OOP outperforms them for low values of &lt;i&gt;n&lt;/i&gt;. A detailed analysis of the assembly code produced would probably give us more insight into these contrasting behaviors: interested readers can access the resulting assembly listings at the associated &lt;a href=&quot;https://github.com/Manu343726/cpp-dod-tests&quot;&gt;GitHub repository&lt;/a&gt;.&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/7164304846261577453/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2015/09/c-encapsulation-for-data-oriented.html#comment-form' title='9 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/7164304846261577453'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/7164304846261577453'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2015/09/c-encapsulation-for-data-oriented.html' title='C++ encapsulation for Data-Oriented Design: performance'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEggB6UIXA9mqqHHrbvjMELQrCxZLvBO45I1SMvEaHaIYEPwtMEiLbs48hxYxvCv7BY3FYIoueFBTUmj-ZKCbSw9DwaMJNyl62-UVxEZJ0rXdFzOeF5uiz0N3n8Z7WeaeglLieM-JH7XgAk/s72-c/gcc5.1-i7.png" height="72" width="72"/><thr:total>9</thr:total></entry><entry><id>tag:blogger.com,1999:blog-2715968472735546962.post-8894064543265654503</id><published>2015-08-31T22:03:00.000+02:00</published><updated>2015-12-11T18:57:41.989+01:00</updated><title type='text'>C++ encapsulation for Data-Oriented Design</title><content type='html'>&lt;div style=&quot;text-align: left;&quot;&gt;
Data-Oriented Design, or DOD for short, seeks to maximize efficiency by laying out data in such a way that their processing is as streamlined as possible. This often runs against the usual object-based principles, which naturally lead to grouping the information according to the user-domain entities that it models. Consider for instance a game where large quantities of particles are rendered and moved around:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;class particle
{  
  char  color;
  int   x;
  int   y;
  int   dx;
  int   dy;
public:

  static const int max_x=200;
  static const int max_y=100;
    
  particle(char color_,int x_,int y_,int dx_,int dy_):
    color(color_),x(x_),y(y_),dx(dx_),dy(dy_)
  {}

  void render()const
  {
    // for explanatory purposes only: dump to std::cout
    std::cout&amp;lt;&amp;lt;&quot;[&quot;&amp;lt;&amp;lt;x&amp;lt;&amp;lt;&quot;,&quot;&amp;lt;&amp;lt;y&amp;lt;&amp;lt;&quot;,&quot;&amp;lt;&amp;lt;int(color)&amp;lt;&amp;lt;&quot;]\n&quot;;
  }

  void move()
  {
    x+=dx;
    if(x&amp;lt;0){
      x*=-1;
      dx*=-1;
    }
    else if(x&amp;gt;max_x){
      x=2*max_x-x;
      dx*=-1;      
    }
    
    y+=dy;
    if(y&amp;lt;0){
      y*=-1;
      dy*=-1;
    }
    else if(y&amp;gt;max_y){
      y=2*max_y-y;
      dy*=-1;      
    }
  }
};
...
// game particles
std::vector&amp;lt;particle&amp;gt; particles;
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
In the rendering loop, the program might do:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;for(const auto&amp;amp; p:particles)p.render();
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Trivial as it seems, the execution speed of this approach is nevertheless suboptimal. The memory layout for &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;particles&lt;/span&gt; looks like: &lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt68uBCZZyMatk9BeUYH8OphLRNT-jPaRXuxEsaGKIRpG0dTNK-d7PZnFwnhEy1gTvqYiH83FlFS0TdUGMEt6DdQnrUm1ST8EP7F6wQfb4OoawNQZSt_DjoQbrsuUcvOdWf2f1QAHW4Pk/s1600/particles.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;13&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt68uBCZZyMatk9BeUYH8OphLRNT-jPaRXuxEsaGKIRpG0dTNK-d7PZnFwnhEy1gTvqYiH83FlFS0TdUGMEt6DdQnrUm1ST8EP7F6wQfb4OoawNQZSt_DjoQbrsuUcvOdWf2f1QAHW4Pk/s1600/particles.png&quot; width=&quot;400&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
which, when traversed in the rendering loop, results in 47% of the data cached by the CPU (the part corresponding to &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dx&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dy&lt;/span&gt;, in white) not being used, or even more if padding occurs. A more intelligent layout based on 5 different vectors&lt;/div&gt;
&lt;div class=&quot;separator&quot; style=&quot;clear: both; text-align: center;&quot;&gt;
&lt;a href=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgj5hGuhNiZfC3r6quhj7PbCMEngilxfSBwDWyOu5n3Zr5nFwDAz0Di5KFJ44LE9C4faTvegCthr3ZjLexN8dY7-0XpMYmhpqokyKRCB1UUghzioDN0TeLKg_XM9fqolN0mk5xaRe5ADU/s1600/particles+interlaced.png&quot; imageanchor=&quot;1&quot; style=&quot;margin-left: 1em; margin-right: 1em;&quot;&gt;&lt;img border=&quot;0&quot; height=&quot;131&quot; src=&quot;https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgj5hGuhNiZfC3r6quhj7PbCMEngilxfSBwDWyOu5n3Zr5nFwDAz0Di5KFJ44LE9C4faTvegCthr3ZjLexN8dY7-0XpMYmhpqokyKRCB1UUghzioDN0TeLKg_XM9fqolN0mk5xaRe5ADU/s200/particles+interlaced.png&quot; width=&quot;200&quot; /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
allows the needed data, and only this, to be cached in three parallel cache lines, thus maximizing occupancy and minimizing misses. For the moving loop, it is a different set of data vectors that must be provided. DOD is increasingly popular, in particular in very demanding areas such as game programming. Mike Acton&#39;s &lt;a href=&quot;https://www.youtube.com/watch?v=rX0ItVEVjHc&quot;&gt;presentation&lt;/a&gt; on DOD and C++ is an excellent introduction to the principles of data orientation.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The problem with DOD is that encapsulation is lost: rather than being nicely packed in contiguous chunks of memory whose lifetime management is heavily supported by the language rules, &quot;objects&quot; now live as virtual entities with disemboweled, scattered pieces of information floating around in separate data structures. Methods acting on the data need to publish the exact information they require as part of their interface, and it is the responsibility of the user to locate it and provide it. We want to explore ways to remedy this situation by allowing a modest level of object encapsulation compatible with DOD. &lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Roughly speaking, in C++ an object serves two different purposes:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Providing a public interface (a set of member functions) acting on the associated data.&lt;/li&gt;
&lt;li&gt;Keeping access to the data and managing its lifetime.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Both roles are mediated by the &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;this&lt;/span&gt; pointer. In fact, executing a member function on an object&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;x.f(args...);
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
is conceptually equivalent to invoking a function with an implicit extra argument&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;X::f(this,args...);
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
where the data associated to &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;x&lt;/span&gt;, assumed to be contiguous, is pointed to by &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;this&lt;/span&gt;. We can break this intermediation by letting objects be supplied with an &lt;i&gt;access&lt;/i&gt; entity replacing &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;this&lt;/span&gt; for the purpose of reaching out to the information. We begin with a purely syntactic device:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename T,int Tag=0&amp;gt;
struct member
{
  using type=T;
  static const int tag=Tag;
};
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;member&amp;lt;T,Tag&amp;gt;&lt;/span&gt; will be used to specify that a given class has a piece of information with type &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;T&lt;/span&gt;. &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;Tag&lt;/span&gt; is needed to tell apart different members of the same type (for instance, &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;particle&lt;/span&gt; has four different members of type &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;int&lt;/span&gt;, namely &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;x&lt;/span&gt;, &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;y&lt;/span&gt;, &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dx&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;dy&lt;/span&gt;). Now, the following class:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename Member&amp;gt;
class access
{
  using type=typename Member::type;
  type* p;

public:
  access(type* p):p(p){}
  
  type&amp;amp;       get(Member){return *p;}
  const type&amp;amp; get(Member)const{return *p;}
};
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
stores a pointer to the piece of data corresponding to the specified member. This can be easily expanded to accommodate more than one member:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename... Members&amp;gt;class access;

template&amp;lt;typename Member&amp;gt;
class access&amp;lt;Member&amp;gt;
{
  using type=typename Member::type;
  type* p;

public:
  access(type* p):p(p){}
  
  type&amp;amp;       get(Member){return *p;}
  const type&amp;amp; get(Member)const{return *p;}
};

template&amp;lt;typename Member0,typename... Members&amp;gt;
class access&amp;lt;Member0,Members...&amp;gt;:
  public access&amp;lt;Member0&amp;gt;,access&amp;lt;Members...&amp;gt;
{
public:
  template&amp;lt;typename Arg0,typename... Args&amp;gt;
  access(Arg0&amp;amp;&amp;amp; arg0,Args&amp;amp;&amp;amp;... args):
    access&amp;lt;Member0&amp;gt;(std::forward&amp;lt;Arg0&amp;gt;(arg0)),
    access&amp;lt;Members...&amp;gt;(std::forward&amp;lt;Args&amp;gt;(args)...)
  {}
  
  using access&amp;lt;Member0&amp;gt;::get;
  using access&amp;lt;Members...&amp;gt;::get;
};
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
To access, say, the data labeled as &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;member&amp;lt;int,0&amp;gt;&lt;/span&gt; we need to write &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;get(member&amp;lt;int,0&amp;gt;())&lt;/span&gt;. The price we have to pay for having data scattered around memory is that the access entity holds several pointers, one per member; on the other hand, the resulting objects, as we will see, really behave as on-the-fly proxies to their associated information, so access entities will seldom be stored. &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;particle&lt;/span&gt; can be rewritten so that data is accessed through a generic access object:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename Access&amp;gt;
class particle:Access
{
  using Access::get;
  
  using color=member&amp;lt;char,0&amp;gt;;
  using x=member&amp;lt;int,0&amp;gt;;
  using y=member&amp;lt;int,1&amp;gt;;
  using dx=member&amp;lt;int,2&amp;gt;;
  using dy=member&amp;lt;int,3&amp;gt;;

public:

  static const int max_x=200;
  static const int max_y=100;

  particle(const Access&amp;amp; a):Access(a){}

  void render()const
  {
    std::cout&amp;lt;&amp;lt;&quot;[&quot;&amp;lt;&amp;lt;get(x())&amp;lt;&amp;lt;&quot;,&quot;
      &amp;lt;&amp;lt;get(y())&amp;lt;&amp;lt;&quot;,&quot;&amp;lt;&amp;lt;int(get(color()))&amp;lt;&amp;lt;&quot;]\n&quot;;
  }

  void move()
  {
    get(x())+=get(dx());
    if(get(x())&amp;lt;0){
      get(x())*=-1;
      get(dx())*=-1;
    }
    else if(get(x())&amp;gt;max_x){
      get(x())=2*max_x-get(x());
      get(dx())*=-1;      
    }
    
    get(y())+=get(dy());
    if(get(y())&amp;lt;0){
      get(y())*=-1;
      get(dy())*=-1;
    }
    else if(get(y())&amp;gt;max_y){
      get(y())=2*max_y-get(y());
      get(dy())*=-1;      
    }
  }
};

template&amp;lt;typename Access&amp;gt;
particle&amp;lt;Access&amp;gt; make_particle(Access&amp;amp;&amp;amp; a)
{
  return particle&amp;lt;Access&amp;gt;(std::forward&amp;lt;Access&amp;gt;(a));
}
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The transformations that need to be done on the source code are not many:&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
&lt;ul&gt;
&lt;li&gt;Turn the class into a class template dependent on an access entity from which it derives.&lt;/li&gt;
&lt;li&gt;Rather than declaring internal data members, define the corresponding &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;member&lt;/span&gt; labels.&lt;/li&gt;
&lt;li&gt;Delete the former OOP constructors and define just one constructor taking an access object as its only argument.&lt;/li&gt;
&lt;li&gt;Replace mentions of data members by their corresponding access member function invocations (in the example, substitute &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;get(color())&lt;/span&gt; for &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;color&lt;/span&gt;, &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;get(x())&lt;/span&gt; for &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;x&lt;/span&gt;, etc.)&lt;/li&gt;
&lt;li&gt;For convenience&#39;s sake, provide a &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;make&lt;/span&gt; template function (in the example &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;make_particle&lt;/span&gt;) to simplify object creation.&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Observe how this works in practice:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;using color=member&amp;lt;char,0&amp;gt;;
using x=member&amp;lt;int,0&amp;gt;;
using y=member&amp;lt;int,1&amp;gt;;
using dx=member&amp;lt;int,2&amp;gt;;
using dy=member&amp;lt;int,3&amp;gt;;

char color_=5;
int  x_=20,y_=40,dx_=2,dy_=-1;

auto p=make_particle(access&amp;lt;color,x,y&amp;gt;(&amp;amp;color_,&amp;amp;x_,&amp;amp;y_));
auto q=make_particle(access&amp;lt;x,y,dx,dy&amp;gt;(&amp;amp;x_,&amp;amp;y_,&amp;amp;dx_,&amp;amp;dy_));
p.render();
q.move();
p.render();
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
The particle data now lives externally as a bunch of separate variables (or, in a more real-life scenario, stored in containers). &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;p&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;q&lt;/span&gt; act as proxies to the same information (i.e., they don&#39;t copy data internally) but other than this they provide the same interface as the OOP version of &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;particle&lt;/span&gt;, and can be used similarly. Note that the two objects specify different sets of access members, as required by &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;render&lt;/span&gt; and &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;move&lt;/span&gt;, respectively. So, the following&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;q.render(); // error
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
would result in a compile time error as &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;render&lt;/span&gt; accesses data that &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;q&lt;/span&gt; does not provide. Of course we can do&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;auto p=make_particle(
         access&amp;lt;color,x,y,dx,dy&amp;gt;(&amp;amp;color_,&amp;amp;x_,&amp;amp;y_,&amp;amp;dx_,&amp;amp;dy_)),
     q=p;
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
so that the resulting objects can take advantage of the entire &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;particle&lt;/span&gt; interface. In later entries we will see how this need not affect performance in traversal algorithms. A nice side effect of this technique is that, when extra data is added to a DOD class, existing code will continue to work as long as the new data is only used in new member functions of the class.&lt;/div&gt;
Implementing DOD enablement as a template policy also allows us to experiment with alternative access semantics. For instance, the &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;tuple_storage&lt;/span&gt; utility&lt;br /&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;template&amp;lt;typename Tuple,std::size_t Index,typename... Members&amp;gt;
class tuple_storage_base;

template&amp;lt;typename Tuple,std::size_t Index&amp;gt;
class tuple_storage_base&amp;lt;Tuple,Index&amp;gt;:public Tuple
{
  struct inaccessible{};
public:
  using Tuple::Tuple;
  
  void get(inaccessible);
  
  Tuple&amp;amp;       tuple(){return *this;}
  const Tuple&amp;amp; tuple()const{return *this;}
};

template&amp;lt;
  typename Tuple,std::size_t Index,
  typename Member0,typename... Members
&amp;gt;
class tuple_storage_base&amp;lt;Tuple,Index,Member0,Members...&amp;gt;:
  public tuple_storage_base&amp;lt;Tuple,Index+1,Members...&amp;gt;
{
  using super=tuple_storage_base&amp;lt;Tuple,Index+1,Members...&amp;gt;;
  using type=typename Member0::type;

public:
  using super::super;
  using super::get;
  
  type&amp;amp;       get(Member0)
                {return std::get&amp;lt;Index&amp;gt;(this-&amp;gt;tuple());}
  const type&amp;amp; get(Member0)const
                {return std::get&amp;lt;Index&amp;gt;(this-&amp;gt;tuple());}  
};

template&amp;lt;typename... Members&amp;gt;
class tuple_storage:
  public tuple_storage_base&amp;lt;
    std::tuple&amp;lt;typename Members::type...&amp;gt;,0,Members...
  &amp;gt;
{
  using super=tuple_storage_base&amp;lt;
    std::tuple&amp;lt;typename Members::type...&amp;gt;,0,Members...
  &amp;gt;;
  
public:
  using super::super;
};
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
can be used to replace the external access policy with an object containing the data proper:&lt;/div&gt;
&lt;pre class=&quot;prettyprint&quot;&gt;using storage=tuple_storage&amp;lt;color,x,y,dx,dy&amp;gt;;
auto r=make_particle(storage(3,100,10,10,-15));
auto s=r;
r.render();
r.move();
r.render();
s.render(); // different data than r
&lt;/pre&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
which effectively brings us back the old OOP class with ownership semantics. (Also, it is easy to implement an access policy on top of &lt;span style=&quot;font-family: &amp;quot;courier new&amp;quot; , &amp;quot;courier&amp;quot; , monospace;&quot;&gt;tuple_storage&lt;/span&gt; that gives proxy semantics for tuple-based storage. This is left as an exercise for the reader.) &lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
A C++11 &lt;a href=&quot;https://www.dropbox.com/s/b64983xsf10842y/dod.cpp?dl=0&quot;&gt;example program&lt;/a&gt; is provided that puts to use the ideas we have presented.&lt;/div&gt;
&lt;div style=&quot;text-align: left;&quot;&gt;
Traversal is at the core of DOD, as the paradigm is oriented towards handling large numbers of like objects. In a later entry we will extend this framework to provide for easy object traversal and measure the resulting performance as compared with OOP.&lt;/div&gt;
</content><link rel='replies' type='application/atom+xml' href='http://bannalia.blogspot.com/feeds/8894064543265654503/comments/default' title='Post Comments'/><link rel='replies' type='text/html' href='http://bannalia.blogspot.com/2015/08/c-encapsulation-for-data-oriented-design.html#comment-form' title='4 Comments'/><link rel='edit' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/8894064543265654503'/><link rel='self' type='application/atom+xml' href='http://www.blogger.com/feeds/2715968472735546962/posts/default/8894064543265654503'/><link rel='alternate' type='text/html' href='http://bannalia.blogspot.com/2015/08/c-encapsulation-for-data-oriented-design.html' title='C++ encapsulation for Data-Oriented Design'/><author><name>Joaquín M López Muñoz</name><uri>http://www.blogger.com/profile/08579853272674211100</uri><email>noreply@blogger.com</email><gd:image rel='http://schemas.google.com/g/2005#thumbnail' width='30' height='32' src='//blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhsQELdCKvvbAJlE3-wlqRubk6kDd-foQA2azxQcXT1PAYa222znXr5fl2nul3qAOqpwndAzsZYVPQlSV4bKRwBs9oR4fuj5C3MGsU-VvlnE62tx6wNdLCfu1YJlZV4vo4/s220/joaquin.jpg'/></author><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt68uBCZZyMatk9BeUYH8OphLRNT-jPaRXuxEsaGKIRpG0dTNK-d7PZnFwnhEy1gTvqYiH83FlFS0TdUGMEt6DdQnrUm1ST8EP7F6wQfb4OoawNQZSt_DjoQbrsuUcvOdWf2f1QAHW4Pk/s72-c/particles.png" height="72" width="72"/><thr:total>4</thr:total></entry></feed>