tag:blogger.com,1999:blog-7103837088726701212018-12-06T12:53:55.721-08:00Visualizing High Dimensional DataJames X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.comBlogger72125tag:blogger.com,1999:blog-710383708872670121.post-40385856972860182432017-06-13T13:38:00.000-07:002017-06-13T13:38:25.893-07:00On Strip Overfitting of Multidimensional Regression<div dir="ltr" style="text-align: left;" trbidi="on">Overfitting is a key characteristic of regression-style data modeling with deep neural networks. Whereas considerable effort has been devoted to finding methods to reduce overfitting, little has been done to address the question of how to measure overfitting. In most cases, overfitting is measured by <i>prediction error</i> with respect to a pre-selected test data set. This method has led to various, more or less successful, overfitting-reducing techniques.<br /><br />There are, however, some open questions relating to this rather ad hoc way of measuring overfitting. Firstly, the method requires that the test set does not participate in the training process. But most learning processes check repeatedly against the test data set to find optimal models or meta-parameters. In this way, test data set information "leaks" into the training process and compromises the separation between the training and testing data sets. Secondly, there are no clear guidelines for choosing the test data set. The method implicitly assumes that a randomly selected test data set will reflect the intrinsic statistics of the data model; this assumption can be far from true in practice. Thirdly, many problems encountered in practice have rather limited data; reserving part of the data as a test data set might significantly reduce the size of the training data set, and therefore reduce the accuracy of the regression model.<br /><br />In this note I'll present a simple way to quantify overfitting of regression-style neural networks. 
The main idea of the method is that it generates test data by <i>linear interpolation</i>, instead of choosing the test data from the training data. The following diagram illustrates the method:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-fAmibeg9Z3E/WT_7E_K97gI/AAAAAAAAw3o/bT9vXgt0mLscReYWRDJOJzabBXM_WVqdQCLcB/s1600/StripOverfitting.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="448" data-original-width="372" src="https://2.bp.blogspot.com/-fAmibeg9Z3E/WT_7E_K97gI/AAAAAAAAw3o/bT9vXgt0mLscReYWRDJOJzabBXM_WVqdQCLcB/s1600/StripOverfitting.png" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><b>Strip overfitting with respect to points A and B.</b></div><br />The above diagram shows a neural network learning a map from a high dimensional data set to the 2-dimensional plane. After completing the learning process, the network maps the two data points labeled A and B to A' and B', respectively. In order to measure the overfitting, we choose 3 evenly spaced linear interpolation points between A and B, and calculate their targets in the 2-dimensional plane with the learned network. Whereas A, P<sub>1</sub>, P<sub>2</sub>, P<sub>3</sub>, B form a straight line, the points A', P'<sub>1</sub>, P'<sub>2</sub>, P'<sub>3</sub>, B' normally don't form a straight line. We then define the <i>strip overfitting</i> w.r.t. 
A and B by the size of the strip induced by the points A', P'<sub>1</sub>, P'<sub>2</sub>, P'<sub>3</sub>, B', depicted in the above diagram as the shaded area.<br /><br />Based on the strip overfitting for a pair of data points, we can then estimate the overfitting for the whole training data set by the average strip overfitting over a fixed number of randomly selected data point pairs.<br /><br />It should be noted that the number of interpolation points provides a way to control the scale of the overfitting: the more interpolation points we choose, the finer the strip and the smaller the strip overfitting. When the target space is one-dimensional, the strip collapses to a straight line and always has size zero. In order to accommodate this special case, we append an index component to each value in the series A', P'<sub>1</sub>, P'<sub>2</sub>, P'<sub>3</sub>, B', so that we get a series of two-dimensional vectors (A',0), (P'<sub>1</sub>,1), (P'<sub>2</sub>,2), (P'<sub>3</sub>,3), (B',4); and we define its strip size as the strip overfitting for the one-dimensional target space.<br /><br />It should also be noted that the strip overfitting becomes zero if the mapping induced by the network preserves straight lines between the data points. That means linear regression would be considered a kind of ideal regression with zero overfitting. Since most data models in practice are non-linear, zero strip overfitting won't be achieved globally for all data point pairs.<br /><br />The interpolation technique also provides a direct way to visualize data generalization. In order to see how a neural network generalizes, we can choose two data points, calculate a series of interpolation points between them, and then plot their traces mapped into the target space by the network. 
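The strip construction described above can be formalized in a few lines. The following sketch is one plausible reading of the post's definition, not code from it: it treats the strip size as the shoelace area of the closed polygon A', P'<sub>1</sub>, ..., B', and appends the index component for one-dimensional targets. All function names are hypothetical.

```python
import numpy as np

def strip_overfitting(f, a, b, k=3):
    """Approximate the strip overfitting of model f w.r.t. inputs a and b.

    f : maps an (n, d_in) array to an (n, d_out) array (the trained model).
    k : number of interior interpolation points (3 in the post's diagram).
    Returns the area enclosed by the mapped polyline and the chord from
    f(a) to f(b), computed with the shoelace formula on the closed polygon.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    ts = np.linspace(0.0, 1.0, k + 2)              # parameters for A, P1..Pk, B
    pts = f(a + ts[:, None] * (b - a))             # images of the straight line
    if pts.shape[1] == 1:                          # 1-D target: append the index
        pts = np.column_stack([pts[:, 0], np.arange(len(pts))])
    x, y = pts[:, 0], pts[:, 1]
    # absolute shoelace area of the closed polygon A', P'1, ..., P'k, B'
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def average_strip_overfitting(f, data, n_pairs=100, k=3, rng=None):
    """Average strip overfitting over randomly chosen data point pairs."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_pairs):
        i, j = rng.choice(len(data), size=2, replace=False)
        total += strip_overfitting(f, data[i], data[j], k)
    return total / n_pairs
```

As the note argues, a linear map sends the interpolation points to a straight line, so its strip overfitting is exactly zero; any curvature in the learned map shows up as a positive area.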
As examples, the following two pictures show the traces of three pairs produced by two differently trained neural networks: both networks have the same structure and are trained with the same algorithm; the only difference is that the network that produced the left picture used the <a href="https://en.wikipedia.org/wiki/Dropout_(neural_networks)">dropout regularization</a> technique to reduce overfitting. We see clearly that the three interpolation traces on the left side are closer to straight lines. Numerical calculation also verifies that the network for the map on the left side has a smaller strip overfitting. This example indicates that the strip overfitting as defined in this note may be suitable for characterizing overfitting phenomena.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-iyVgazqrBvk/WT9QwCppXAI/AAAAAAAAw3U/Zti0Ay9Jxm4l9ZQXlDB698snbGF-2SJTACLcB/s1600/OverfittingA.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="226" data-original-width="783" height="184" src="https://4.bp.blogspot.com/-iyVgazqrBvk/WT9QwCppXAI/AAAAAAAAw3U/Zti0Ay9Jxm4l9ZQXlDB698snbGF-2SJTACLcB/s640/OverfittingA.jpg" width="640" /></a></div><br />The next two pictures show more examples of interpolation traces that visualize the effect of the dropout regularization technique. These two pictures are produced by the same two networks used before, except that here more data point pairs have been chosen and the training data points are not shown for the sake of clarity. 
We see that the dropout regularization makes the interpolation traces smoother and more homogeneous.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-Lv0z1d6rqEE/WT9Qx4Omb4I/AAAAAAAAw3Y/4ZRsc8PeTQofhrz0w-5YpQ-WI_gIwbUGACLcB/s1600/OverfittingB.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="272" data-original-width="794" height="218" src="https://1.bp.blogspot.com/-Lv0z1d6rqEE/WT9Qx4Omb4I/AAAAAAAAw3Y/4ZRsc8PeTQofhrz0w-5YpQ-WI_gIwbUGACLcB/s640/OverfittingB.jpg" width="640" /></a></div><br /><br /><b>Discussion</b><br /><b><br /></b>This note has introduced the concept of <i>strip overfitting</i> to quantify overfitting in multidimensional regression models. The method basically quantifies overfitting by the "distance" between a regression model and linear interpolation. We have demonstrated with examples that the strip overfitting may be suitable for measuring and comparing the effect of regularization techniques like dropout. The strip overfitting offers an interesting alternative tool to quantify and visualize overfitting in neural networks for multidimensional data modeling.<br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-54458902224129758732017-06-03T09:34:00.002-07:002017-06-03T09:34:37.151-07:00Deep Line Chart for Big Data<div dir="ltr" style="text-align: left;" trbidi="on">The line chart is a popular technique to visualize trends in sequential data. While it is in general a very useful tool, it has an often neglected problem for large data sets: most conventional software libraries draw curves sequentially in a certain order, so that curves drawn early are covered by curves drawn afterwards. This means that the appearance of the line chart depends on the drawing order, which is often arbitrary in practice. 
The following animated picture illustrates a case where yellow curves get covered by blue curves. This drawback becomes especially apparent for large data sets that comprise many high-density curves.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-h9xovRgFwYo/WTG5SEotZjI/AAAAAAAAwz4/4_HU3MTSqFg68ymtaxOAd508gNO-s0BMACLcB/s1600/DrawLineChart.gif" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="471" data-original-width="500" height="301" src="https://1.bp.blogspot.com/-h9xovRgFwYo/WTG5SEotZjI/AAAAAAAAwz4/4_HU3MTSqFg68ymtaxOAd508gNO-s0BMACLcB/s320/DrawLineChart.gif" width="320" /></a></div><br /><br />In this note I'll present a new line chart method that avoids the above mentioned problem. Before we describe the new drawing method, let's recall a simplified version of the conventional way to draw a line chart:<br /><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"><b> Input</b>: A set of line segments, each with two end points and a color.</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"><b> Output</b>: A bitmap P with a color P<sub>xy</sub> at each coordinate (x, y).</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"> Initialize P with the background color.</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"> for each line segment L</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;">&nbsp;&nbsp;&nbsp;&nbsp;Let (r, g, b) be the color of L</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;">&nbsp;&nbsp;&nbsp;&nbsp;for each pixel coordinate (x, y) on L</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Set P<sub>xy</sub> := (r, g, b)</span><br /><br />The above algorithm is a single-pass drawing method where each pixel color of a line segment is painted directly onto the bitmap. With this method, lines already painted on the bitmap may be painted over by successive lines.<br /><br />The new method is a two-pass algorithm: it first sums up all pixel colors in an intermediate matrix; it then converts the matrix into a bitmap. It works roughly as follows:<br /><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"><b> Input</b>: A set of line segments, each with two end points and a color.</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"><b> Output</b>: A bitmap P with a color P<sub>xy</sub> at each coordinate (x, y).</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"> Initialize a matrix M with the same dimensions as P, but where</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"> each component of M holds an integer vector M<sub>xy</sub> = (r, g, b, n).</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"> for each line segment L</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;">&nbsp;&nbsp;&nbsp;&nbsp;Let (r, g, b) be the color of L</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;">&nbsp;&nbsp;&nbsp;&nbsp;for each pixel coordinate (x, y) on L</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;">&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;Set M<sub>xy</sub> := M<sub>xy</sub> + (r, g, b, 1)</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;"> for each coordinate (x, y) in M</span><br /><span style="font-family: 'courier new', courier, monospace; font-size: x-small;">&nbsp;&nbsp;&nbsp;&nbsp;Set P<sub>xy</sub> := (r/n, g/n, b/n), where (r, g, b, n) is the vector M<sub>xy</sub></span><br /><br />Compared to the previous method, the key change is the use of an intermediate matrix: the matrix sums up the colors from all lines and counts how many times each pixel has been painted. In the second pass, the matrix is converted to a bitmap by taking the "average" color of each pixel. It should be pointed out that, in an actual implementation, the data type of the matrix components should be 32-bit integers, so that no overflow occurs when summing up the colors.<br /><br />The following picture shows a line chart created with the new algorithm for the same data used in the previous picture:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-aXM_uIokjiE/WTLdS72ZkWI/AAAAAAAAw1I/K3DUV-4kN5Q2mRg4K0udcb4TaxlkM8vRgCLcB/s1600/DeepLineChart.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="474" data-original-width="508" height="298" src="https://1.bp.blogspot.com/-aXM_uIokjiE/WTLdS72ZkWI/AAAAAAAAw1I/K3DUV-4kN5Q2mRg4K0udcb4TaxlkM8vRgCLcB/s320/DeepLineChart.jpg" width="320" /></a></div><br />We see in the above picture that the group of yellow curves is clearly visible. 
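As a rough illustration of the two-pass algorithm, here is a minimal NumPy sketch. It is not the VisuMap implementation: the dense-sampling line walk stands in for a proper Bresenham rasterizer, and the function name is hypothetical. The wide integer accumulator mirrors the overflow remark above, and untouched pixels (n = 0) keep the background color.

```python
import numpy as np

def deep_line_chart(segments, width, height, background=(255, 255, 255)):
    """Two-pass 'deep' line chart: sum colors per pixel, then average.

    segments : list of ((x0, y0), (x1, y1), (r, g, b)) in pixel coordinates.
    """
    # pass 1: accumulate (r, g, b, n) per pixel in a wide-integer matrix
    M = np.zeros((height, width, 4), dtype=np.int64)
    for (x0, y0), (x1, y1), color in segments:
        n = max(abs(x1 - x0), abs(y1 - y0)) + 1       # samples along the segment
        xs = np.round(np.linspace(x0, x1, n)).astype(int)
        ys = np.round(np.linspace(y0, y1, n)).astype(int)
        xy = np.unique(np.stack([ys, xs], axis=1), axis=0)  # each pixel once per line
        M[xy[:, 0], xy[:, 1], :3] += color
        M[xy[:, 0], xy[:, 1], 3] += 1
    # pass 2: average the accumulated colors; unpainted pixels get the background
    P = np.empty((height, width, 3), dtype=np.uint8)
    P[:] = background
    hit = M[..., 3] > 0
    P[hit] = (M[hit, :3] // M[hit, 3:4]).astype(np.uint8)
    return P
```

Because every line only adds into M, the result is independent of the drawing order, which is exactly the property that makes the method easy to parallelize.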
In general, the new line chart will reveal groups of curves, as long as the group is statistically significant with respect to the complete data set. The conventional line chart, on the other hand, only shows the curves on the "surface" of the curve bundle.<br /><br />The new line chart algorithm has another significant advantage over the conventional line chart: since the algorithm does not mandate a particular order in which to update the matrix and bitmap, it can be efficiently parallelized on modern GPU hardware and achieve very good performance. The implementation included in VisuMap version 5.0.930 is, on a system with a GTX 960 card (which has ca. 2000 GPU cores), more than two orders of magnitude faster than the conventional line chart.<br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-23806567551855335702017-04-21T10:24:00.001-07:002017-04-25T08:32:36.817-07:00Deep Data Profile with VisuMap <div dir="ltr" style="text-align: left;" trbidi="on">Data profiling is normally understood as a set of statistical methods to extract numerical features from complex systems for easy exploration. So, for instance, GDP, CPI and various kinds of indices are often used to profile the state of an economy. Appropriate profiling helps us to compare similar systems, or a system in different development phases. In this note I'll put forward a generic method to profile high dimensional data; the method combines dimensionality reduction algorithms with deep artificial neural networks.<br /><br />In recent years, many so-called <i><a href="https://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction">nonlinear dimensionality reduction</a></i> (NDR) methods have been developed to visualize high dimensional complex data. Those methods often use machine learning algorithms to produce 2D or 3D maps, which provide a kind of graphical profile of the data. 
For instance, the following picture shows a 2D map made from a data set from a flow cytometry study:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-QQXQMMjSdMw/WPeN6r7nDXI/AAAAAAAAwaY/Mt376iSvv8opkYXYWmPKHBicHbUy02yGACLcB/s1600/MapA.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="148" src="https://2.bp.blogspot.com/-QQXQMMjSdMw/WPeN6r7nDXI/AAAAAAAAwaY/Mt376iSvv8opkYXYWmPKHBicHbUy02yGACLcB/s640/MapA.jpg" width="640" /></a></div><br />The above map was made with the <a href="https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding">tSNE algorithm</a> from a dataset that contains about 6000 data points, each with 12 variables; the colors for the sub-clusters were added with the help of the <a href="https://en.wikipedia.org/wiki/Affinity_propagation">affinity propagation clustering algorithm</a>. The colored map is quite helpful for discerning interesting structure within the data set. Unlike statistics based profiling, visual map based profiling does not rely on high level features like GDP, which in general require a good understanding of the data and the underlying system.<br /><br />Nevertheless, those visual map based methods lack prediction capability in the following sense: in practice, we often have multiple data sets collected about similar systems, or about the same system in different phases. For instance, in clinical trials our data sets might be gene expression profiles of patients in different trial stages. In these cases, we are especially interested in the differences between the profiles. 
Most of these NDR methods are, however, <i>insensitive</i> to small changes, so that it is hard to recognize differences between NDR maps made from similar data sets.<br /><br />To address the above mentioned problem, we propose here the <i>deep data profiling</i> (DDP) procedure as an extension of NDR based profiling, as illustrated in the following diagram:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-ADHyrJsEI8M/WPe3-h4pJtI/AAAAAAAAwaw/pmMmsn924hQBl1v3H5ylS3dAulkOmS3QgCLcB/s1600/MapB.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="433" src="https://4.bp.blogspot.com/-ADHyrJsEI8M/WPe3-h4pJtI/AAAAAAAAwaw/pmMmsn924hQBl1v3H5ylS3dAulkOmS3QgCLcB/s640/MapB.jpg" width="640" /></a></div><div style="text-align: center;"><b>Deep Data Profiling</b> </div><br />The DDP procedure starts with a set of similar data sets. As a first step we choose a reference data set and apply NDR to it to obtain a 2D or 3D map. Many NDR methods are available for this purpose; for this note we recommend the tSNE algorithm, as it is widely available and produces relatively well-structured maps for a wide range of data. Then, we apply a clustering algorithm to the map to produce labels for the sub-clusters on the map. Those labels are then used as different colors for the sub-clusters, so that we get a colored map as illustrated in the above picture.<br />There are many clustering algorithms suitable for this purpose; for this test we used the <i>affinity propagation</i> algorithm together with some manual adjustment performed directly on the map.<br /><br />The colored map we obtained from the reference data set represents the knowledge we captured from the reference data. As a next step we use a machine learning algorithm to learn this knowledge. In particular, we employ <i>multilayer feed-forward networks</i> to learn the translation from the reference data to the colored map. 
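The two-stage procedure (build a colored reference map, then learn the translation to it) might be sketched as follows. This is a toy stand-in using scikit-learn models rather than the TensorFlow networks the plugin uses, and all function names are hypothetical.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import AffinityPropagation
from sklearn.neural_network import MLPRegressor, MLPClassifier

def fit_ddp(reference, random_state=0):
    """Learn a colored reference map, then distill it into two networks."""
    # stage 1: NDR map plus cluster labels for the reference data set
    ref_map = TSNE(n_components=2, perplexity=15,
                   random_state=random_state).fit_transform(reference)
    labels = AffinityPropagation(random_state=random_state).fit_predict(ref_map)
    # stage 2: one network learns the dimensionality reduction function,
    # the other learns the clustering (coloring) function
    mapper = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                          random_state=random_state).fit(reference, ref_map)
    colorer = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                            random_state=random_state).fit(reference, labels)
    return mapper, colorer

def profile(mapper, colorer, data):
    """Apply the two trained networks to a new, similar data set."""
    return mapper.predict(data), colorer.predict(data)
```

Because the trained networks apply the same learned map to every data set, small differences between similar data sets show up directly as shifts in their colored maps.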
Two networks are used for this purpose: one to learn the dimensionality reduction function, and the other to learn the clustering function.<br /><br />The two trained networks can then be applied to other data sets to produce colored maps as their visual profiles.<br /><br />A plugin module, called Data Modeling, has been implemented to integrate the DDP service with VisuMap version 5.0.928. The plugin module internally uses the <a href="https://www.tensorflow.org/">TensorFlow</a> engine from Google Inc. to implement the feed-forward neural networks. The plugin offers a GUI and a scripting interface to train and apply models with data directly from VisuMap. The following screen-cast demonstrates a simple use case of DDP with VisuMap:<br /><br /><iframe allowfullscreen="" frameborder="0" height="480" src="https://www.youtube.com/embed/bMG5Yy_YfcI" width="854"></iframe> <br /><br />The deep data profiling procedure presented here offers a visual profiling technique for high dimensional complex data. Compared to conventional statistics based profiling techniques, DDP is more intuitive and richer in information. As a meta-algorithm, DDP can take advantage of new algorithms for NDR, clustering and machine learning. With the rapid development of machine learning technologies, DDP might offer powerful and versatile tools to explore high dimensional complex systems.<br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-46172346919308348752017-02-06T09:15:00.000-08:002017-02-06T09:15:53.965-08:00On the shape of nucleotide motifs<div dir="ltr" style="text-align: left;" trbidi="on">Genomes of multi-cellular organisms often contain <i>short sequences</i> with many duplicates, which have been created and preserved with minor mutations during their evolutionary processes. 
Those nucleotide sequences, called here generically <i>motifs</i>, carry important information about the structure and history of those genomes. A better understanding of those motifs could help us explore the function and evolutionary path of genetic code.<br /><br />In this note I'll put forward a method to visualize motifs as 3D dotted maps, where each motif is represented by a dot in 3D space; sub-groups of motifs may take different shapes, which I'll try to interpret in terms of evolutionary variations.<br /><br />I put forward some methods to visualize DNA sequences a <a href="http://jamesxli.blogspot.ca/2014/09/on-geometric-modeling-of-sequential-data.html">while ago</a>. In those early attempts, different <i>scanning machines</i> were used to extract high dimensional vectors from DNA sequences, which were then mapped into 3D space with a <i>multidimensional scaling</i> algorithm. A main problem of those methods is that they are often not very sensitive to sequential variations, so that the resulting maps lack the capability to reveal structural information. The method presented here basically follows the same framework, except that we don't use a scanning machine, but feed the motif sequences as data points directly into the multidimensional scaling algorithm, which uses the <i>Needleman-Wunsch</i> algorithm to calculate the editing distance between the motifs.<br /><br />In the following I'll use chromosome I of Caenorhabditis Elegans as an example to detail the steps of the visualization process.<br /><br />Firstly, I downloaded the sequence of chromosome I of C. 
Elegans from the NCBI site (<a href="https://www.ncbi.nlm.nih.gov/nuccore/NC_003279.8">NC_003279</a>). I then created a BLAST database for the chromosome with the <span style="font-family: 'courier new', courier, monospace;">makeblastdb</span> program; I then used the <span style="font-family: 'courier new', courier, monospace;">blastn</span> program to query all sub-sequences of length 30 nucleotide bases. A sub-sequence is recorded as a motif if it has more than 30 duplicates on either strand of the chromosome, where fuzzy matching with one or two mutations, deletions or insertions is allowed. In this way 4578 motifs were found on chromosome I, which is about 15 million nucleotide bases long. The following picture shows the distribution of these motifs on the chromosome:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-kMnxKpvLKfs/WIpp-rvhnrI/AAAAAAAAvGA/s521v9UqfV4K4wcbGj8YzY4a41WNvpXLACLcB/s1600/ChrIMotifs.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="433" src="https://4.bp.blogspot.com/-kMnxKpvLKfs/WIpp-rvhnrI/AAAAAAAAvGA/s521v9UqfV4K4wcbGj8YzY4a41WNvpXLACLcB/s640/ChrIMotifs.png" width="640" /></a></div><br />In the above picture the nucleotide types A, C, G, T are shown in yellow, magenta, red and cyan respectively; the locations of motifs are marked by blue bars. We notice that many motifs overlap each other. 
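Overlapping motif locations like these are typically collapsed into disjoint regions before any pairwise analysis. A minimal sketch of that interval-merging step (function name hypothetical, locations given as half-open start/end positions on the chromosome):

```python
def merge_motifs(intervals):
    """Merge overlapping motif locations [(start, end), ...] on a chromosome
    into a sorted list of disjoint regions."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:          # overlaps the previous region
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

For example, two length-30 motifs starting at positions 10 and 30 merge into a single region covering positions 10 to 60.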
Those overlapping motifs are merged together to form 4578 disjoint sequences of different lengths, as shown in the following table:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-B90JgXWapzM/WIqQT_YAHlI/AAAAAAAAvGc/nh9h9DUfPkEFGcnP77O4F4Son-UumeO0wCLcB/s1600/CElegansMotifDataset.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="https://4.bp.blogspot.com/-B90JgXWapzM/WIqQT_YAHlI/AAAAAAAAvGc/nh9h9DUfPkEFGcnP77O4F4Son-UumeO0wCLcB/s400/CElegansMotifDataset.png" width="400" /></a></div><br /><br />I then applied the <a href="https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding">t-SNE</a> algorithm to these 4578 motifs to create a 3D map. I used the <a href="https://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch_algorithm">Needleman-Wunsch</a> algorithm to calculate the minimal editing distance between the motif sequences, with a simple penalty schema of (2, 0, 1) for gap, match and mismatch respectively. The following picture shows the resulting 3D dotted map:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-BBQxCJXaN6Y/WIqRxN0R37I/AAAAAAAAvGo/5zX3jw7jNTgmqblnWN0IDuRWoAtgmueLACLcB/s1600/tSNEMotifs.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="326" src="https://3.bp.blogspot.com/-BBQxCJXaN6Y/WIqRxN0R37I/AAAAAAAAvGo/5zX3jw7jNTgmqblnWN0IDuRWoAtgmueLACLcB/s400/tSNEMotifs.png" width="400" /></a></div><br /><br />Then a clustering algorithm (a <a href="http://jamesxli.blogspot.ca/2012/03/on-mean-shift-and-k-means-clustering.html">combination of the k-mean and mean-shifting algorithms</a>) was applied to the 3D map, which resulted in the coloring of the major clusters with different colors. 
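The pairwise distance used here is a Needleman-Wunsch score with the penalty schema (2, 0, 1) for gap, match and mismatch; lower scores mean more similar sequences. A minimal dynamic-programming sketch (function name hypothetical):

```python
def nw_distance(s, t, gap=2, match=0, mismatch=1):
    """Minimal editing distance between sequences s and t under the
    Needleman-Wunsch penalty schema (gap=2, match=0, mismatch=1)."""
    m, n = len(s), len(t)
    prev = [j * gap for j in range(n + 1)]           # row for the empty prefix of s
    for i in range(1, m + 1):
        cur = [i * gap] + [0] * n
        for j in range(1, n + 1):
            sub = prev[j - 1] + (match if s[i - 1] == t[j - 1] else mismatch)
            cur[j] = min(sub,                        # substitute or match
                         prev[j] + gap,              # gap in t
                         cur[j - 1] + gap)           # gap in s
        prev = cur
    return prev[n]
```

With this schema, identical motifs have distance 0, a single point mutation costs 1, and a single insertion or deletion costs 2, so the matrix of pairwise scores can be fed directly into the multidimensional scaling step.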
With some minor manual color adjustment, we obtained a colored 3D dotted map as shown in the following animation:<br /><br /><center><iframe allowfullscreen="allowfullscreen" frameborder="0" height="480" src="https://www.youtube.com/embed/uJC9v2fHoyk?hd=1&vq=hd720" width="545"></iframe></center><br /><div class="separator" style="clear: both; text-align: center;"><br /></div>From the above map we can roughly identify clusters of the following shapes:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-0y0WzpPNg1U/WJPVJc1GPFI/AAAAAAAAvdQ/0mECa0FjpmEdtzSmERZyWF-cWrxZ8VeDgCLcB/s1600/ShapeList.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="168" src="https://3.bp.blogspot.com/-0y0WzpPNg1U/WJPVJc1GPFI/AAAAAAAAvdQ/0mECa0FjpmEdtzSmERZyWF-cWrxZ8VeDgCLcB/s640/ShapeList.png" width="640" /></a></div><br />Assuming that all motifs are created by duplication and random single nucleotide mutations, we could interpret the above shapes as follows: motifs forming small blobs originated from a common ancestor and have undergone only minor mutations. Those motifs could be recent duplicates, so that they haven't had time to accumulate a large number of mutations; or, if they have existed for a long time, their variants with large numbers of mutations did not survive the evolutionary selection process.<br /><br />Motifs forming curves indicate that those motifs, originating from a common ancestor, have mutated gradually away from the initial copy. With the help of the VisuMap software we can locate those motifs in the chromosome and trace their evolutionary process along the curve (but we cannot say which end of the curve is the starting end). 
The following pictures show a case with a curved motif collection:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-ZhLERs6o4mo/WJTacCciSQI/AAAAAAAAvds/-33abVbxbsA1kOIUTQ6rRtIhKl8PtHXwQCLcB/s1600/CurveCluster.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="https://3.bp.blogspot.com/-ZhLERs6o4mo/WJTacCciSQI/AAAAAAAAvds/-33abVbxbsA1kOIUTQ6rRtIhKl8PtHXwQCLcB/s400/CurveCluster.png" width="306" /></a></div>In the above pictures, the top picture shows a section of the 3D motif map; the second picture shows the corresponding locations of those motifs on the chromosome. With VisuMap you can interactively move the selection of motifs in the window displaying the first picture, so that the second picture marks the locations of the selected motifs. The last picture shows the nucleotide sequences of these motifs, one motif per row, in order along the curve from left to right. We can see that these motifs gradually shifted their phases.<br /><br />We notice that some curved motif groups have small blobs attached to them. These blobs might indicate short bursts of different mutations.<br /><br />Interestingly, we see two motif clusters in the 3D map which form roughly two parallel rings. Those motifs are distributed across the whole chromosome; they may have some internal relationship, but I'm not sure at the moment how to interpret them in terms of the evolutionary process.<br /><br />In addition to C. Elegans, I also applied the visualization algorithm to other organisms. 
The following short video clip shows several motif maps of the rice and human genomes:<br /><br /><center><iframe allowfullscreen="allowfullscreen" frameborder="0" height="480" src="https://www.youtube.com/embed/YNM8S1Um1Rk?hd=1&vq=hd480" width="545"></iframe></center><br />It should be noted that for the human chromosomes, calculating the motif map with the current implementation for all motifs would be computationally prohibitive. So, I used a simplified version of blastn, and selected three small sub-sets, each with about 20 000 motifs, for the above motif maps. The three sub-sets have increasing duplicate frequencies: for instance, the motifs of the third set all have over 10 000 duplicates, whereas the first set has motifs with 300 to 500 duplicates.<br /><br />From the above maps we can see that the motifs of the rice genome are much less structured: almost all of them are just small blobs. Those motifs are, unlike in the case of C. Elegans, mostly located in coding areas. On the other hand, the motif map of the human genome shows diverse structures depending on the duplicate frequencies; and the third motif map even shows some kind of 2-dimensional structure.<br /><br /><b>Discussion</b><br /><b><br /></b>The visualization method presented here provides a simple way to extract geometrical models from purely discrete genetic sequence information. These geometrical shapes provide intuitive information about the evolutionary path of genetic code, and they might be helpful for profiling and exploring whole genome structures.<br /><br />The method presented here has three key components: the way to find motifs, the way to calculate the editing distance, and the multidimensional scaling (MDS) algorithm. Each of these three components has a significant impact on the results, and they also pose significant limitations. For instance, the current implementations of these components are quite slow, so that it is difficult to visualize large motif sets. 
I have experimented with other editing distances, like the Smith-Waterman and Levenshtein methods; they seem to perform similarly, with Needleman-Wunsch producing slightly better visualizations. For future investigation we might work on finding more efficient MDS algorithms or more efficient distance metrics, to help us attack much larger and different groups of genetic code.<br /><br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-44347464678224752552016-06-21T13:33:00.000-07:002016-06-21T13:33:46.162-07:00VisuMap version 4.7 with Whole Genome Browser Released.<div dir="ltr" style="text-align: left;" trbidi="on">I have just released VisuMap version 4.7 on our <a href="http://www.visumap.com/index.aspx?p=Products">web site</a>. Apart from many GPU-related optimizations of 3D data visualization, this release includes a heat map style data view to visualize long discrete sequences. A whole genome with billions of nucleotides can be aggregated and displayed in a single 2D map. The following screen snapshot shows, for example, the exome distribution of the first human chromosome:<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-NeGxWFJrm3c/V2mhTPJVDyI/AAAAAAAArvk/s_V1H85293Ma6XzePTfar0rM-U_DMJBHQCLcB/s1600/Chromsome1Human.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="176" src="https://4.bp.blogspot.com/-NeGxWFJrm3c/V2mhTPJVDyI/AAAAAAAArvk/s_V1H85293Ma6XzePTfar0rM-U_DMJBHQCLcB/s640/Chromsome1Human.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">We notice that most genome browsers currently available are depth-oriented: their exploration is normally focused on relatively small sections of the genome. 
The whole genome browser in VisuMap offers a new way to explore genome-wide patterns and profiles. </div></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-72118496684975408302015-12-09T11:09:00.002-08:002015-12-10T13:21:33.539-08:00Visualizing Gene Sequences with complementary GMS<div dir="ltr" style="text-align: left;" trbidi="on">We now know that there are about 20 000 protein coding genes in the whole human genome. How do they relate to each other? Is there any geometrical structure among them? What shape do they form? Aiming at these questions, I'll put forward here a method to generate scatter plots from gene sequences that show the shape of a gene collection.<br /><br />In recent blogs, I have described the <a href="http://jamesxli.blogspot.ca/2015/09/curvature-as-secondary-profiling-of-gms.html">GMS framework</a> to profile gene sequences, which translates a discrete sequence into high dimensional curvature vectors. We could use this method to convert a collection of gene sequences to a set of high dimensional vectors and then apply an MDS algorithm to embed them in a low dimensional space. However, if we restrict our focus to protein coding sequences (CDS), the length of a CDS varies from a few hundred to tens of thousands of nucleotides. That means the profiling curvature vectors for CDS sequences are of very different dimensions. This in general causes difficulties for most MDS algorithms, as they normally accept only vectors of a common dimension.<br /><br />In order to obtain profiling vectors of the same length from sequences of variable length, we extend our GMS framework as follows: we first select a short <i>reference sequence</i>; then for each CDS sequence, we mix the reference sequence with the CDS sequence and pass them together to the GMS framework. 
After the GMS framework has embedded the two sequences into the low dimensional space, we then calculate the curvature profile of the curve of the reference sequence, and use it as a profiling vector for the CDS sequence. The modified framework is depicted as follows:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/--uCNtI46Rn4/VlkBN2dlumI/AAAAAAAAns0/CTVt5D0X7bw/s1600/cGMSProfiling.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"></a><a href="http://4.bp.blogspot.com/-82KZThee1bE/Vmd9GK9-Q2I/AAAAAAAAnuc/gdjU01NKLF0/s1600/cGMSProfilingB.png" imageanchor="1"><img border="0" height="472" src="http://4.bp.blogspot.com/-82KZThee1bE/Vmd9GK9-Q2I/AAAAAAAAnuc/gdjU01NKLF0/s640/cGMSProfilingB.png" width="640" /></a></div>We notice that the reference sequence and the CDS sequence are embedded together in the same space, so that their curves occupy <i>complementary areas </i>of the low dimensional space. That means the curvature profile of the reference sequence carries information about the <i>complementary </i>space of the CDS sequence. More generally, we can expect that different CDS sequences will produce different <i>complementary profiles</i>, as they have different impacts on the reference sequence.<br /><br />For this note I have downloaded the CDS sequences of the first chromosome of the human genome from the <a href="http://ensembl.org/">Ensembl.org</a> web site. There are about 2000 protein coding genes whose lengths vary from a few hundred to more than 26 thousand nucleotides. As the reference sequence, I picked a short CDS sequence from the collection (the gene ENSG00000187170) that has only 300 nucleotides. The following picture shows the complementary curvature profiles of the CDS sequences: each row of the heatmap shows the curvature of the reference sequence mixed with one CDS sequence; bright colors indicate high curvatures. 
The line chart under the heatmap shows the complementary profiles of 4 selected CDS sequences marked in the heatmap.<br /><a href="http://4.bp.blogspot.com/-Ae9MVfBpbTA/Vmd5sx7jE-I/AAAAAAAAnuQ/OSvHkGhyXfk/s1600/cGMSProfilingA.png" imageanchor="1"><img border="0" src="http://4.bp.blogspot.com/-Ae9MVfBpbTA/Vmd5sx7jE-I/AAAAAAAAnuQ/OSvHkGhyXfk/s1600/cGMSProfilingA.png" /></a><br /><br />We see that these profiles by and large have high curvatures in about half a dozen regions, but the height and exact location of those high curvature peaks depend on the corresponding CDS sequence. It is important to notice that I have demonstrated in the previous blog that the curvature profile is a characteristic that is invariant to the random nature of the mapping algorithm. Thus, those small variations are not due to the random nature of the mapping algorithm used; they ought to be attributed to different interactions (or "affinities") between the CDS sequence and the reference sequence.<br /><br />Having obtained one complementary curvature vector for each CDS sequence, we can then apply an MDS algorithm to these high dimensional vectors to embed them in a low dimensional space. For this purpose, I have used the <a href="https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding">tSNE </a>algorithm. 
The following picture shows a 3D map for these CDS sequences:<br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://youtu.be/h-UEB6ZgJhc"><img border="0" src="http://4.bp.blogspot.com/-Akx05NPZaFw/VmhlmUJwQ4I/AAAAAAAAnvU/i7VF4ipkbHA/s1600/cGMSProfilingC.png" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="https://youtu.be/h-UEB6ZgJhc">Curvature Profile Map of 2045 CDS sequences created with tSNE algorithm</a></td><td class="tr-caption" style="text-align: center;"><a href="https://youtu.be/h-UEB6ZgJhc"><br /></a></td><td class="tr-caption" style="text-align: center;"><a href="https://youtu.be/h-UEB6ZgJhc"><br /></a></td><td class="tr-caption" style="text-align: center;"><a href="https://youtu.be/h-UEB6ZgJhc"><br /></a></td></tr></tbody></table>The above map is linked to a short video; you can click on the map to see the 3D map in animation. Also, for the purpose of easy exploration, I have colored the different regions of the map with the k-means clustering algorithm.<br /><br />It should be noticed that the selection of the reference sequence has an essential impact on the curvature profile and the final map. To see this, I picked another sequence, the CDS sequence of the gene with id ENSG00000158481 (with about 1000 nucleotides), as the reference sequence and repeated the whole procedure. 
The following picture shows the corresponding complementary curvature profile and the tSNE map of the 2045 CDS sequences:<br /><br /><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"><tbody><tr><td style="text-align: center;"><a href="https://youtu.be/AdDONb7Kmvw" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="http://4.bp.blogspot.com/-IC6jzsIQmr0/Vmhpl92ys7I/AAAAAAAAnvg/oHgUilmxWlE/s640/cGMSProfilingD.png" width="606" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;"><a href="https://youtu.be/AdDONb7Kmvw">Curvature Profile and tSNE map of the 2045 CDS sequences with ENSG00000158481 as reference sequence</a></td></tr></tbody></table>We see that both the curvature profile and the final map are very different from those created previously with a different reference sequence.<br /><br />It is not yet clear how the reference sequence impacts the complementary curvatures. Intuitively, the reference sequence plays a role like the projection plane that we often use to project high dimensional data to a 2-dimensional space for visualization purposes. Thus, selecting an appropriate reference sequence might help us to see interesting geometrical structures among a large collection of gene sequences.<br /><br /><b>About the sample data</b><br /><br />The sample genome data was downloaded from the <a href="http://ensembl.org/">Ensembl.org</a> web site in FASTA format. The FASTA data file is imported into VisuMap with a special plugin. The zipped file <a href="http://www.visumap.com/Resources/Datasets/CGmsProfiling.zip">here </a>contains the VisuMap sample file, the plugin for importing FASTA files, and a script "CurvatureProfiling.js" to run the simulations. Before running the simulations, the plugin module GeneticAnalysis.dll needs to be installed into VisuMap. 
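<br /><br />For readers who want to inspect the downloaded data outside of VisuMap, the FASTA format itself is simple enough to parse in a few lines. The following is a minimal, generic sketch of a FASTA reader (independent of the GeneticAnalysis.dll plugin; the headers in the test are made up for illustration):

```python
def read_fasta(lines):
    """Minimal FASTA parser: yield (header, sequence) pairs from an iterable of lines."""
    header, chunks = None, []
    for line in lines:
        line = line.strip()
        if line.startswith(">"):
            if header is not None:       # flush the previous record
                yield header, "".join(chunks)
            header, chunks = line[1:], []  # drop the leading '>'
        elif line:
            chunks.append(line)
    if header is not None:               # flush the last record
        yield header, "".join(chunks)
```
<br /><br />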
<br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-10647547835726789742015-09-19T11:14:00.000-07:002015-10-06T07:51:57.560-07:00Curvature as secondary profiling of GMS maps.<div dir="ltr" style="text-align: left;" trbidi="on">As a profiling method, <a href="http://jamesxli.blogspot.ca/2015/05/gms-for-dna-profiling.html">GMS translates </a>discrete sequences to space curves which help us to explore sequence features (traits) with the help of geometrical shapes. Yet, space curves still pose a challenge for direct pattern recognition. This is especially so when we try to explore patterns among large collections of sequences.<br /><br />In this note I'll put forward a method that uses curvature to capture characteristics of GMS space curves. Some experiments with sample data will show that curvature profiles are suitable for pattern analysis of large collections of GMS space curves.<br /><br /><b>Curvature of space curves</b><br /><b></b><br />Curvature is a concept introduced in mathematics to measure how far a segment of a geometrical shape deviates from being <i>flat. 
</i>Intuitively, for space curves, the curvature of a curve at a point measures how strongly the curve is bent at that point.<br /><br />More formally, let P<sub>1</sub>, P<sub>2</sub>, ..., be a series of points in the 3 dimensional space representing a space curve; then, for the purpose of this note, we use the following simple formula to estimate the curvature, <span style="font-size: large;"><i>κ</i></span><sub>k</sub>, at point P<sub>k</sub>:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-nP0OB2qK5BU/Vd-L2Lkn_pI/AAAAAAAAm7g/qjSL7k89qoc/s1600/CurvatureDef2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-nP0OB2qK5BU/Vd-L2Lkn_pI/AAAAAAAAm7g/qjSL7k89qoc/s1600/CurvatureDef2.png" /></a></div>As shown in the above diagram, <i><span style="font-family: "Courier New",Courier,monospace;">α</span></i> is the angle between the two segments P<sub>k</sub>P<sub>k+1</sub> and P<sub>k+1</sub>P<sub>k+2</sub>; <i>s</i> is the length of the segment P<sub>k</sub>P<sub>k+2</sub>. If the P<sub>k</sub> are the 3-dimensional coordinate vectors of the points, then the above formula becomes:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-2CSUiAGgso0/Vd-PKdZgUoI/AAAAAAAAm7o/OWIDTNOj5a0/s1600/CurvatureDef3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-2CSUiAGgso0/Vd-PKdZgUoI/AAAAAAAAm7o/OWIDTNOj5a0/s1600/CurvatureDef3.png" /> </a></div><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: left;">Thus, the curvature profile of a space curve is a series of real values, <span style="font-size: large;"><i>κ</i></span><sub>k</sub>, that measure how strongly the curve is bent at each point. 
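<br /><br />The estimate can be sketched in Python as follows. This assumes a reading of the figure's formula as κ<sub>k</sub> = α/s (the exact normalization shown in the image may differ by a constant factor; up to such a factor, the estimate approaches the standard curvature for finely sampled curves):

```python
import math

def curvature_profile(points):
    """Discrete curvature kappa_k = alpha / s, where alpha is the turning angle
    between segments P_k P_{k+1} and P_{k+1} P_{k+2}, and s = |P_k P_{k+2}|."""
    def sub(u, v): return tuple(a - b for a, b in zip(u, v))
    def norm(u): return math.sqrt(sum(a * a for a in u))
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    kappas = []
    for p0, p1, p2 in zip(points, points[1:], points[2:]):
        u, v = sub(p1, p0), sub(p2, p1)
        # clamp against floating-point drift before acos
        cos_a = max(-1.0, min(1.0, dot(u, v) / (norm(u) * norm(v))))
        kappas.append(math.acos(cos_a) / norm(sub(p2, p0)))
    return kappas
```

Since the formula depends only on inter-point distances and angles, the resulting profile is unchanged when the whole curve is rotated or shifted, which is the invariance property relied on throughout this note.<br /><br />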
In practice, when we calculate the curvature profile of a GMS curve, we don't need to calculate the curvature at each point, but just at points sparsely and evenly sampled from the GMS curves. For the sake of simplicity, we simply select one point for each input sequence node. That means, for a sequence with <i>n</i> nodes, we calculate an <i>n</i>-dimensional curvature profile. The following diagram shows the whole work-flow to calculate the curvature profile from a discrete sequence:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-lOz9BsnT3s8/VeSbBfxykCI/AAAAAAAAm8I/huzVDyNZUrQ/s1600/GmsCurvatureProfile.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="510" src="http://1.bp.blogspot.com/-lOz9BsnT3s8/VeSbBfxykCI/AAAAAAAAm8I/huzVDyNZUrQ/s640/GmsCurvatureProfile.png" width="640" /></a></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The above work-flow is mainly an extension of the one in a <a href="http://jamesxli.blogspot.ca/2015/05/gms-for-dna-profiling.html">previous note</a>, where a more detailed explanation is provided. The only modification to the work-flow is an extra step that calculates the curvature profile from the GMS curve of the first clone. 
As will be discussed later in this note, it turns out in the simulation experiments that the clones normally have about the same curvature profile.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;"><b>Characteristics of curvature profile</b></div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The curvature profile has several properties which make it suitable for characterizing GMS space curves. First, just from its definition we know that the curvature profile is rotation and shift invariant, so that rotation and shift information is automatically removed from the curvature profile. Because of this, running the GMS algorithm multiple times on a sequence will result in the same curvature profile, even when the corresponding space curves are rotated and shifted differently due to the random nature of the GMS algorithm.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">Going one step further, we can change the number of clones in the GMS algorithm. The resulting space curves for the clones normally vary systematically in shape and size; yet their corresponding curvature profiles are more or less similar to each other. 
The following diagram shows the curvature profiles of 5 clones of a sequence with 5-fold cloning:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-0KNYIjWEMkI/VenbonaunKI/AAAAAAAAm9U/aOySjYJGB0M/s1600/Curvature5Fold.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-0KNYIjWEMkI/VenbonaunKI/AAAAAAAAm9U/aOySjYJGB0M/s1600/Curvature5Fold.jpg" /></a></div><div class="separator" style="clear: both; text-align: left;">We notice in the above picture that the curvature profiles of the middle, or inner, clones have noticeably larger peaks than those of the outer clones; and there is symmetry along the middle (the third) profile. These curvature profiles show that all 5 space curves bend more or less at common positions, and the inner curves bend slightly more than the outer curves. On the right side of the above picture are the 5 curvature profiles displayed as a heatmap. As will be seen below, a heatmap offers a better way to visualize large collections of curvature profiles.</div><div class="separator" style="clear: both; text-align: left;"><br /></div>Another interesting property of curvature profiles is that they are in general independent of the sampling frequency of the GMS algorithm. Higher frequency sampling normally results in smoother space curves, but their shapes remain mostly unchanged. The following picture shows the curvature profiles of another sequence with sampling frequencies of 6, 8, 10, 12 and 14 scans-per-node. 
The 5 curvature profiles are almost identical.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-qCA-JWwSzSI/Veniy9PJWXI/AAAAAAAAm9k/mabsmDdQYaM/s1600/CurvatureSampling.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-qCA-JWwSzSI/Veniy9PJWXI/AAAAAAAAm9k/mabsmDdQYaM/s1600/CurvatureSampling.jpg" /></a></div><br /><br /><b>Profiling Transmutation</b><br /><br />In addition to the invariant characteristics, the curvature profiles of similar sequences are in general similar. This property enables us to study systematic variation among large collections of sequences by visualizing their corresponding curvature profiles. We demonstrate this method here with an example that simulates a transmutation process. As shown in the following diagram, transmutation is the process in which one sequence gradually mutates into another sequence.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-fgl61hBCvgc/VfcIrDYsGmI/AAAAAAAAnPA/C2k-KhwilRg/s1600/Transmutation.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-fgl61hBCvgc/VfcIrDYsGmI/AAAAAAAAnPA/C2k-KhwilRg/s1600/Transmutation.png" /></a></div>For the sake of simplicity we assume that both sequences, A and B, have the same length N. A simple transmutation process is one in which an increasingly longer sub-sequence of A replaces a corresponding sub-section of B. 
More precisely, the simple transmutation is realized by the sequence T<sub><i>k</i></sub>(A, B) := A[1, <i>k</i>]*B[<i>k</i>+1, N]; where <i>k=</i>1, 2, ..., N; A[1,<i>k</i>] is the sequence of the first <i>k</i> nodes of A; B[<i>k</i>+1, N] is the sequence of the last N-<i>k</i> nodes of B; and * is the concatenation operation.<br /><br />With the help of the GMS algorithm we can calculate the space curves of T<sub><i>k</i></sub>, <i>k</i>=1,2,..., N, and then calculate their curvature profiles as N-dimensional vectors. For this example we have used two sequences of 1377 nodes (one of them is the CCDS of CD4). The following picture shows their 1377 curvature profiles displayed as a single heat map (the three curve charts on the right side are profiles of the transmutation sequence at 3 particular times):<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-MAfqvwh81b0/VfcNMzUWn7I/AAAAAAAAnPM/EqlpXXV_p7w/s1600/Transmutation2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="319" src="http://1.bp.blogspot.com/-MAfqvwh81b0/VfcNMzUWn7I/AAAAAAAAnPM/EqlpXXV_p7w/s640/Transmutation2.jpg" width="640" /></a></div>In the above heat map we can notice a <i>variation region</i> that runs from the top left corner to the bottom right corner; this feature indicates that during the transmutation process, only the part of the space curve that is close to the mutated node undergoes significant change. In other words, features of the space curve can be traced back to individual nodes in the transmutation sequence.<br /><br />As the curvature profiles are vectors of the same dimension,<b> </b>we can use an MDS (multidimensional scaling) algorithm to map them to a low dimensional space, so that we can see their shape. 
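<br /><br />The construction of the transmutation sequences T<sub>k</sub> described above can be sketched directly (a toy illustration with letter sequences; the actual experiment used 1377-node sequences):

```python
def transmutation(a, b, k):
    """T_k(A, B) := A[1,k] * B[k+1,N] -- first k nodes of A, last N-k nodes of B."""
    assert len(a) == len(b), "the simple transmutation assumes equal-length sequences"
    return a[:k] + b[k:]

def transmutation_family(a, b):
    """All intermediate sequences T_1, ..., T_N of the transmutation from B to A."""
    return [transmutation(a, b, k) for k in range(1, len(a) + 1)]
```

Running the GMS curvature profiling on each member of such a family yields the N profiles stacked in the heat map above.<br /><br />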
The following picture is a 2-dimensional map generated with the<a href="https://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding"> t-SNE</a> algorithm for the curvature profiles in the above example:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-7G4kO-fnsuI/VfcVIkAKjuI/AAAAAAAAnPc/nny_emigrNs/s1600/Transmutation3.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="335" src="http://4.bp.blogspot.com/-7G4kO-fnsuI/VfcVIkAKjuI/AAAAAAAAnPc/nny_emigrNs/s400/Transmutation3.png" width="400" /></a></div><br />In the above map, each dot represents the curvature profile of a transmutation sequence. As a property of MDS maps, closely located dots normally represent transmutations with smaller effects on curvatures. Thus, this map can help us to locate sequence regions which cause large changes in the space curve.<br /><br /><b>Discussion </b> <br /><br />The curvature profile proposed in this note is a secondary characteristic of discrete sequences; it can help us to study patterns and structures among large collections of discrete sequences. Whereas the GMS algorithm provides a means to geometricalize individual sequences, the curvature profile offers a tool to geometricalize a set of sequences as a whole. The various invariance properties and visualization experiments discussed in this note seem to validate the curvature profile as a suitable characteristic for discrete sequences.<br /><br />During the search for such a characteristic I have experimented with several other quantities, like quantities derived from the rotation speed, torsion and velocities. The curvature profile as proposed above seems to have the best properties among all the tested quantities. It should also be pointed out that the curve curvature proposed in this note differs slightly from the standard definition. 
The reason for this discrepancy is that the standard mathematical definition is numerically unstable when the sampling points are located in tiny regions with noise. For evenly distributed curve sampling points, the proposed curvature formula approximates the standard definition pretty well.<br /><br />In general, the curvature profile together with a suitable MDS algorithm could provide an interesting tool to explore similarities and patterns among sequential structures and shapes encountered in microbiology.<br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-87675271343520507142015-05-25T16:56:00.000-07:002015-05-25T16:56:27.453-07:00GMS for DNA profiling<div dir="ltr" style="text-align: left;" trbidi="on">In this note I am going to describe a GMS based algorithm to convert DNA sequences to geometrical shapes with visually identifiable features. I'll apply this algorithm to real genetic sequences to demonstrate its profiling capability.<br /><br />The main steps of the <i>DNA profiling algorithm</i> are illustrated as follows:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-PN3Xw4TLeMk/VWNqMDAwbUI/AAAAAAAAk2o/T2mbGoEs97Y/s1600/GmsProfiling.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="518" src="http://2.bp.blogspot.com/-PN3Xw4TLeMk/VWNqMDAwbUI/AAAAAAAAk2o/T2mbGoEs97Y/s640/GmsProfiling.png" width="640" /></a></div><br />As shown in the above diagram, a single strand of a duplex nucleotide sequence is taken as the input for the algorithm. The first step of the algorithm is to make three identical copies of the sequence, which will then be scanned in parallel by three identical <i>GMS scanning machines</i>, producing a set of high dimensional vectors. 
As described in a <a href="http://jamesxli.blogspot.ca/2015/05/gms-with-multiple-scanning-and.html">previous note</a>, the scanning machine works like the <a href="http://en.wikipedia.org/wiki/Ribosome">ribosomal machinery</a>: only, instead of proteins, it produces high dimensional vectors. As indicated in the diagram, a scanning machine in our algorithm is configured by three parameters: the scanning size K; the moving step size <i>r</i>; and the affinity decay speed λ.<br /><br />Then, as the third step, the <a href="http://jamesxli.blogspot.ca/2014/09/on-affinity-embedding.html"><i>affinity embedding algorithm</i></a> will be applied to the high dimensional vectors to produce a 3D dotted plot. The resulting map will usually contain three clusters corresponding to the three duplicated sequences; and the middle cluster is usually pressed into a quasi-2-dimensional disk. So, as the last step, the middle slice of the 3D map will be extracted, rotated and displayed as a 2D map.<br /><br />In general, to qualify as a DNA profiling method, a method should ideally satisfy the following requirements:<br /><ol style="text-align: left;"><li>The same or similar sequences should result in similar maps.</li><li>Significant changes in a sequence should lead to noticeable changes in the resulting maps. </li><li>The resulting maps should have structures that can be identified by visual examination.</li><li>It should be possible to associate phenotype traits with geometrical patterns on the resulting maps.</li></ol>As a first example, I applied the above algorithm to the <a href="http://www.ncbi.nlm.nih.gov/nuccore/KM233097.1">VP24 gene of the Zaire ebola virus</a>, which consists of 1633 base pairs. 
The following pictures show 2 maps created by running the algorithm twice with different random initializations:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-9Z5-GfbtPXA/VWN8e8PqlWI/AAAAAAAAk24/3Nb1_ImjhRk/s1600/EbolaVP24.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="288" src="http://3.bp.blogspot.com/-9Z5-GfbtPXA/VWN8e8PqlWI/AAAAAAAAk24/3Nb1_ImjhRk/s640/EbolaVP24.jpg" width="640" /></a></div><br />We can see that the above two pictures are very similar in terms of the topological features of the curves. The following picture shows two maps of the <a href="http://www.ncbi.nlm.nih.gov/CCDS/CcdsBrowse.cgi?REQUEST=GV&DATA=12464&BUILDS=ALLBUILDS">BRCA1 gene</a>, which contains 4875 base pairs. Again, these two maps are topologically quite similar up to fine details.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-xHN-cRkXImg/VWODv8KRZyI/AAAAAAAAk3I/bAENlOISNSo/s1600/Brac1Gene.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="278" src="http://2.bp.blogspot.com/-xHN-cRkXImg/VWODv8KRZyI/AAAAAAAAk3I/bAENlOISNSo/s640/Brac1Gene.jpg" width="640" /></a></div><br />As the next example, we consider how the GMS map changes when we delete, duplicate, or invert a segment of the nucleotide sequence. For this example, exons of the <a href="http://www.ncbi.nlm.nih.gov/CCDS/CcdsBrowse.cgi?REQUEST=CCDS&DATA=CCDS8562">gene CD4</a> have been chosen as input. This sequence has 1377 base pairs. I randomly selected a segment of 70 base pairs as a reference segment for deletion, duplication and inversion. 
The following pictures show the GMS maps of this sequence and of the sequence under deletion, duplication and inversion:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-RxL0X0-mlrc/VWOTI0H2AjI/AAAAAAAAk3Y/B7v-6Kr1GGU/s1600/CD4Mutations.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="523" src="http://4.bp.blogspot.com/-RxL0X0-mlrc/VWOTI0H2AjI/AAAAAAAAk3Y/B7v-6Kr1GGU/s640/CD4Mutations.jpg" width="640" /></a></div>In the above picture, the highlighted region corresponds to the reference segment under alteration. We can clearly see how these three types of alterations manifest themselves in their GMS maps.<br /><br />The above examples seem to indicate that our algorithm satisfies, more or less, the first 3 requirements listed above; whereas the last requirement remains open for future study. Since a geometrical model can capture a much larger amount of information than conventional statistics/correlations, one might hope that some interesting phenotype traits may manifest themselves in those models in a yet-to-be-found way.<br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-89471391583105592442015-05-12T14:59:00.000-07:002015-05-12T15:03:49.789-07:00GMS with multiple scanning and aggregated affinity<div dir="ltr" style="text-align: left;" trbidi="on">As the title says, this note is going to put forward two variations of the GMS model. Both variations aim to create better visualizations for discrete sequences.<br /><br />For the first variation, we have seen in a previous note that the <a href="http://jamesxli.blogspot.ca/2015/02/on-loopy-gms-geometrical-modeling-of.html">loopy GMS</a> can produce simpler geometrical shapes when the scanning machine runs multiple rounds over a loop sequence. 
The reason for the simplification is likely the <i>competition for space</i> between the duplicated scanning vectors. This kind of competition can easily be extended to the GMS model for serial (non-loopy) sequences by cloning the sequence and the scanning machine, as illustrated in the following diagram:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-XJok6r7JujI/VVJBHpK49vI/AAAAAAAAkzw/k_2p5EdSKk8/s1600/GMSWithCloning.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-XJok6r7JujI/VVJBHpK49vI/AAAAAAAAkzw/k_2p5EdSKk8/s1600/GMSWithCloning.jpg" /></a></div><br /><br />So, with multiple scanning, GMS first clones the sequence into multiple identical copies, then scans each sequence as before to produce scanning vectors with time-stamps. In addition to the time-stamp, an index component <i>p </i>that identifies the different clone sequences is added to the scanning vectors. This index <i>p</i> will be used like the time-stamp to reduce the affinities between scanning vectors from different clone sequences. 
More precisely, the decay factor for the affinity between two scanning vectors produced by the p-th and p'-th clones at times t and t' will be changed (see the initial specification of the <a href="http://jamesxli.blogspot.ca/2014/11/geometric-modeling-of-sequential-data.html#AffinityFunction">affinity function</a>) from<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-piBX7ZN0ke0/VVJDh928spI/AAAAAAAAkz8/zYHxdSW1dY8/s1600/DecayFactor1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-piBX7ZN0ke0/VVJDh928spI/AAAAAAAAkz8/zYHxdSW1dY8/s1600/DecayFactor1.jpg" /></a></div>to <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-LU4omWR7_14/VVJDrzwcD4I/AAAAAAAAk0E/MenWB52q5bc/s1600/DecayFactor2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-LU4omWR7_14/VVJDrzwcD4I/AAAAAAAAk0E/MenWB52q5bc/s1600/DecayFactor2.jpg" /></a></div>For the second variation, we notice that the scanning vectors proposed in the initial <a href="http://jamesxli.blogspot.ca/2014/11/geometric-modeling-of-sequential-data.html">note</a> are colored vectors. That means that, when calculating the affinity between two scanning vectors, only components with matching colors contribute non-zero affinity to the total affinity. So, as discussed in the <a href="http://jamesxli.blogspot.ca/2014/09/on-geometric-modeling-of-sequential-data.html">initial note</a>, a K dimensional colored vector is mathematically a K×s dimensional spatial vector, where K is the scanning size and s is the number of colors. 
Because of such sparsity, the affinity between two scanning vectors is normally very small, and often too small to carry meaningful information over to the GMS maps.<br /><br />In order to increase the affinity between scanning vectors, we aggregate the K-dimensional colored vector into an <i>s</i>-dimensional vector by adding all components of the same color to form a new uncolored vector component. In particular, as depicted in the following diagram, let C = { 1, 2, ..., s } be the set of colors, <i>s<sub>1</sub></i>, <i>s<sub>2</sub></i>,..., <i>s<sub>k</sub></i> ∈ C be the colors of the <i>k</i> nodes in the scanning machine; and let <i>α<sub>1</sub>,α<sub>2</sub>, ..., α<sub>k</sub></i> be the corresponding amplitudes for the nodes; then the scanning vector V is an <i>s</i>-dimensional vector (<i>v</i><sub>1</sub>, <i>v</i><sub>2</sub>,..., <i>v<sub>s</sub></i>) with: <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-6WqRP_Ew2jE/VVJjF6kevfI/AAAAAAAAk0k/J9dQByCjp6s/s1600/AggregatedAffEq.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-6WqRP_Ew2jE/VVJjF6kevfI/AAAAAAAAk0k/J9dQByCjp6s/s1600/AggregatedAffEq.jpg" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-O964WUBLUZ0/VVJd-a-4W4I/AAAAAAAAk0Y/E1-HuhKZWOg/s1600/AggregatedAffinity.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-O964WUBLUZ0/VVJd-a-4W4I/AAAAAAAAk0Y/E1-HuhKZWOg/s1600/AggregatedAffinity.jpg" /></a></div><br />These two variations have been implemented in VisuMap v4.2.912 with a new affinity metric named Sequence2 Affinity. 
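The aggregation step described above can be sketched in a few lines; this is an illustrative sketch only, with hypothetical function names and the four-letter palette standing in for a general color set:

```python
def aggregate(colors, amps, palette):
    """Collapse a colored scanning vector into an s-dimensional uncolored
    vector: amplitudes of components with the same color are summed."""
    totals = {c: 0.0 for c in palette}
    for color, amp in zip(colors, amps):
        totals[color] += amp
    return [totals[c] for c in palette]
```

For example, a 3-node scanning vector whose first and third nodes share the color "C" collapses into a 4-dimensional vector with its "C" amplitudes summed.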
The following short video shows some GMS maps created with the new affinity metric for some short sequences with 40 to 60 nodes:<br /><br /><center><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/6TUCyYC-gNY?hd=1&vq=hd720" width="460"></iframe></center> </div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-55923352818138480522015-03-24T13:25:00.001-07:002015-03-24T13:27:07.841-07:00On the origin of the helix structure from the view point of GMS<div dir="ltr" style="text-align: left;" trbidi="on">The helix structure is prevalent in the biological world. It can be found at small scales, as in the folding of DNA and protein sequences, and at large scales, as in plants.<br /><br /><center><a href="http://upload.wikimedia.org/wikipedia/commons/d/db/DNA_orbit_animated_static_thumb.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/d/db/DNA_orbit_animated_static_thumb.png" height="320" width="172" /></a> <a href="http://upload.wikimedia.org/wikipedia/commons/thumb/3/3d/DirkvdM_natural_spiral.jpg/360px-DirkvdM_natural_spiral.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/thumb/3/3d/DirkvdM_natural_spiral.jpg/360px-DirkvdM_natural_spiral.jpg" height="320" width="240" /></a></center><br />Whereas the mathematical description of the helix structure is clear, the mechanism that gives rise to it is not so obvious. Do all those structures share a common mechanism? Why don't they show up in the inorganic world? 
This note tries to demonstrate with GMS samples that the helix structure comes about through some simple dynamics rooted in the discrete sequence structure.<br /><br />Recall that GMS produces a sequence of high dimensional vectors from a discrete sequence; the high dimensional vectors are then embedded into low dimensional space according to their <i>affinity</i>, defined by the following <i>affinity function</i>:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-Fm0qLwpEubA/VF0JL2IRFgI/AAAAAAAAaVs/gYGceAQgBzI/s1600/GmsAffinity.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-Fm0qLwpEubA/VF0JL2IRFgI/AAAAAAAAaVs/gYGceAQgBzI/s1600/GmsAffinity.jpg" height="145" width="400" /></a></div><br />where t > t' are the timestamps of two vectors produced at times t and t'. We notice that the affinity function consists of two parts: <br /><center><img border="0" src="http://1.bp.blogspot.com/-OQbRhB9Bk_M/VRCKQLg05NI/AAAAAAAAjwg/2FJgs6ipuX4/s1600/GMSAffinityA.png" /> and <img border="0" src="http://4.bp.blogspot.com/-7jTxTCsbVyg/VRCKTEaxojI/AAAAAAAAjwo/q6TbA258vDI/s1600/GMSAffinityB.png" /> </center><br /><br />The first part depends only on the timestamps: it reduces the affinity calculated by the second part exponentially with the time separating the two vectors. It is the second part that accounts for variations in the sequence, s<sub><i>i</i></sub>.<br /><br />In order to see the effect of decay in the time dimension, I modified the <a href="http://jamesxli.blogspot.ca/2015/02/on-loopy-gms-geometrical-modeling-of.html">loopy GMS algorithm</a> so that the affinity function only contains the first part, while the second part (the sequence dependent part) is set to the constant 1.0. 
I ran the algorithm with the following parameters: n=25000; K=10; the sequence is the constant sequence with 100 copies of the letter 'A'; the decay speed λ is successively set to 0.125, 0.25, 0.5 and 1.0. The following screencast shows the corresponding GMS maps in 3D space:<br /><center><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/mLnXhGDDMHE?hd=1&vq=hd720" width="640"></iframe></center><br />These maps clearly show a helix-like structure, and the winding number goes up as the decay speed λ increases. <br /><br />To verify that it is the exponential decay that leads to the helix-like structure, I replaced the affinity function with three different "decaying" functions: Δt<sup>-2</sup>, Δt<sup>-1</sup> and Δt<sup>-0.5</sup>, with Δt := t-t'. The following picture shows the corresponding GMS maps: <br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-J2hy9fs8Dig/VRGtEWb7szI/AAAAAAAAjw4/YJlEooYFYD8/s1600/HelixNo.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-J2hy9fs8Dig/VRGtEWb7szI/AAAAAAAAjw4/YJlEooYFYD8/s1600/HelixNo.jpg" /></a></div><br />We can see clearly that these decay functions result in structures that are totally different from the helix structure. Thus, these simulations indicate that the exponential decay of affinity plays a significant role in forming the helix structure.<br /><br />We notice that if the scanning size K is sufficiently large and the sequence is random, the affinity contribution of the second (sequence dependent) part will, more or less, become constant. Thus, the helix structure may serve as a model for completely random sequences. From this point of view, we might call the helix structure the no-information base model. 
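The decay laws compared in this experiment can be sketched as follows. This is an illustrative sketch only: the exponential form exp(-λ·Δt) and the constant λ = 0.5 are assumed stand-ins for the exact formula shown in the figures above, and the sequence-dependent part is held at 1.0 as in the modified algorithm:

```python
import math

# Decay laws compared in the experiment: exponential decay versus the
# three power-law replacements dt**-2, dt**-1 and dt**-0.5.
decays = {
    "exp": lambda dt: math.exp(-0.5 * dt),  # lambda = 0.5, illustrative
    "dt^-2": lambda dt: dt ** -2.0,
    "dt^-1": lambda dt: dt ** -1.0,
    "dt^-0.5": lambda dt: dt ** -0.5,
}

def affinity_matrix(n, decay):
    """Pairwise time-only affinities for n evenly spaced scanning vectors
    of a constant sequence (the sequence-dependent part is fixed at 1.0)."""
    return [[1.0 if i == j else decay(abs(i - j)) for j in range(n)]
            for i in range(n)]
```

Feeding each of these matrices to the embedding step is what distinguishes the helix-forming exponential case from the three power-law cases.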
Additional information in sequences should manifest as discrepancies between their GMS maps and the helix structure.<br /><br />Seeing π symbols in a formula, the great physicist Richard Feynman once asked: where is the circle? In terms of modern genetics, his remark basically assumes that π is the "gene" for the circle as a phenotypic feature. Many such analytical "genes" are carried forward and spread around in various fields; they manifest in different forms, but always keep their intrinsic nature unchanged. Here, this note has tried to demonstrate that the "gene" for the helix structure is the exponential decay of affinity.<br /><br /><br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-51612436677960016562015-02-02T19:17:00.003-08:002015-02-02T19:38:09.922-08:00On Loopy GMS (Geometrical Modeling of Sequences)<div dir="ltr" style="text-align: left;" trbidi="on">This note is going to apply the <a href="http://jamesxli.blogspot.ca/2014/11/geometric-modeling-of-sequential-data.html">GMS model</a> to loop-structured sequences. 
The adaptation from the previous serial structure to the loopy structure is straightforward: the only change is, as illustrated in the following diagram, that the two ends of the serial sequence are now connected to each other.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-yduECphfRRU/VMl4KGhbrCI/AAAAAAAAihc/JA2xaZVApXs/s1600/LoopyGMS.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-yduECphfRRU/VMl4KGhbrCI/AAAAAAAAihc/JA2xaZVApXs/s1600/LoopyGMS.png" height="207" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"></div><br />When applying the GMS model, the scanning machine (or the <i>scanner</i>) runs over the loopy sequence and produces a series of high dimensional vectors which are augmented with timestamps. Those high dimensional vectors will then be embedded into low dimensional space with the <a href="http://jamesxli.blogspot.ca/2014/09/on-affinity-embedding.html">affinity embedding</a> algorithm.<br /><br />The first question arising in this new scenario is whether the GMS model will produce loop-like geometrical shapes when the scanner runs a whole loop back to its initial location. The answer to this question is yes, but under two conditions. First, the decay speed parameter λ used to calculate the affinity between vectors must be zero, so that the effect of timestamps will be nullified. Otherwise, if λ is not zero, different vectors will be produced when the scanner comes back to its initial position.<br /><br />Secondly, the total number of nodes, L, in the loop must be a multiple of the dimension of the output vector (without the timestamp component), i.e. the parameter K denoted in the <a href="http://jamesxli.blogspot.ca/2014/11/geometric-modeling-of-sequential-data.html">previous note</a>. This requirement is necessary because of the <i>circular shifting</i> of the scanned vectors. 
If L is not a multiple of K, after a whole loop the scanner will produce the same set of values, but circularly shifted into a different order, so that they will normally be different as vectors in the high dimensional space.<br /><br />The following picture shows the resulting maps produced by GMS for a short loopy sequence for the different cases discussed above.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-ZtSdqudwJpY/VMpjCoY0YZI/AAAAAAAAihs/zOdUKayQkSk/s1600/LoopGMS2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-ZtSdqudwJpY/VMpjCoY0YZI/AAAAAAAAihs/zOdUKayQkSk/s1600/LoopGMS2.png" height="496" width="640" /></a></div><div style="text-align: left;"> <b>Figure 1: </b>Conditions for loopy output maps from a loopy sequence. The input sequence is "CCC TGT GGA GCC GGA GCC ACA AGT", K=6. (A) The decay speed λ=0.1; the resulting map is a broken loop in the 3D space. (B) The sequence is extended with an extra node 'G', so that L is not a multiple of K; the resulting map is a broken loop. (C) λ = 0 and L is a multiple of K; the resulting map is a loop in the 3D space.</div><br />With a loopy sequence, the scanner can in general run multiple rounds around the loop sequence. In this way, the scanner can produce multiple sets of vectors offset just by different timestamps. These vector sets form repeating patterns when embedded into low dimensional space. The following short video shows the resulting maps for the above sequence for repeat numbers <i>n = 1, 2, 5 </i>and<i> 10. </i>The decay speed λ is set to 0.1. We notice that in these maps, the repeated structure gradually becomes flattened as the repeat number grows. Consequently, the output map changes from a simple repeated structure to a tube-like shape. 
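The two loop-closure conditions above can be expressed as a small helper; a hypothetical sketch, with the circular shift growing by one per scanned node as described in the previous note:

```python
def shift_after_full_loop(L, K):
    """Net circular shift of a K-dimensional scanning vector after the
    scanner has traversed all L nodes of the loop (the shift length
    grows by 1 each time a new node enters the machine)."""
    return L % K

def map_closes(L, K, decay_speed):
    # The GMS map closes into a loop only if the timestamps are
    # nullified (decay speed 0) and the accumulated shift vanishes.
    return decay_speed == 0 and shift_after_full_loop(L, K) == 0
```

For the 24-node example sequence with K = 6 this predicts a closed loop; appending one extra node (L = 25) or setting λ = 0.1 breaks it, matching panels (A), (B) and (C) of Figure 1.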
We notice that both repeated structures and tubes are fairly frequent structures in biological systems as phenotypic traits.<br /><br /> <div align="center"><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/BuYN39QpDVY?hd=1&vq=hd720" width="490"></iframe></div> <br />We notice that running the scanner through multiple loops is effectively the same as scanning multiple copies of a sequence sequentially. Thus, loopy scanning might be considered a manifestation (or extension) of sequential scanning where the sequence is the concatenation of a variable number of copies of a reference sequence. <br /><br />Another extension of the loopy GMS model is, as depicted in the following diagram, using two scanners to scan the loop in opposite orientations, then concatenating their outputs with a timestamp to form the final high dimensional vectors.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-5tbnOeKQ3Ag/VM_HHeEQaXI/AAAAAAAAjB4/-v3Hi7vh7nw/s1600/DualScanner.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-5tbnOeKQ3Ag/VM_HHeEQaXI/AAAAAAAAjB4/-v3Hi7vh7nw/s1600/DualScanner.png" height="343" width="400" /></a></div>The loopy GMS model with dual scanners normally produces symmetrical shapes. The reason for this is that for each <i>configuration of the scanners</i> (i.e. their specific positions on the loop during the scanning) there is always a "dual" configuration in which the two scanners simply swap their positions. Because of this, the whole collection of output vectors may be split into two sets which differ from each other just by the different timestamps. 
The following short video shows various symmetrical shapes produced by loopy GMS with dual scanners for various sequences:<br /><br /> <div align="center"><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/YX4eduERhTA?hd=1&vq=hd720" width="450"></iframe><br /></div> <br /><b>Discussion</b><br /><br />This note has experimentally demonstrated that the GMS model with simple extensions may produce interesting macroscopic shapes, like repeating patterns, tubes and symmetrical structures. Understanding how those discrete sequences give rise to geometrical patterns might help us investigate how the genetic code determines the phenotypic traits of biological organisms. <br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-48338895181190340362014-11-08T13:25:00.001-08:002015-05-12T11:33:27.304-07:00Geometric Modeling of Sequential Data (continued)<div dir="ltr" style="text-align: left;" trbidi="on">In a previous note <a href="https://www.blogger.com/">geometrical modeling of sequential data</a> (GMS) I have described a framework to convert discrete sequential data to 3D geometrical shapes. This note is going to extend GMS to a general form that allows more efficient sampling of higher dimensional data from sequence data. <br /><br /><b>The GMS framework</b><br /><br />Recall that GMS consists of three basic steps: 1. Scanning a sequence to produce a collection of high dimensional vectors; 2. Reordering the components of these vectors to harmonize the sampled vectors; 3. Applying a <i>dimensionality reduction</i> algorithm to embed these vectors into low dimensional space. For the first step, we start with a sequence <i>s<sub>k</sub></i> ∈ S; k=0, 1, 2, ..., K; where S is a finite alphabet. 
Imagine that the sequence has been put through a <i>scanning machine</i> that quickly takes a series of snapshots of the part that is currently in the machine:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-oeMecRVW5R4/VF0If2tzohI/AAAAAAAAaVI/eyLi8DogZKA/s1600/GmsScanning.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="206" src="http://3.bp.blogspot.com/-oeMecRVW5R4/VF0If2tzohI/AAAAAAAAaVI/eyLi8DogZKA/s1600/GmsScanning.jpg" width="400" /></a></div><br />In the above diagram, the illustrated scanning machine is sized to hold 3 nodes of the sequence. As output the machine produces a series of sampling vectors <b>V</b><sub><i>t</i></sub> for a series of discrete time points. A sampling vector <b>V</b><sub><i>t</i></sub> is actually a 2<i>n</i>+1 dimensional vector (α<sub>0</sub>, ..., α<sub><i>n</i>-1</sub>; s<sub>k</sub>, ..., s<sub><i>k+n</i>-1</sub>; <i>t</i>). The above diagram illustrates the case <i>n</i>=3. Here, s<sub><i>k</i></sub>, ..., s<sub><i>k+n</i>-1</sub> are the types of the <i>n</i> nodes currently passing through the machine; α<sub>0</sub>, ..., α<sub><i>n</i>-1</sub> are the coefficients, called <i>amplitudes</i>, for the corresponding nodes. For the sake of compactness we denote the pair (α<sub><i>k</i></sub>, s<sub>k</sub>) simply as α<sub><i>k</i></sub>s<sub>k</sub> in the above diagram. 
<br /><br />The amplitude α<i><sub>k</sub></i> for the k-th node at time <i>t</i> is calculated as follows:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-BlqdVlMLHEY/VF0IvQa_WKI/AAAAAAAAaVQ/se7UBtrfRi8/s1600/GmsAmplitude.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="34" src="http://4.bp.blogspot.com/-BlqdVlMLHEY/VF0IvQa_WKI/AAAAAAAAaVQ/se7UBtrfRi8/s1600/GmsAmplitude.jpg" width="200" /></a></div><br />The right side of the above formula is called the amplifying function. It implies that the node sequence passes through the scanning machine at the speed of one node per time unit. In general, any continuous function that is zero at the points 0 and <i>n</i> could be used as an amplifying function.<br /><br />We notice that the above formula specifies a continuous function with respect to <i>t</i> as long as no new node enters the machine to replace an old one. If a new node, say s<sub>k-1</sub>, enters the machine to replace an old one, say s<sub>k+2</sub>, during a small interval between t and t', the sampling vector undergoes the following change:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-8RQpz--VUCU/VF0I3oZ3oZI/AAAAAAAAaVY/OLKtbnrCO0A/s1600/GmsNewNode.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="158" src="http://1.bp.blogspot.com/-8RQpz--VUCU/VF0I3oZ3oZI/AAAAAAAAaVY/OLKtbnrCO0A/s1600/GmsNewNode.jpg" width="400" /></a></div><br />Since the amplifying function vanishes at the entry point 0 and the exit point <i>n</i>, the amplitudes α<sub>2</sub> and α'<sub>0</sub> will be close to zero; and since the amplifying function is continuous, we'll have α<sub>0</sub>≈α'<sub>1</sub> and α<sub>1</sub>≈α'<sub>2</sub>. 
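One admissible amplifying function can be sketched as follows; the sin form here is an illustrative assumption satisfying only the stated constraint (continuous, zero at the entry and exit points), not the exact function shown in the figure above:

```python
import math

def amplify(x, n):
    """A continuous amplifying function that vanishes at both the entry
    point x = 0 and the exit point x = n; sin is an illustrative choice,
    not the formula from the figure."""
    if x <= 0 or x >= n:
        return 0.0
    return math.sin(math.pi * x / n)
```

With this choice a node's amplitude rises smoothly from zero as it enters the machine, peaks in the middle, and falls back to zero as it exits, which is exactly what makes the sampling vectors change gradually.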
Thus, when we apply a circular shift to <b>V</b><sub>t'</sub> as illustrated in the following diagram, the shifted vector <b>V̂</b><sub>t'</sub> will be close to <b>V</b><sub>t</sub>:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-zF20eB9lRsI/VF0JELWzQBI/AAAAAAAAaVg/-pmYYqZNGvE/s1600/GmsCircularShifting.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="155" src="http://2.bp.blogspot.com/-zF20eB9lRsI/VF0JELWzQBI/AAAAAAAAaVg/-pmYYqZNGvE/s1600/GmsCircularShifting.jpg" width="320" /></a></div><br />Notice that the third components of <b>V̂</b><sub>t'</sub> and <b>V</b><sub>t</sub> may be values for different node types (e.g. s<sub><i>k</i>-1</sub> ≠ s<sub><i>k</i>+2</sub>), but the amplitudes α<sub>2</sub> and α'<sub>0</sub> are close to zero, so that <b>V̂</b><sub>t'</sub> will be close to <b>V</b><sub>t</sub> as high dimensional vectors. Thus, after applying the circular shifting operation, the scanning machine will produce a series of gradually changing vectors in high dimensional space. <br /><br />More generally, for the implementation of the second step the scanning machine will have a circular <i>shifting operator</i> to circularly shift the sampled vectors; and the shifting operator will increment its shifting length by 1 every time a new node enters the machine.<br /><br />As the third step, a <i>dimensionality reduction</i> algorithm will be applied to the vectors to embed them into a low dimensional space. For this study I picked the <a href="http://jamesxli.blogspot.ca/2014/09/on-affinity-embedding.html"><i>affinity embedding</i></a> (AE) algorithm. 
I have used the <a href="http://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding">t-SNE</a> algorithm in the <a href="http://jamesxli.blogspot.ca/2014/09/on-geometric-modeling-of-sequential-data.html">previous note</a>, but in my experiments, AE normally worked better for this purpose, as it runs much faster for large datasets. To apply the AE algorithm we need to define either a <i>distance metric</i> to measure dissimilarity between data points, or an <i>affinity function</i> that measures the similarity (a kind of attraction) between data points. For this study, we use the following affinity function:<br /><a href="https://www.blogger.com/null" name="AffinityFunction" id="AffinityFunction"></a> <br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Fm0qLwpEubA/VF0JL2IRFgI/AAAAAAAAaVo/ETLFaUOqes4/s1600/GmsAffinity.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="145" src="http://3.bp.blogspot.com/-Fm0qLwpEubA/VF0JL2IRFgI/AAAAAAAAaVo/ETLFaUOqes4/s1600/GmsAffinity.jpg" width="400" /></a></div><br />where the summation runs over all <i>k</i> between 0 and <i>n</i>-1<i>, </i>such that s<sub><i>k</i></sub> = s'<sub><i>k</i><i> </i></sub>; λ is an algorithmic constant that can be any positive value. In plain words, if we define the <i>affinity</i> between two nodes as the product of their amplitudes, then the affinity between two sampling vectors is the sum of the affinities between all matching nodes, decayed by the time elapsed between the two samplings. The constant λ, called the <i>decay speed</i>, controls how fast the affinity diminishes with the time elapsed between the two samplings. 
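In code, the verbal description of this affinity reads roughly as follows; the factor exp(-λ·|t - t'|) is an assumed stand-in for the exact decay term shown in the formula above, and the function name is hypothetical:

```python
import math

def sample_affinity(amps1, types1, t1, amps2, types2, t2, lam):
    """Affinity between two sampling vectors: the sum of amplitude
    products over positions whose node types match, decayed by the
    time elapsed between the two samplings (decay form assumed)."""
    core = sum(a1 * a2
               for a1, a2, s1, s2 in zip(amps1, amps2, types1, types2)
               if s1 == s2)
    return math.exp(-lam * abs(t1 - t2)) * core
```

With λ = 0 the time decay disappears and only matching node types contribute; a larger λ suppresses the affinity of samplings taken far apart in time.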
More specifically, the affinity will reduce to half of its value if the two sampling vectors are separated by λ<i>n</i> nodes.<br /><br /><b>The effect of the scanning size <i>n</i></b><br /><br />The scanning size <i>n</i> is a key parameter for the scanning machine: it determines how many consecutive nodes of the sequence will be read (or scanned) to construct an output vector. For a larger scanning size, the affinities between data points will aggregate over a larger number of consecutive nodes. Thus, the scanning size controls a kind of granularity of the scanning machine.<br /><br />In order to see the effect of the scanning size, I downloaded the DNA sequence of a relatively short gene, the CD8 gene, which consists of 744 base pairs. This sequence was then processed by the GMS framework to create 25000 data points, which were then embedded into 3D space. The following video shows the resulting 3D maps created with different scanning sizes: <iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/mTjzDQ4jJ3o?hd=1&vq=hd720" width="640"></iframe> <br /><br />We see clearly that the 3D curve gradually becomes simpler and smoother as the scanning size grows from 3 to 24.<br /><br /><b>The effect of the decay speed </b>λ<br /><br />The decay speed λ provides a way to differentiate samplings created at different times. A larger λ means that the affinity between similar node patterns will diminish faster as the elapsed time between them grows. To demonstrate the effect of the decay speed I created a series of maps for the CD8 gene sequence with increasing λ. The following short video shows those maps:<br /><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/NZUMIt1Nt-0?hd=1&vq=hd720" width="640"></iframe> <br /><br />We can see clearly that the map stretches gradually as λ grows from 0 to 0.08. 
<br /><br /><b>Implementing GMS in VisuMap</b><br /><br />VisuMap version 4.2.905 offers support for the GMS framework through two new metrics, "Sequence Metric" and "Sequence Affinity". The former is a distance metric that can be used by most mapping algorithms; the latter is an affinity metric that can only be used by the <i>affinity embedding</i> algorithm.<br /><br />In order to create a model for a sequence with VisuMap, we first create a data table with a single column and 5 to 10 thousand rows. The content of the table is not relevant for the modelling; only the number of rows will be used by the scanning machine as the number of samplings. (The link <a href="http://www.visumap.com/rc1.aspx?f=SeqVis.xvmz&g=res">SeqVis</a> provides such a sample dataset together with some sample maps and supporting scripts.) We then open the map configuration window to select "Sequence Affinity" as the metric, and specify a new filter as settings for the scanning machine. The following picture shows the settings for the CD8 sequence used in the previous examples.<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-JtejWAdteA4/VFvq2aMHsaI/AAAAAAAAaUU/hWEHKwYTtxQ/s1600/ScanningSettings.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="204" src="http://2.bp.blogspot.com/-JtejWAdteA4/VFvq2aMHsaI/AAAAAAAAaUU/hWEHKwYTtxQ/s1600/ScanningSettings.png" width="400" /></a></div><br />Notice that the field labeled "Stretch Factor" sets the value for the <i>decay speed</i>; in normal cases this parameter defines how far the resulting maps will be stretched along the sequence direction.<br /><br />Also notice that spaces and newline characters in the sequence given in the Sequence Definition window will be ignored by the scanning machine. With this filter definition window we can easily generate 3D maps for arbitrary sequences. 
The following pictures show some 3D maps together with their corresponding sequences:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-t5eSzjVTsUE/VF0E-mh7A0I/AAAAAAAAaUk/XiFmNflAxCk/s1600/Loop1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="273" src="http://4.bp.blogspot.com/-t5eSzjVTsUE/VF0E-mh7A0I/AAAAAAAAaUk/XiFmNflAxCk/s1600/Loop1.jpg" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-fo6EA3MTIN0/VF0E_0HJ8GI/AAAAAAAAaUs/DsAWivFyi4k/s1600/Loop2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="272" src="http://2.bp.blogspot.com/-fo6EA3MTIN0/VF0E_0HJ8GI/AAAAAAAAaUs/DsAWivFyi4k/s1600/Loop2.jpg" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-3d6qHE_DFFk/VF0FBYpkooI/AAAAAAAAaU0/ZGHy3KcbD_I/s1600/Loop3.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="275" src="http://3.bp.blogspot.com/-3d6qHE_DFFk/VF0FBYpkooI/AAAAAAAAaU0/ZGHy3KcbD_I/s1600/Loop3.jpg" width="400" /></a></div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-NSKaQmaxCKM/VF0FCiFPvGI/AAAAAAAAaU8/yDq4-mheK4k/s1600/Loop4.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="271" src="http://1.bp.blogspot.com/-NSKaQmaxCKM/VF0FCiFPvGI/AAAAAAAAaU8/yDq4-mheK4k/s1600/Loop4.jpg" width="400" /></a></div><br /><br /><b>Discussions</b><br /><br />We have extended the GMS framework with larger scanning sizes and a more efficient dimensionality reduction algorithm. 
With these extensions we can model much larger sequences from different perspectives, and therefore capture more information from more realistic sequences.<br /><br />We notice that the proposed framework has a certain similarity with the ribosome machinery that translates RNA sequences into proteins. Just as biologists believe all macroscopic patterns and features are ultimately encoded in DNA sequences, I believe that GMS can capture a large class of relevant patterns in discrete sequences with 3-dimensional geometrical models. GMS thus offers us a toy ribosome machine to simulate the translation of sequential information into geometrical models.<br /><br /><br /><br /><br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-14772149227748644862014-09-23T09:13:00.003-07:002014-09-29T07:25:48.845-07:00On geometric modeling of sequential data<div dir="ltr" style="text-align: left;" trbidi="on">The biological cellular system appears to be a universal machine that is capable of translating discrete sequential data, i.e. <i>genetic code</i>, into all kinds of organisms with diverse <i>phenotypic features</i>. In this note I am going to put forward a computational framework that translates discrete sequential data to lower dimensional <i>geometrical shapes</i>. Inspired by the biological cellular system, this framework first samples high dimensional vectors from the discrete data; then it applies <i>dimensionality reduction</i> methods to embed those vectors into low dimensional space. The resulting map normally forms shapes with geometrical properties that reflect information in the initial sequence.<br /><br />Let's start with an input sequence of nucleic acids <i>s<sub>k</sub></i>; k=0, 1, 2, ..., K with <i>s<sub>k</sub></i> ∈ S:={C, G, A, T}. To illustrate the sampling process, we imagine that the sequence is a series of marbles of 4 different imaginary colors. 
Mathematically, these "colors", i.e. the set S, can be represented by 4 dimensional vectors, each with just a single non-zero component, as follows:<br /><br /><div class="separator" style="clear: both; text-align: left;"><a href="http://3.bp.blogspot.com/-3J1WIM42RPY/VBoAGU2DxMI/AAAAAAAAaGs/r_g4H32EHJk/s1600/CGAT2Vector.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-3J1WIM42RPY/VBoAGU2DxMI/AAAAAAAAaGs/r_g4H32EHJk/s1600/CGAT2Vector.jpg" /></a></div><div class="separator" style="clear: both; text-align: center;"></div>Now, imagine we have a camera that moves slowly from one end of the sequence to the other; during the movement we quickly take a series of pictures of the two closest marbles. These pictures record the different colors and intensities of the two closest marbles. As depicted in the following schema, we code those "pictures" as 8 dimensional vectors:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-EJZK8pQY5es/VBpP23Kn5_I/AAAAAAAAaHQ/Q-03WZP9rd0/s1600/SeqentialSampling.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-EJZK8pQY5es/VBpP23Kn5_I/AAAAAAAAaHQ/Q-03WZP9rd0/s1600/SeqentialSampling.jpg" /></a></div>In the above schema, α and β are the intensity coefficients for the two closest marbles (they can be multiplied directly into the first and second halves of the 8-vectors, respectively). α and β are calculated as follows: we assume that the camera moves evenly, one marble per time unit. So at a given time, say <i>t<sub>1</sub></i>, the camera is located between the k-th and (k+1)-th marbles; then the <i>phase</i> of the camera is calculated as: <i>p<sub>1</sub></i> := <i>t<sub>1</sub></i> - k, where k := ⌊<i>t<sub>1</sub></i>⌋ is the largest integer not exceeding <i>t<sub>1</sub></i>. 
Then, α and β are calculated as: α := cos(<i>p<sub>1</sub></i>π/2); β := sin(<i>p<sub>1</sub></i>π/2). We notice here that when the camera moves smoothly from k to k+1-ε for any infinitesimally small number ε, the pair (α, β) changes smoothly from (1, 0) to (0, 1).<br /><br />However, at the time point k+1, the pair (α, β) takes the value (1, 0), since the camera is now considered to be between the k+1-th and k+2-th marbles. That means (α, β), as a time dependent 2-dimensional variable, is not continuous at the integer time points; consequently, the corresponding 8-vectors in general won't change smoothly in the 8-dimensional space.<br /><br />In order to avoid this discontinuity, we swap the two 4-vector halves of an 8-vector for those vectors produced when ⌊<i>t<sub>1</sub></i>⌋ is an odd integer. That means in the above schema, at the time <i>t<sub>2</sub></i>, the 8-vector produced is swapped from v<sub>2</sub> to v'<sub>2</sub> as shown in the following diagram:<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-joAHjnyIbrs/VBpP5k5GurI/AAAAAAAAaHY/25K_RLlPqt4/s1600/Swap4Vectors.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-joAHjnyIbrs/VBpP5k5GurI/AAAAAAAAaHY/25K_RLlPqt4/s1600/Swap4Vectors.jpg" /></a></div>With this swapping operation, we can easily verify that the 8-dimensional vector sequence actually changes smoothly as the camera smoothly proceeds.<br /><br />In addition to these 8 components, the sampling process at each step also adds a timestamp <i>ct</i> as the 9-th component of the output vectors, where <i>c</i> is a small constant chosen as an algorithm parameter.<br /><br />Summing up the above steps, the sampling process can be denoted as an operator that operates on discrete sequences <i>s</i> as follows:<br /><div class="separator" style="clear: both; text-align: left;"><a 
href="http://2.bp.blogspot.com/-3NZie0DacTQ/VBslB-tJGNI/AAAAAAAAaHw/3HW5zotBw5g/s1600/SamplingOperator.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-3NZie0DacTQ/VBslB-tJGNI/AAAAAAAAaHw/3HW5zotBw5g/s1600/SamplingOperator.png" height="208" width="320" /> </a></div><div class="separator" style="clear: both; text-align: left;">where t ∈ [0, K] is the time parameter; <i>c </i>is an algorithmic parameter; k := ⌊<i>t</i>⌋ is the sequential index; and α := cos(<i>p</i>π/2) and β := sin(<i>p</i>π/2), with p := t - k, are the intensity coefficients for the two closest marbles.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">After we have sampled a set of 9-dimensional vectors, we then use a dimensionality reduction (DR) algorithm to embed those vectors into 2- or 3-dimensional space. For our framework, we need a DR algorithm that is strong in preserving local information, since the "camera" basically samples only local information. Algorithms like <a href="http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Relational_perspective_map">RPM</a>, <a href="http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Curvilinear_distance_analysis">CCA</a>, <a href="http://jamesxli.blogspot.ca/2014/09/on-affinity-embedding.html">Affinity Embedding</a> and <a href="http://en.wikipedia.org/wiki/T-distributed_stochastic_neighbor_embedding">t-SNE</a> appear to be good candidates for this purpose. For this note I picked the t-SNE algorithm implemented in the VisuMap software. </div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">As a first example, we use our algorithm to model the short nucleic acid sequence "CCCTGTGGAGCCACACCCTAG". 
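The sampling operator described above can be sketched in a few lines of Python. This is a minimal illustrative sketch of the described procedure; the function names and the toy alphabet handling are mine, not part of VisuMap:

```python
import math

def one_hot(base, alphabet="CGAT"):
    """Represent a base as a 4-dimensional vector with one non-zero component."""
    v = [0.0] * len(alphabet)
    v[alphabet.index(base)] = 1.0
    return v

def sample(seq, t, c):
    """Sample the 9-dimensional vector from sequence `seq` at time t in [0, len(seq)-1)."""
    k = int(math.floor(t))            # index of the left one of the two closest marbles
    p = t - k                         # phase of the camera between marbles k and k+1
    alpha = math.cos(p * math.pi / 2)
    beta = math.sin(p * math.pi / 2)
    left = [alpha * x for x in one_hot(seq[k])]
    right = [beta * x for x in one_hot(seq[k + 1])]
    # Swap the two 4-vector halves when k is odd, so the 8-vector varies
    # continuously across integer time points.
    halves = left + right if k % 2 == 0 else right + left
    return halves + [c * t]           # append the timestamp component c*t
```

For instance, `sample("CCCTGTGGAGCCACACCCTAG", 1.5, 0.01)` yields the vector for the camera positioned halfway between the 2nd and 3rd marbles; one can check numerically that the output changes continuously as t crosses an integer, which is exactly what the swapping operation is for.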
Notice that the timestamp component <i>ct </i>in the above description is the only component that grows unboundedly with time; the other 8 components are confined to the range between 0 and 1.0. Thus, a larger <i>c </i>will likely stretch the model further apart. In order to show this property, we created four different data sets with c=0, 0.5/N, 2.5/N, and 10/N, where N is the number of samplings, which is 15000 in this note. The following four clips show the resulting geometric models of these data sets. For the sake of clarity, I have colored the data points with increasing brightness as time progresses; and when the scanning camera is closest to a marble, the corresponding sampling data point is represented with an icon that indicates the type of the nucleic acid.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/efGBVv5LbV8?hd=1&vq=hd720" width="640"></iframe> <iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/gc8QLgLpinQ?hd=1&vq=hd720" width="640"></iframe> <iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/bfnnjPCd68w?hd=1&vq=hd720" width="640"></iframe> <iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/qxTFjBJbZXA?hd=1&vq=hd720" width="640"></iframe> <br /><div class="separator" style="clear: both; text-align: left;">The above models show clearly that the coefficient <i>c</i> controls how far the model gets stretched. The other 8 components contribute information to the folding structure.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The second example compares models created from reversed sequences. 
The following clip compares the model of the previous sequence with the model of its reversed sequence. We can see that the two models are geometrically more or less unchanged, except that the gradient direction is reversed. This means that the described modeling method is invariant under sequence inversion.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/aejhglxavZo?hd=1&vq=hd720" width="640"></iframe> <br /><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The third example shows how the model changes when the sequence is duplicated multiple times. The following video clip shows the model of our sample sequence and the model of a 5-fold duplicate of that sequence. We can see that the model of the 5-fold duplicate resembles five overlapping copies of the single-sequence model.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/t82YgOGIjlc?hd=1&vq=hd720" width="640"></iframe> <br /><div class="separator" style="clear: both; text-align: left;">As the next example, we created a model for the protein sequence MELNSSFWTLIKTKMKSRNDNNKLLDTWLDPIEYVSTTGSADRP. The slightly more complicated model is shown in the following video clip: </div><br /><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/DlJSNHtjtOE?hd=1&vq=hd720" width="640"></iframe> <br /><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We notice that the sampled vectors are all sparse vectors. 
In practical implementations, we can directly calculate the Euclidean distance between those vectors, without explicitly constructing the high dimensional vectors. In this way, we can model sequences over any base set. As a final example,</div><div class="separator" style="clear: both; text-align: left;">I have created a 3D model for the text sequence "One fish two fish red fish blue fish", as shown in the following video clip:</div><div class="separator" style="clear: both; text-align: left;"><br /></div><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/Sw1GMzhE30E?hd=1&vq=hd720" width="640"></iframe> <br /><br /><b>Discussion </b><br /><br />The framework described here demonstrates a new way to derive geometrical models from discrete data. The framework is abstract in the sense that it is independent of the physical and chemical properties of the sequence. The experiments demonstrate that a high dimensional feature space might be an effective way, as an intermediate stage, to derive 3-dimensional phenotypic structure.<br /><br />We notice that the models created above are all basically 1-dimensional curves folded in different ways. For future study, it would be interesting to develop a 2-dimensional sampling method that produces 2- or 3-dimensional models. More speculatively, can we develop an evolutionary process to find a sequence that gives rise to models with certain properties?<br /><br /><br /><br /><br /><br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-78179937207237793822014-09-15T11:30:00.000-07:002014-09-15T11:30:10.149-07:00On affinity embedding <div dir="ltr" style="text-align: left;" trbidi="on">We have just released <a href="http://visumap.com/index.aspx?p=Products">VisuMap version 4.2.902</a>. A new mapping algorithm called Affinity Embedding (AE) has been introduced in this release. 
I'll briefly describe AE as a multidimensional scaling (MDS) method; or, in more recent terminology, as a dimensionality reduction method.<br /><br />Like many MDS methods, AE tries to create a low dimensional representation of high dimensional data points by minimizing a stress function. AE's stress function is as follows:<br /><div class="separator" style="clear: both; text-align: left;"><a href="http://1.bp.blogspot.com/-Sc2q32fXY-A/VBcZJg_HgYI/AAAAAAAAaFM/KFQa5fV-_jo/s1600/AEStress.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://1.bp.blogspot.com/-Sc2q32fXY-A/VBcZJg_HgYI/AAAAAAAAaFM/KFQa5fV-_jo/s1600/AEStress.png" /></a></div>where <i>a<sub>ij</sub></i> is a positive number representing the affinity between data points i and j; <i>d<sub>ij</sub></i> is the Euclidean distance between those two points in the low dimensional space. <i>Affinity </i>as a generic concept quantifies a certain kind of similarity between data entities. We notice that most MDS methods try to minimize the difference (e.g. squared error) between a dissimilarity matrix and <i>d<sub>ij</sub></i>; whereas AE tries to approximate the similarity matrix.<br /><br />Since many data sets in practice are characterized by a distance matrix <i>δ<sub>ij</sub></i>, VisuMap uses the following exponential <i>affinity function</i> to convert a distance matrix to an affinity matrix: <br /><div class="separator" style="clear: both; text-align: left;"> <a href="http://3.bp.blogspot.com/-LxtjVn1U9sY/VBce0p0HYyI/AAAAAAAAaFY/kWkmAahSg6o/s1600/ExpAffinityFunction.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-LxtjVn1U9sY/VBce0p0HYyI/AAAAAAAAaFY/kWkmAahSg6o/s1600/ExpAffinityFunction.png" /></a></div>where <i>c<sub>ij</sub></i> are constants estimated based on densities around data points in the high dimensional space. 
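Since the exact affinity formula and VisuMap's density-based estimate of c<sub>ij</sub> are shown only in the figures above, the following Python sketch gives one plausible reading of such a conversion — a<sub>ij</sub> = exp(-δ<sub>ij</sub>/c<sub>ij</sub>), with c<sub>ij</sub> derived from mean nearest-neighbor distances. It is an illustrative assumption, not VisuMap's actual implementation:

```python
import math

def exponential_affinity(dist, n_neighbors=5):
    """Convert a distance matrix into an affinity matrix via a_ij = exp(-d_ij / c_ij).

    The local scale c_ij is estimated here (hypothetically) as the average of the
    mean nearest-neighbor distances around points i and j; VisuMap's exact
    density-based estimate may differ.
    """
    n = len(dist)
    scale = []
    for i in range(n):
        # Mean distance to the n_neighbors nearest other points around i.
        nearest = sorted(d for j, d in enumerate(dist[i]) if j != i)[:n_neighbors]
        scale.append(sum(nearest) / len(nearest))
    return [[math.exp(-dist[i][j] / (0.5 * (scale[i] + scale[j]) + 1e-12))
             for j in range(n)] for i in range(n)]
```

Any such monotonically decreasing map sends small distances to affinities near 1 and large distances to affinities near 0, which is the property AE's stress function exploits.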
This affinity function is just one simple way to convert dissimilarity to similarity; in fact, any monotonically decreasing function may be used as an affinity function. For more general applications, VisuMap offers a plugin interface to allow users to implement custom affinities.<br /><br />AE uses a simple version of the gradient descent method to minimize the stress function. The computational complexity of each iterative step is O(<i>kN</i>), where N is the number of data points and <i>k </i>is a constant that is typically around 100. Moreover, the memory complexity of AE for each step is O(N). Thus, AE can be applied to relatively large data sets, e.g. with a million or more data points.<br /><br />The main characteristic of AE is its focus on local information (i.e. data with high affinity or low dissimilarity). The reason for this is that lower affinities are represented by close-to-zero values, and the optimization algorithm can effectively ignore zero values; whereas other algorithms like CCA or the Sammon map need to reduce the effect of large values (e.g. large distances or large dissimilarities), which is normally harder to do.<br /><br />AE has been largely inspired by the algorithm Stochastic Neighbor Embedding (SNE) and its more recent version t-SNE. Both SNE algorithms work pretty well in focusing on local information, and both of them use a kind of exponential affinity function in their stress functions. However, because of their probabilistic nature, they in general require quadratic computational and memory complexity, which makes it difficult to attack large data sets. <br /><br />Like SNE, AE works pretty well in unfolding closed manifolds. The following pictures compare AE and t-SNE in mapping three spheres of different sizes in 3D space to the 2D plane. 
Notice that AE somehow preserves the sizes of the three spheres, whereas t-SNE basically ignores the size information:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-8XrrUGq_jiE/VBctqqYV9iI/AAAAAAAAaFk/vmULxxnl6u4/s1600/ThreeSpheres.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-8XrrUGq_jiE/VBctqqYV9iI/AAAAAAAAaFk/vmULxxnl6u4/s1600/ThreeSpheres.jpg" height="261" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;">Data set forming three spheres in 3D space.</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-u4m4XgG3YFI/VBcttEwjxxI/AAAAAAAAaFs/g_zpxieF05c/s1600/ThreeSpSNE.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-u4m4XgG3YFI/VBcttEwjxxI/AAAAAAAAaFs/g_zpxieF05c/s1600/ThreeSpSNE.jpg" height="310" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;">The t-SNE map for the three sphere data set.</div><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-LqsgJsP-v3M/VBctubwIeTI/AAAAAAAAaF0/Td9092RQ6l8/s1600/ThreeSpAE.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-LqsgJsP-v3M/VBctubwIeTI/AAAAAAAAaF0/Td9092RQ6l8/s1600/ThreeSpAE.jpg" height="296" width="320" /></a></div><div class="separator" style="clear: both; text-align: center;">The AE map for the three sphere data set. </div><br /><br /><br /><br /><br /><br /><br /><br /><br /></div>James X. 
Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-21871769808886989482014-01-15T12:17:00.002-08:002014-01-15T12:17:37.411-08:00VisuMap 4.2.896 Released<div dir="ltr" style="text-align: left;" trbidi="on">I have just released our software package <a href="http://www.visumap.com/index.aspx?p=Products">VisuMap version 4.2.896</a>. In this release we have upgraded the graphics library DirectX from version 9 to version 11. Due to this upgrade, some of the 3D animation data views have improved performance by more than tenfold. VisuMap is now capable of interactively exploring more than one million data points in a 3D view on a normal desktop computer.<br /><br />Also with this upgrade, VisuMap will require Windows 7 or higher. Windows XP will continue to be supported, but without new features that take advantage of the DirectX 11 library. In the future, we'll likely implement more performance enhancements by tapping the computational power of the GPU.<br /><br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-54440263089067210122013-11-04T15:39:00.002-08:002014-09-24T11:02:26.981-07:00On Multidimensional Sorting<div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on">In a <a href="http://jamesxli.blogspot.ca/2013/10/on-shape-of-pictures.html">previous blog post</a> I talked about a method to convert photos to high dimensional datasets for analysis with MDS methods. This post will address the opposite problem: given a high dimensional dataset, can we convert it to a photo-like 2D image for easy feature recognition by humans?<br /><br />Let's consider our sample dataset, S&P 500. A direct approach to creating a 2D map is to simply convert the matrix of real values to a matrix of gray-scale pixels. 
This can be easily done in VisuMap with the heatmap view and the gray-scale spectrum. The following picture shows the S&P 500 dataset in the normal line diagram view on the left side and the corresponding heatmap on the right side:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-BLS-zzBhl_E/UnKT5Lz3orI/AAAAAAAAJyU/kU8TgQMgth4/s1600/SP500e.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="284" src="http://3.bp.blogspot.com/-BLS-zzBhl_E/UnKT5Lz3orI/AAAAAAAAJyU/kU8TgQMgth4/s640/SP500e.jpg" width="640" /></a></div><br />Each curve in the left picture corresponds to the price history of an S&P 500 component stock over a year. We notice heavy overlaps among the curves. On the right side, the heatmap represents the stock prices as row strips of varying brightness. In the heatmap, we can roughly spot two systematic downturns as two slightly darkened vertical strips. There is, however, no discernible pattern between the rows: the rows are just ordered by their ticker symbols, so neighboring positions don't indicate any relationship between price histories.<br /><br />One simple way to improve the heatmap is to group stocks with similar price histories together. This is the task of most clustering algorithms. We can do this pretty easily in VisuMap. 
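The grouping step can also be sketched outside VisuMap. The following NumPy snippet (a minimal basic k-means of my own, not VisuMap's implementation) clusters the rows of a data matrix and returns an index order that stacks each cluster's rows together:

```python
import numpy as np

def kmeans_row_order(X, k=8, iters=50, seed=0):
    """Cluster the rows of X with a basic k-means, then return row indices
    grouped by cluster, so that similar rows end up stacked together."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each row to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    # Stable sort by cluster label: rows of one cluster become adjacent.
    return np.argsort(labels, kind="stable")

# reordered = X[kmeans_row_order(X)]  # heatmap rows grouped cluster by cluster
```

As in the heatmap below, this groups similar rows but imposes no order across clusters, nor among the rows inside a cluster.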
The following heatmap shows the same dataset after we have applied the k-means clustering algorithm to the rows and reshuffled the rows according to the cluster assignments:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-eQPfnNZCGcQ/UnKamdDYyXI/AAAAAAAAJyk/uewnab76P2A/s1600/SP500b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://1.bp.blogspot.com/-eQPfnNZCGcQ/UnKamdDYyXI/AAAAAAAAJyk/uewnab76P2A/s400/SP500b.png" width="390" /></a></div><br />The k-means algorithm has grouped the rows into about 8 clusters; we can see that the rows within each group have, more or less, similar values. However, the clusters are randomly ordered, as are the rows within each cluster. The clustering algorithm does not provide ordering information for individual data points.<br /><br />Can we do a better job than the clustering algorithm? To answer this question, let's recall what we actually tried to do above: we want to reorder the rows of the heatmap, so that neighboring rows are similar to each other. This kind of task is basically also the task of MDS (Multidimensional Scaling) algorithms, which aim to map high dimensional data to low dimensional spaces while preserving similarity. Particularly in our case, the low dimensional space is just the one-dimensional space, whereas MDS has in general been used to create 2- or 3-dimensional maps.<br /><br />Thus, to improve our heatmap, we apply an MDS algorithm to our dataset to map it to a <i>one-dimensional space</i>, then use the coordinates in that one-dimensional space to re-order the rows in the heatmap. For this test, I have adapted the RPM mapping (<a href="http://www.visumap.com/index.aspx?p=Resources/RpmOverview">Relational Perspective Map</a>) algorithm to reduce the dimensionality of the dataset from 50 to 1, then used the one-dimensional RPM map to order the rows in the heatmap. 
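RPM itself is proprietary, but the sorting idea can be illustrated with any one-dimensional embedding. The following stand-in (my own, explicitly not the RPM algorithm) sorts rows by their coordinate on the first principal component, the optimal linear 1-D embedding:

```python
import numpy as np

def mds_sort(X):
    """Return row indices ordered along a 1-D embedding of the rows of X.

    As a stand-in for RPM, the embedding used here is the projection onto the
    first principal component; sorting rows by this coordinate makes
    neighboring heatmap rows similar to each other."""
    Xc = X - X.mean(axis=0)
    # The first right singular vector is the direction of maximal variance.
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return np.argsort(Xc @ vt[0])

# heatmap_rows = X[mds_sort(X)]  # rows now change gradually from top to bottom
```

A nonlinear method like RPM can of course find orderings that a linear projection cannot; this sketch only demonstrates the "reduce to one dimension, then sort" principle.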
The following picture shows the resulting heatmap:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-sWYE6A8sDkE/UnKforXeoHI/AAAAAAAAJy0/3gzBhG9K4LM/s1600/SP500c.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-sWYE6A8sDkE/UnKforXeoHI/AAAAAAAAJy0/3gzBhG9K4LM/s320/SP500c.png" width="305" /></a></div><br />We notice this heatmap is much smoother than the previous two. In fact, we can see that the data rows gradually change from bright to dark as they go from top to bottom. More importantly, we don't see a clear cluster structure among the rows; this is in clear contrast to the k-means algorithm, which indicated 8 or 10 clusters. In this case, it is clear that the k-means algorithm delivered <i>wrong</i>, artificial cluster information.<br /><br />Now, a few words about the implementation. The multidimensional sorting has been implemented with the RPM algorithm, since RPM allows gradually shrinking the size of the map by starting with a 2D or 3D map. This special feature enables RPM to search for an optimal 1D map configuration in a large solution space. We could easily adapt other MDS algorithms to perform multidimensional sorting, but with those algorithms we would probably have to restrict ourselves solely to the one-dimensional space. A few years ago, I already blogged about this kind of <a href="http://jamesxli.blogspot.ca/2007/07/rpm-curled-space-and-dimensionality.html">dimensionality reduction by shrinking dimensions with some simple illustrations</a>.<br /><br />Since high dimensional datasets are generally not sequentially ordered, there is in general no unique way to order the data. What we want to do is just find a solution that preserves as much similarity information as possible in the ordering. Thus, an important question arises here: How do we validate the sorting results? 
How do we compare two such sorting algorithms? A simple way to validate the sorting result is illustrated in the following picture:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-gVEwbS5do00/UnQ1xyYPOiI/AAAAAAAAJzE/KoQWBKoEKzc/s1600/MDSSortinge.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="304" src="http://2.bp.blogspot.com/-gVEwbS5do00/UnQ1xyYPOiI/AAAAAAAAJzE/KoQWBKoEKzc/s640/MDSSortinge.jpg" width="640" /></a></div><div class="separator" style="clear: both; text-align: left;">So, in order to test the sorting algorithm, we first take a simple photo and convert it to a high dimensional dataset that represents the gray-scale levels of the photo. We then randomize the order of the rows in the data table. Then, we apply the sorting algorithm to the randomized data table, and check how well the original photo can be recovered. I have done this test with some ordinary photos; the multidimensional sorting was able to recover the original photo pretty well (see also the short attached video), as long as enough time is given to the learning process.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">We have described an application of MDS algorithms for heatmaps, using them to reduce the data dimension to a single dimension. We might go a step further and use MDS to reduce the data dimension to 0, so that data points are mapped to a finite discrete point set. In this way, we would have achieved the service of clustering algorithms. 
If this works, clustering algorithms can be considered a special case of MDS algorithms; and MDS algorithms might lead us to a group of new clustering algorithms.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">The multidimensional sorting service has been implemented in VisuMap version 4.0.895 as an integrated service. The following short video shows how to use this service for the sample datasets mentioned above.</div><div class="separator" style="clear: both; text-align: left;"><br /></div></div><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/88xX4cpEJEw?hd=1&vq=hd720" width="640"></iframe> <br /><br />A more practical application of MDS sorting is microarray data analysis, where heatmaps are often used to visualize the expression levels of a collection of <i>genes </i>for a collection of <i>samples</i>. Normally, neither the genes nor the samples are ordered according to their expression similarity, so those heatmaps often appear rather random (even after applying additional grouping with a hierarchical clustering algorithm). 
The following picture shows, for instance, a heatmap for the expressions of 12000 genes for about 200 samples:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-Pp89tDWyVSs/UpDBEkd_l-I/AAAAAAAAJ0s/6OcYlJ9ZrT0/s1600/MicroarrayHm.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="472" src="http://2.bp.blogspot.com/-Pp89tDWyVSs/UpDBEkd_l-I/AAAAAAAAJ0s/6OcYlJ9ZrT0/s640/MicroarrayHm.png" width="640" /></a></div><br />After applying MDS sorting to both the rows and columns of the above heatmap, it looks as follows:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-CW_OrOgZT0M/UpDBI94nheI/AAAAAAAAJ04/MNU8AqMoakY/s1600/MicroarrayHmSorted.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="475" src="http://1.bp.blogspot.com/-CW_OrOgZT0M/UpDBI94nheI/AAAAAAAAJ04/MNU8AqMoakY/s640/MicroarrayHmSorted.png" width="640" /></a></div><br />The above heatmap theoretically contains the same expression information. But, just by re-ordering the rows and columns, it shows more discernible structure and information for human perception.<br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-89242936544422122702013-11-01T07:09:00.001-07:002013-11-01T07:09:23.442-07:00VisuMap version 4.0.895 released<div dir="ltr" style="text-align: left;" trbidi="on">We have just released VisuMap version 4.0.895. 
The following is a list of major changes:<div><ul><li>Implemented the multidimensional sorting and on-the-fly mapping & clustering services.</li><li>Extended the RPM algorithms with a dark-pressure mechanism.</li><li>Reduced memory usage of the application significantly, so that it can load much larger datasets.</li><li>Various defect fixes and enhancements</li></ul><div><br /></div></div></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-29663463203171954062013-10-19T11:07:00.001-07:002014-09-24T11:02:57.020-07:00On the shape of pictures<div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on">This blog post is going to talk about a simple way to convert normal 2D pictures or photos to high dimensional datasets, and then use MDS tools to analyze those data. At the end, I'll mention some potential uses of this type of analysis for data from practical cases.<br /><br />Assuming that we have a picture that is represented by an M<span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">x</span>N matrix of pixels, the pixel matrix can simply be converted to a matrix of real numbers by replacing each pixel with its intensity (or brightness) using the formula: a = 0.299*R + 0.587*G + 0.114*B; where R, G, B are the red, green and blue intensities of the pixel. We consider this M<span style="font-family: Arial, Helvetica, sans-serif; font-size: x-small;">x</span>N matrix as an N-dimensional dataset with M data points, where each data point is just a row vector of the matrix.<br /><br />Having obtained such a high dimensional dataset for our picture, we can then visualize this dataset with various MDS methods. 
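The conversion step is straightforward; here is a minimal Python sketch using the standard Rec. 601 luminance weights (0.299, 0.587, 0.114) — the function name is mine, for illustration only:

```python
def picture_to_dataset(pixels):
    """Convert an M x N matrix of (R, G, B) pixels into an N-dimensional
    dataset of M row vectors, one gray-scale intensity per pixel."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in pixels]
```

Each row of the result is one data point; the dataset can then be fed to any MDS method.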
The following picture shows visualizations of a sample photo with three different MDS methods, namely PCA, tSNE and RPM:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-AS8M38u6iZA/Ul7sQwnZzZI/AAAAAAAAJw4/ZJk4c7hUlVM/s1600/PicMDS1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="640" src="http://3.bp.blogspot.com/-AS8M38u6iZA/Ul7sQwnZzZI/AAAAAAAAJw4/ZJk4c7hUlVM/s640/PicMDS1.jpg" width="612" /></a></div>We see that all three methods reveal, more or less, a serial structure among the data points (i.e. rows of the picture). When using MDS methods to visualize high dimensional data, the first question we ask is often: are the maps reliable? Asked differently: do those geometrical shapes in the maps reflect some nature of the initial dataset, or are they just random results of the algorithms? To answer this question, I have run RPM and tSNE multiple times; all runs produced, more or less, similar maps (the PCA algorithm is deterministic and will therefore always produce the same 2D map). The following picture shows two more maps produced by the RPM and tSNE algorithms:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-gKd9QBmTxtg/Ul7xJeIp9-I/AAAAAAAAJxI/7b6tUy7rHyM/s1600/PicMDS2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="382" src="http://3.bp.blogspot.com/-gKd9QBmTxtg/Ul7xJeIp9-I/AAAAAAAAJxI/7b6tUy7rHyM/s640/PicMDS2.jpg" width="640" /></a></div>Comparing the above two pictures, we see that both the RPM and tSNE algorithms produced repeatable results across different runs despite their non-deterministic nature. Going one step further, we could ask what kind of maps we'll get when we alter the sample dataset slightly. To answer this question, I have created an RPM map and a tSNE map with half of the rows and half of the columns, by choosing every other row and column of the matrix (i.e. 
only with 1/4 of the data matrix). The following picture shows the resulting maps:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-Nr5hlx5BxQ4/Ul_w7qgsPSI/AAAAAAAAJx4/SO7zc2lYjD8/s1600/PicMDS2b.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="328" src="http://3.bp.blogspot.com/-Nr5hlx5BxQ4/Ul_w7qgsPSI/AAAAAAAAJx4/SO7zc2lYjD8/s1600/PicMDS2b.jpg" width="640" /></a></div><br />The above maps show apparent similarity with the maps in the previous picture. Thus, we have here strong evidence that the geometrical characteristics in these maps correlate with characteristics of the initial sample picture. We can see such correlation more clearly when we examine how regions of the map represent sections of the sample picture. The following picture shows how some sections of the tSNE map correspond to sections of the sample picture:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-XKOF5A5orCs/Ul72tunAjII/AAAAAAAAJxY/Gkrd44KwUYI/s1600/PicMDS3.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="298" src="http://4.bp.blogspot.com/-XKOF5A5orCs/Ul72tunAjII/AAAAAAAAJxY/Gkrd44KwUYI/s640/PicMDS3.jpg" width="640" /></a></div><br />We notice that MDS maps topologically have a simple sequential structure, so no information is embedded in the topological structure. MDS maps rather carry information through geometrical characteristics: for instance, a section of smooth curve, a large arch, or a section of points with lower density. Also, two distant sections winding together in a particular way may indicate a special feature of the underlying data. Furthermore, when we use 3D MDS maps, such <i>secondary structure </i>over the sequential structure may reveal much richer information about the high dimensional data. 
Thus, MDS maps may offer new ways to describe and analyze the initial picture.<br /><br />We might ask here: why are the MDS maps of the picture sequential? To answer this question, let's imagine that we scan the picture from top to bottom, line by line, storing each line in a vector; this vector then changes gradually from one line to the next. Thus, the line vector performs a kind of <i>random walk</i> in the high dimensional space, mostly with relatively small steps. Such random-walk-like datasets exist in many areas of data analysis. For instance, a while ago I blogged about <a href="http://jamesxli.blogspot.ca/2010/10/principal-components-analysis-pca-of.html">one such case</a> where the state of the whole stock market was considered as a random-walk process. We notice also that the MDS maps shown above resemble, more or less, those maps in the post <a href="http://jamesxli.blogspot.ca/2011/04/self-similarity-of-high-dimensional.html">self-similarity of high dimensional random-walk</a>.<br /><h3 class="post-title entry-title" style="background-color: white; color: #333333; font-family: Verdana, Arial, sans-serif; font-size: 16px; line-height: 1.1em; margin: 0px; padding: 0px;"></h3><div>The following picture shows the price history of the S&P 500 components for a year, and the corresponding RPM map where each dot represents the state of the stock market on a given day:</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-d0GvIipY9qk/Ul8qZ1ZaFtI/AAAAAAAAJxo/DSPWZVpc43k/s1600/MDSPic4.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="307" src="http://1.bp.blogspot.com/-d0GvIipY9qk/Ul8qZ1ZaFtI/AAAAAAAAJxo/DSPWZVpc43k/s640/MDSPic4.jpg" width="640" /></a></div><div><br /></div>We notice the apparent similarity of the above RPM map with that produced for our sample picture previously. 
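The random-walk picture described above can be checked numerically: in a dataset of this kind, consecutive rows differ by small steps while arbitrary row pairs lie far apart. The sketch below uses synthetic random-walk data as a stand-in (an assumption for illustration; VisuMap itself is not involved):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a scanned photo: each row changes only slightly
# from the previous one, modeled here as a cumulative random walk.
steps = rng.normal(scale=0.1, size=(300, 64))
rows = np.cumsum(steps, axis=0)

# Mean distance between consecutive scan lines ...
consecutive = np.linalg.norm(np.diff(rows, axis=0), axis=1).mean()
# ... versus mean distance between randomly chosen pairs of lines.
i, j = rng.integers(0, 300, size=(2, 1000))
random_pairs = np.linalg.norm(rows[i] - rows[j], axis=1).mean()

# Small steps between neighbours, large jumps between arbitrary lines:
# the signature of a random walk in high-dimensional space.
print(consecutive < random_pairs)  # True
```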
Thus, notions and methods developed for analyzing pictures may apply to a broader scope of data.<br /><br />One interesting question arising in view of the above picture is: can we produce a 2D picture (maybe not as nice as the photo of Lena, but with easily recognizable features) from a high dimensional dataset like the S&P 500 dataset? The answer is pretty much positive, and I'll blog about this in another post where I will talk about <i>multidimensional sorting</i>.<br /><br />Finally, the sample picture and maps in this post were produced with VisuMap version 4.0.895, and the sample dataset can be downloaded from the<a href="http://www.visumap.com/rc1.aspx?f=ShapeOfPictures.xvmz&g=res"> link here</a>. The following short video shows some basic steps to apply the mentioned services:<br /><br /></div><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/kubK87Zqy-c?hd=1&vq=hd720" width="640"></iframe> <br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-80592929181082185162013-05-30T10:20:00.000-07:002014-09-24T11:03:47.538-07:00Visual Data Cleansing with VisuMap<div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on">I have just uploaded a short video (7 min) that demonstrates how to do data cleansing visually with VisuMap. The sample dataset used in the video can be downloaded from <a href="http://www.visumap.net/index.aspx?p=Resources/VisuMapDatasets">here</a>.<br /><br /><br /></div><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/Urt2JPgrxUQ?hd=1&vq=hd720" width="640"></iframe></div>James X. 
Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-52375659136578384642013-05-07T09:48:00.001-07:002014-09-24T11:04:16.628-07:00Sorting high dimensional data with RPM maps<div dir="ltr" style="text-align: left;" trbidi="on">I have recently <a href="http://jamesxli.blogspot.ca/2013/05/daily-price-of-s-companies-in-last-5.html">published a dataset</a> that contains the daily prices of the S&P 500 component companies for a period of 5 years. The <i>mountain view </i>of VisuMap provides a pretty good overview of the whole dataset in 3D style. The following short video clip shows the normalized values (so that all price histories start from 1.0) of the dataset:<br /><br /><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/Ih9EmUgXEYA?hd=1&vq=hd720" width="640"></iframe> <br /><br />We notice that the curves are colored using the k-means algorithm, so that stocks with similar development will have common colors. However, these curves are not ordered according to their group, but according to the alphabetic order of the stock ticker, so that curves of different colors are all mixed with each other.<br /><br />A simple way to enhance the 3D view is just to re-order the curves, so that curves of the same group will be located together. We can do this easily with the heat-map view of VisuMap. However, this method does not reorder the groups in a meaningful way, so that closely located groups are not necessarily similar groups.<br /><br />The RPM map provides a better way to sort high dimensional data according to similarities. To do this with VisuMap, we first create a 3D RPM map for the dataset with a very large width (e.g. 1200 pixels) but small height and depth (e.g. 50 pixels), so that the RPM map geometrically resembles a ring. Then we sort the data points according to their x-coordinates in the 3D RPM map (which determine the data points' positions on the ring.)
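The sorting step just described reduces to an argsort on the x-coordinates of the RPM map. Since RPM itself runs inside VisuMap, the sketch below simply assumes the 3D map coordinates are already available as a NumPy array; all names and shapes here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

n_points = 500
# Hypothetical stand-in for a 3D RPM map with a very large width but
# small height and depth, so that it geometrically resembles a ring.
coords = np.column_stack([
    rng.uniform(0, 1200, n_points),  # x: position along the ring
    rng.uniform(0, 50, n_points),    # y
    rng.uniform(0, 50, n_points),    # z
])
data = rng.normal(size=(n_points, 20))  # the original high-dimensional rows

# Re-order the data points by their x-coordinate on the RPM map, so that
# neighbours in the new order are neighbours on the ring.
order = np.argsort(coords[:, 0])
sorted_data = data[order]

print(sorted_data.shape)  # (500, 20)
```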
The nature of the RPM algorithm ensures that closely located data points on the RPM map are similar to each other.<br /><br />The following short video shows how to do this with VisuMap. Notice that we have already created the 3D RPM map. To reorder the data points, we opened a table for the XYZ coordinates, then sorted the table on the X-coordinate column. The mountain view has been configured to automatically re-order its content when a re-ordering event occurs.<br /><br /><br /><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/hsK8tkz1F1k?hd=1&vq=hd720" width="640"></iframe> <br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-62660417074363005552013-05-03T17:05:00.002-07:002014-09-24T11:04:53.843-07:00Daily Price of S&P Companies in last 5 years.<div dir="ltr" style="text-align: left;" trbidi="on">I have just released a new VisuMap dataset that contains the daily prices of S&P 500 companies for the years 2008 to 2012. This dataset, available at <a href="http://www.visumap.com/index.aspx?p=Resources/VisuMapDatasets">our web site</a>, provides convenient access to these stock prices for that period. The dataset also contains scripts which automatically download the historical data from Google's servers.<br /><br />The following is a short video displaying the data in the 3D mountain view:<br /><br /><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/u6qN-9JDm-A?hd=1&vq=hd720" width="640"></iframe> </div>James X. 
Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-669514719549009382013-04-20T09:35:00.001-07:002014-09-24T11:05:12.032-07:00Visual interface for kNN data clustering services in VisuMap<div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on">I have just uploaded a short video tutorial (ca. 10 minutes) that demonstrates the visual interface for the kNN (k-nearest neighbors) data clustering service in VisuMap version 4.0.892. The tutorial shows, with two scenarios, how to combine kNN with supervised and unsupervised classification methods.<br /><br />Although kNN has been one of the simplest and most widely used classification algorithms, there is no software on the market that provides kNN services with a decent visual interface. VisuMap implements a unique visual interface for kNN with the help of MDS mapping and linked data views.<br /><br /><br /></div><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/-kL5wCViR6s?hd=1&vq=hd720" width="640"></iframe></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-58633153672924066312013-04-18T17:02:00.002-07:002014-09-24T11:05:41.426-07:00Azimuthal projection for RPM on projective spaces<div dir="ltr" style="text-align: left;" trbidi="on">Among the manifolds used by the relational perspective map (RPM), the 2-dimensional real projective plane (P2) has been one of my favorite spaces. As a visualization space for high dimensional data, P2 works similarly to the 2-dimensional flat torus (T2) used initially by RPM. Both T2 and P2 provide a boundaryless view of data by connecting the opposite sides of the space. Both spaces are completely isometric and symmetric under shifts and rotations.<br /><br />P2 has, however, one advantage over T2 in that P2 is isotropic whereas T2 is not. 
This means that P2 looks the same in all directions, whereas T2 treats different directions differently. For instance, the diagonal directions of T2 usually align with the larger portion of the data, as I blogged <a href="http://jamesxli.blogspot.ca/2008/04/villarceau-circles-and-diagonal.html">about before</a>. P2 doesn't have such space-specific artifacts: P2 maps are usually symmetric in all directions and can be rotated freely.<br /><br />On the other hand, T2 does have a significant advantage over P2 in that T2 is flat but P2 is not: T2 maps are represented as rectangular maps, whereas P2 maps live on a semi-sphere. This makes P2 maps harder to explore on conventional flat media like paper or a computer screen. VisuMap has implemented a special <i>sphere view</i> to facilitate the exploration of spherical maps, but the rectangular T2 maps are still far easier to explore than the sphere view.<br /><br />The problem of representing a spherical surface on a flat map is, of course, quite old. A plethora of methods have been invented in the past to create flat maps of the spherical earth surface. Looking into the method list at the page <a href="http://en.wikipedia.org/wiki/Map_projection#Azimuthal_.28projections_onto_a_plane.29">Map Projection</a>, I have picked the <b>azimuthal projection</b> to project the semi-spherical P2 maps to the flat plane. This additional projection has been implemented in the latest version of VisuMap, 4.0.892, with which you can make a disk-like snapshot of a spherical P2 map in the sphere view. 
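For concreteness, the standard azimuthal equidistant projection of the upper semi-sphere onto the plane can be sketched as below. This is the textbook construction, not necessarily the exact variant implemented in VisuMap:

```python
import numpy as np

def azimuthal_project(points):
    """Azimuthal equidistant projection of points on the upper unit
    semi-sphere (z >= 0) onto the flat plane: the planar radius equals
    the geodesic distance from the north pole, so the equator maps to
    a circle of radius pi/2."""
    points = np.asarray(points, dtype=float)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.arccos(np.clip(z, -1.0, 1.0))    # arc length from the pole
    rho = np.hypot(x, y)                    # distance from the z-axis
    safe_rho = np.where(rho > 0, rho, 1.0)  # avoid division by zero at the pole
    scale = np.where(rho > 0, r / safe_rho, 0.0)
    return np.column_stack([x * scale, y * scale])

# The north pole maps to the origin; an equator point lands at radius pi/2.
flat = azimuthal_project([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
print(flat)
```

Radial distances from the pole are preserved exactly, while areas near the boundary (the equator) are stretched tangentially, which is the stretching effect the post discusses.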
The following picture illustrates how VisuMap maps data from high dimensional space to P2 and then to the flat plane with the azimuthal projection:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-iDIzS9QYRH8/UXBuKmUd63I/AAAAAAAAJHU/3Bn_6bTPMLI/s1600/AzhimutalProjection.PNG" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="188" src="http://2.bp.blogspot.com/-iDIzS9QYRH8/UXBuKmUd63I/AAAAAAAAJHU/3Bn_6bTPMLI/s1600/AzhimutalProjection.PNG" width="400" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div><div class="separator" style="clear: both; text-align: left;">We notice that the azimuthal projection is a non-linear projection; it stretches areas near the boundary of the semi-sphere somewhat. This kind of non-linear stretch will mathematically induce non-zero curvature on the flat disk, so that the total curvature of the stretched disk equals that of the semi-sphere (as the Gauss-Bonnet theorem dictates.) The following short screen-cast shows how VisuMap maps the full sphere onto the semi-spherical P2 space. The RPM algorithm will split the sphere into 4 pieces. Notice how these fragments stretch when rotating them from the center to the boundary.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/Psc3Iimkhys?hd=1&vq=hd720" width="640"></iframe> <br /><br />Going one step further, we have extended the azimuthal projection to project the 3-dimensional projective space (P3) to 3D flat space, where P3 is realized as a semi-sphere in 4-dimensional space. In VisuMap, such a 3D map is called a projective ball (where opposite points on the surface are considered glued together.) 
The following screen-cast shows how VisuMap creates an RPM-3P map for the S&P 500 dataset:<br /><br /><iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="http://www.youtube.com/embed/pqTk2t8sH3A?hd=1&vq=hd720" width="640"></iframe> <br />Notice that the RPM 3P map actually resides in 4D space. We have developed a special view that, in addition to rotation among the first three dimensions, also allows rotation between the first three dimensions and the fourth dimension (active while pressing and holding the control key.)<br /><br /><br /><br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-83454603800802664232013-03-16T14:17:00.001-07:002014-04-15T07:37:25.803-07:00On the weighted Hamming distance<div dir="ltr" style="text-align: left;" trbidi="on"><a href="http://en.wikipedia.org/wiki/Hamming_distance">Hamming distance</a> is a simple distance function to impose a metrical structure over discrete non-numerical data. Hamming distance has been widely used to cluster or visualize discrete data. In VisuMap, the Hamming distance has so far been the only distance function for discrete data. In this note I'll introduce the weighted Hamming distance (WHD) as an extension of the Hamming distance. After a brief review of the original Hamming distance, I'll describe WHD and demonstrate its strength with two sample datasets.<br /><br />Let's first consider the one-dimensional case. Let Ω be a set of data points, potentially with duplicates. Ω may represent, for instance, the hair color or blood type of a group of patients. 
The Hamming distance between two data points <em>x</em> and <em>y</em> is simply defined as:<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-DWkZVv36vFU/U01D5VQs4WI/AAAAAAAAKRY/Lu8vMIULc8I/s1600/HammingDistance.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-DWkZVv36vFU/U01D5VQs4WI/AAAAAAAAKRY/Lu8vMIULc8I/s1600/HammingDistance.png" /></a></div><div class="separator" style="clear: both; text-align: center;"><br /></div>The weighted Hamming distance (WHD) is defined as:<br /><div class="separator" style="clear: both; text-align: center;"></div><div class="separator" style="clear: both; text-align: center;"><a href="http://4.bp.blogspot.com/-ZW4MfLP2RMw/U01ECqb7bMI/AAAAAAAAKRg/bRfMBdUeS8A/s1600/WeightedHamming.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://4.bp.blogspot.com/-ZW4MfLP2RMw/U01ECqb7bMI/AAAAAAAAKRg/bRfMBdUeS8A/s1600/WeightedHamming.png" /></a></div><div class="separator" style="clear: both; text-align: center;"><span style="text-align: left;">where </span><em style="text-align: left;">p</em><span style="text-align: left;">(</span><em style="text-align: left;">x</em><span style="text-align: left;">) is the frequency of the value </span><em style="text-align: left;">x</em><span style="text-align: left;"> in Ω. |Ω| is the number of data points in Ω. 
</span><i style="text-align: left;">w</i><span style="text-align: left;">(</span><em style="text-align: left;">x,y</em><span style="text-align: left;">) is an extension of </span><i style="text-align: left;">d</i><span style="text-align: left;">(</span><i style="text-align: left;">x</i><span style="text-align: left;">,</span><i style="text-align: left;">y</i><span style="text-align: left;">) in the sense that </span><i style="text-align: left;">d</i><span style="text-align: left;">(</span><i style="text-align: left;">x</i><span style="text-align: left;">, </span><i style="text-align: left;">y</i><span style="text-align: left;">) < </span><i style="text-align: left;">d</i><span style="text-align: left;">(</span><i style="text-align: left;">x</i><span style="text-align: left;">', </span><i style="text-align: left;">y</i><span style="text-align: left;">') always leads to </span><i style="text-align: left;">w</i><span style="text-align: left;">(</span><i style="text-align: left;">x</i><span style="text-align: left;">, </span><i style="text-align: left;">y</i><span style="text-align: left;">) < </span><i style="text-align: left;">w</i><span style="text-align: left;">(</span><i style="text-align: left;">x</i><span style="text-align: left;">',</span><i style="text-align: left;">y</i><span style="text-align: left;">'). On the other side, </span><i style="text-align: left;">w</i><span style="text-align: left;">(,) is actually not a distance function in strict terms of mathematics, since </span><i style="text-align: left;">w</i><span style="text-align: left;">(,) does not fulfill the requirement </span><i style="text-align: left;">w</i><span style="text-align: left;">(</span><i style="text-align: left;">x,x</i><span style="text-align: left;">) ≡ 0.</span></div><br />We notice that <i>d</i>(,) takes the value 0 and 1, whereas <i>w</i>(,) can take any value from 1 to |Ω|. 
<i>d</i>(,) only captures information about equality, whereas <i>w</i>(,) captures, in addition to equality, the frequency of each distinct discrete value. In particular, <i>w</i>(<i>x</i>,<em>y</em>) yields a smaller distance if the value <i>x</i> is infrequent in Ω. In terms of our example with patients as data points, <i>w</i>(,) will ascribe larger closeness (i.e. smaller distance) to a pair of patients if they share a rare property value, say a rare blood type. This property makes WHD focus more on rare property values.<br /><br />In the multidimensional case, our data points are assumed to come from a multidimensional space <b><i>x </i></b>= (x<sub>1</sub>, x<sub>2</sub>...x<sub>n</sub>)∈Ω<sub>1</sub>×Ω<sub>2</sub>×...×Ω<sub>n</sub>. Let <b><i>y = </i></b>(y<sub>1</sub>, y<sub>2</sub>...y<sub>n</sub>) be a second data point, and let <i>w</i><sub>k </sub>be the weighted Hamming distance between <i>x</i><sub>k</sub> and <i>y</i><sub>k</sub>; then we define the multidimensional WHD between <b><i>x</i></b> and <b><i>y</i></b> as:<br /><div class="separator" style="clear: both; text-align: center;"><a href="http://2.bp.blogspot.com/-3Wg2BRArThI/UUOaawBKkUI/AAAAAAAAJFs/Xawa-V3qdLs/s1600/WeightedHamming2.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://2.bp.blogspot.com/-3Wg2BRArThI/UUOaawBKkUI/AAAAAAAAJFs/Xawa-V3qdLs/s1600/WeightedHamming2.png" /></a></div>We notice that the multidimensional WHD is, up to a constant factor, the <i><a href="http://en.wikipedia.org/wiki/Harmonic_mean">harmonic mean</a></i> of the weighted Hamming distances of the individual dimensions. The harmonic mean, like the normal arithmetic mean, yields a number between the minimal and maximal values in {<i>w</i><sub>1</sub>, <i>w</i><sub>2</sub>, ..., <i>w</i><sub>n</sub>}, except that the harmonic mean is normally shifted towards the minimal value. 
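The definitions above can be put into a small sketch. Since the exact 1-D formula appears only in the image, the `weighted_hamming` function below is one plausible reading consistent with the stated properties (values between 1 and |Ω|, smaller for rarer shared values, and w(x,x) equal to the count of x, hence never 0); it is an assumption for illustration, not VisuMap's verbatim definition. The harmonic-mean combination follows the post directly, up to the constant factor mentioned:

```python
from collections import Counter

# Blood types of a hypothetical patient group (the multiset Ω).
omega = ["A", "A", "A", "A", "A", "A", "B", "B", "AB", "O"]
counts = Counter(omega)  # counts[v] = p(v) * |Ω|

def weighted_hamming(x, y):
    """Assumed 1-D weighted Hamming distance: rare shared values give
    small distances, and w(x, x) = counts[x] is never 0."""
    if x == y:
        return float(counts[x])
    return float(counts[x] + counts[y])  # disjoint counts, so at most |Ω|

def combined_whd(ws):
    """Multidimensional WHD as the harmonic mean of the per-dimension
    distances w_1..w_n (per the post, up to a constant factor)."""
    return len(ws) / sum(1.0 / w for w in ws)

# Two patients sharing the rare type "AB" are much closer than two
# sharing the common type "A":
print(weighted_hamming("AB", "AB"))  # 1.0
print(weighted_hamming("A", "A"))    # 6.0

# The harmonic mean is pulled towards its smallest component, so the
# dimension with the rarest shared value dominates:
print(combined_whd([1.0, 6.0, 6.0]))  # ≈ 2.25
```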
The use of the harmonic mean instead of the squared sum (as used by the Euclidean distance) makes <i>w</i>(,) biased towards smaller values in the set {<i>w</i><sub>1</sub>, <i>w</i><sub>2</sub>, ..., <i>w</i><sub>n</sub>}. This means, in terms of our example with patient data, that we ascribe more importance to those properties which show larger proximity between data points. For instance, if the hair color property is more or less randomly distributed among the patients, but some special blood types occur only rarely, then the blood type property will carry more importance with respect to the harmonic mean.<br /><br />To see the difference between WHD and the normal Hamming distance, we first consider the sample dataset <a href="http://archive.ics.uci.edu/ml/datasets/Adult">Adult Database</a> in the UCI Machine-Learning repository, which collects census information about adults. For the sake of simplicity I just use two properties, <i>education </i>and <i>race</i>, to generate MDS maps. The following are the MDS maps generated with the <a href="http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction#Curvilinear_component_analysis">CCA </a>mapping algorithm with WHD and the normal Hamming distance, on the left and right side respectively:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-pFFRxHDdGEA/UUOfHbBMBwI/AAAAAAAAJF0/1Jbysl5eKC8/s1600/WeightedHamming3a.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-pFFRxHDdGEA/UUOfHbBMBwI/AAAAAAAAJF0/1Jbysl5eKC8/s640/WeightedHamming3a.png" height="352" width="640" /></a></div><br />In the above picture, the left map clearly shows three large clusters occupying the whole data space, whereas the right map only shows a cloud of discrete data points without visual information about the cluster sizes. 
<br /><br />As a second example we consider a dataset in <a href="http://jamesxli.blogspot.ca/2010_07_01_archive.html">Haplotype Analysis</a>. This dataset contains about 2000 data points, each comprising the haplotypes of about 160 SNPs. The following picture shows the MDS maps generated with the <a href="http://jamesxli.blogspot.ca/2008/08/new-mapping-algorithm-t-sne.html">tSNE </a>algorithm with WHD and the normal Hamming distance, on the left and right side respectively:<br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="http://3.bp.blogspot.com/-RXHQQwuHX60/UUOfKVfdi3I/AAAAAAAAJF8/wG4YKnRcAPw/s1600/WeightedHamming3b.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://3.bp.blogspot.com/-RXHQQwuHX60/UUOfKVfdi3I/AAAAAAAAJF8/wG4YKnRcAPw/s640/WeightedHamming3b.png" height="334" width="640" /></a></div><br />We notice that the map on the left side reveals about 8 clusters with different sizes, whereas the map on the right side (generated with the Hamming distance) shows four more or less symmetrical clusters. The map generated with WHD shows more structure and complexity, but it is up to the scientist to verify the relevance of that extra structural information.<br /><br />The weighted Hamming distance has been implemented in VisuMap 4.0 as a new metric for enumerated data. The above sample maps were generated with VisuMap version 4.0.891.<br /></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0tag:blogger.com,1999:blog-710383708872670121.post-38340625128227873462013-03-08T09:03:00.002-08:002013-03-08T09:03:18.422-08:00VisuMap 4.0 Released<div dir="ltr" style="text-align: left;" trbidi="on">We have just released VisuMap 4.0 with a new price list for our software products and services. This release is a major upgrade from the previous release v3.5 (released about two years ago). 
Main changes in this release include:<br /><br /><ul style="text-align: left;"><li>Upgraded the .Net Runtime library from version 2.0 to 4.0. Some new features in .Net 4.0 significantly simplified the internal implementation and most likely also improved performance and stability. For example, the support for generic covariant lists greatly sped up data processing.<br /></li><li>Implemented new services to analyze tables with non-numerical data. We have, for instance, introduced the weighted Hamming distance to geometrize multidimensional enumerated data. These features reflect our R&D over the past year, which has moved gradually towards the analysis of non-numerical data. We'll talk about this simple and interesting metric in later posts.<br /></li><li>We have increased the price for our products and services by 10% to 15%. This is the first price increase in the last two years. </li></ul><div>For more information about VisuMap 4.0 please see the "What's new" section in the product's on-line help and our web site.</div><div><br /></div><div><br /></div></div>James X. Lihttps://plus.google.com/111106647779200471510noreply@blogger.com0