<?xml version="1.0" encoding="UTF-8" standalone="no"?><?xml-stylesheet href="http://www.blogger.com/styles/atom.css" type="text/css"?><rss xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" version="2.0"><channel><title>Engineering Insights                           </title><description>Get an Insight into the world of engineering. Get to know about the trending topics in the field of engineering.</description><managingEditor>noreply@blogger.com (Manu Reghukumar)</managingEditor><pubDate>Wed, 28 Aug 2024 14:06:49 +0530</pubDate><generator>Blogger http://www.blogger.com</generator><openSearch:totalResults xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">90</openSearch:totalResults><openSearch:startIndex xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">1</openSearch:startIndex><openSearch:itemsPerPage xmlns:openSearch="http://a9.com/-/spec/opensearchrss/1.0/">25</openSearch:itemsPerPage><link>https://www.engineeringinsights.in/</link><language>en-us</language><itunes:explicit>no</itunes:explicit><itunes:subtitle>Get an Insight into the world of engineering. 
Get to know about the trending topics in the field of engineering.</itunes:subtitle><itunes:category text="Technology"><itunes:category text="Tech News"/></itunes:category><itunes:category text="Education"><itunes:category text="Training"/></itunes:category><itunes:category text="Society &amp; Culture"/><itunes:owner><itunes:email>insightsonengineering@gmail.com</itunes:email></itunes:owner><item><title>This robot will check your vital signs</title><link>https://www.engineeringinsights.in/2021/12/this-robot-will-check-your-vital-signs.html</link><category>Medical Technology</category><category>Robotics</category><category>Trending Technologies</category><pubDate>Wed, 8 Dec 2021 06:54:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-6366615196545986664</guid><description>&lt;p&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDo2Axmg-jTIS-fSQPokVjVqSl5oW0kiAi4SpLoSObgDl63cztnvd6WIrZf56ydPSOIhP2YNNxUEekdQ6mUPBAYIy7IxKJYWMHp8fkXP8p8OuKXp-XgfuxpocwRIOTiT_XTLs62Vcd1u2u/" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img data-original-height="402" data-original-width="768" height="336" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDo2Axmg-jTIS-fSQPokVjVqSl5oW0kiAi4SpLoSObgDl63cztnvd6WIrZf56ydPSOIhP2YNNxUEekdQ6mUPBAYIy7IxKJYWMHp8fkXP8p8OuKXp-XgfuxpocwRIOTiT_XTLs62Vcd1u2u/w640-h336/image.png" width="640" /&gt;&lt;/a&gt;&lt;span style="text-align: left;"&gt;Spot, the robot-dog, has been used in a hospital trial to triage potential Covid-19 patients. The intriguing thing is that patients seemed to be at ease with it. 
Image credit: Boston Dynamics&lt;/span&gt;&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Boston Dynamics’ Spot robot looks like a dog (at least like a mechanical dog) and was designed to operate in a variety of complex environments, with “complex” meaning uneven pavement cluttered with “stuff” (including people). This is pretty challenging for a robot: it requires both awareness of what is going on around it and the capability to move around avoiding obstacles, taking stairs, even jumping over a barrier.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Spot is pretty good at this (watch the clip): it can walk on slippery pavements, go up a flight of stairs, find its way around obstacles and avoid bumping into people.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;If you ever stop to consider the inside of a hospital, you can easily perceive how complex it is to move around: carts left in the way, nurses dashing here and there,… Just the kind of place where a robot would have a challenging time.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;That’s exactly the place where Spot might shine! A joint team from MIT, Boston Dynamics and Brigham and Women’s Hospital has set out to test both Spot’s capabilities in a hospital context and its acceptance by people (patients, medical staff and visitors). And not just to see how it can blend in, but to test how much it can help.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;They have equipped Spot with sensors to check patients’ vital signs, placed a tablet where a dog’s head would be, and use the screen and the camera to let medical staff communicate with patients. 
One application tested was in the triage of incoming patients to assess Covid-19 infection, thus cutting down potentially dangerous exposure.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Spot is equipped with 4 video cameras and temperature sensors, able to detect pulse, breathing and blood oxygen saturation from as far as 2 meters from the patient. This distance is important to avoid contamination (although Spot is frequently sterilised with ultraviolet light).&lt;/p&gt;&lt;p style="text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Additionally, the Spot-doctor/nurse can move around the hospital making rounds to check vital signs, including the look of patients, using AI-based image recognition software that can spot visual signs of problems by looking at a patient’s face.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Interestingly, Spot proved helpful, which was easy to predict, and it seems that people accepted its presence, with patients looking forward to its rounds and even attempting some chatting. It might be, as researchers were ready to admit, that its acceptance in the hospital was also fostered by the difficult times we are going through, where help, any help, is welcome.&lt;/p&gt;&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
&lt;iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="349" src="https://www.youtube.com/embed/0YvSdbwh41I" title="YouTube video player" width="620"&gt;&lt;/iframe&gt;&lt;div&gt;&lt;br /&gt;&lt;/div&gt;&lt;div&gt;Read the original article &lt;a href="https://cmte.ieee.org/futuredirections/2021/08/19/this-robot-will-check-your-vital-signs/" rel="nofollow" target="_blank"&gt;here&lt;/a&gt;&lt;/div&gt;</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDo2Axmg-jTIS-fSQPokVjVqSl5oW0kiAi4SpLoSObgDl63cztnvd6WIrZf56ydPSOIhP2YNNxUEekdQ6mUPBAYIy7IxKJYWMHp8fkXP8p8OuKXp-XgfuxpocwRIOTiT_XTLs62Vcd1u2u/s72-w640-h336-c/image.png" width="72"/><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>How OEMs and Others Can Evaluate Field Service Management Technology</title><link>https://www.engineeringinsights.in/2021/11/how-oems-and-others-can-evaluate-field.html</link><category>Automation</category><category>Trending Technologies</category><pubDate>Tue, 2 Nov 2021 18:36:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-7760754318610112073</guid><description>&lt;p&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: justify;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/a/AVvXsEh3UHHc3sMG7V6BV2NVY4ZL8i_nYx5FNqmGttk7g07Jl3L9EX0AKOHHRFV1P_QGVzUkZ-MiBOwted4HDyHnf4c78jFZ_0rSEFmdMo3B_nddWd2QY4EGGCf8xI36HB13S_ITUIlahXLz7-hrWqp1t4WfjqLQoz5ACT8MvDlzK-OUPZtjVOZV18LzOdgApg=s500" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="313" data-original-width="500" height="400" 
src="https://blogger.googleusercontent.com/img/a/AVvXsEh3UHHc3sMG7V6BV2NVY4ZL8i_nYx5FNqmGttk7g07Jl3L9EX0AKOHHRFV1P_QGVzUkZ-MiBOwted4HDyHnf4c78jFZ_0rSEFmdMo3B_nddWd2QY4EGGCf8xI36HB13S_ITUIlahXLz7-hrWqp1t4WfjqLQoz5ACT8MvDlzK-OUPZtjVOZV18LzOdgApg=w640-h400" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;The field service market lies at the intersection of customer service and support software. Providers are responsible for dispatching technicians to remote locations to provide installation, repair or maintenance services for equipment or systems. Field service management (FSM) technology helps providers manage and monitor owned and customer assets to deliver business outcomes and seamless customer experiences. When evaluating FSM technology, assess the following criteria to make an informed decision about your service transformation partner.&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;&lt;span style="font-size: medium;"&gt;&lt;b&gt;Look for consistent growth&lt;/b&gt;&lt;/span&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;As more business and consumer commerce migrates online and the field service industry navigates labor shortages, it is important to review an FSM company’s growth. A technology partner who is continually expanding their offerings and market footprint will better serve customers down the line. The seamless integration of management technology into an organisation’s customer relationship management (CRM) system and other backend programs is necessary for optimal workflows, but can cause high entry barriers, making it more cost-effective and productive to integrate the right system the first time.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;When evaluating service providers, the speed of revenue generation offers insight into growth rates by year. 
Additionally, gauging the market verticals a partner serves can offer a view into the scope of a provider’s portfolio, which can be helpful in determining if they can serve industry objectives.&lt;/p&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="font-size: medium;"&gt;&lt;b&gt;Assess the subcontractor ecosystem&lt;/b&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p style="text-align: justify;"&gt;The key to productive and effective field service is flexibility. Many providers must offer the ability to cover various regions at off-peak hours and service a myriad of job requests that vary in skill level. Given the industry’s continually ageing workforce, it is also important that field service management companies can call on blended workforces and integrate quality contractors into their staff. When choosing an FSM partner, it is essential to work with providers that possess the comprehensive functionality to support the intelligent management of blended workforces, contractor onboarding, schedule optimisation and a network of available services that can be called upon for certain industries and geographies.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Additionally, a good FSM partner will not only coordinate their workforce but also inform and enable technicians to provide the best service. Mobile applications and devices offer GPS tracking, telematics, knowledge management integration and work instruction management. 
Organizations that provide remote expert guidance for technicians and customers in the field through remote video and augmented reality (AR)-based communications systems will keep pace with technology and outlast competitors.&lt;/p&gt;&lt;h3 style="text-align: justify;"&gt;&lt;span style="font-size: medium;"&gt;&lt;b&gt;Evaluate the product line&lt;/b&gt;&lt;/span&gt;&lt;/h3&gt;&lt;p style="text-align: justify;"&gt;Field service management products operate across multiple channels to provide holistic communication to original equipment manufacturers (OEMs), dispatchers, technicians and customers. Evaluating the digital product offerings will give companies an idea of whether a technology partner can provide end-to-end service and integrate well into established business practices. A quality FSM partner can tailor its products, integration packaging and template configurations to different sizes of customer, different industries and different workforce compositions.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;A strong and varied product line will offer websites, supply chain solutions, third-party service-brokering solutions and analytics that will handle customer relationship data, leverage IoT integration and offer workforce, vendor and product lifecycle management to supply superior service throughout the customer journey.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;The recently published Gartner Magic Quadrant report for Field Service Management shares the latest market and consumer trends affecting the service management landscape and assesses the value of leading field service management companies.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;&lt;i&gt;&lt;span style="font-size: xx-small;"&gt;This article was originally published on automation.com&lt;/span&gt;&lt;/i&gt;&lt;/p&gt;</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" 
url="https://blogger.googleusercontent.com/img/a/AVvXsEh3UHHc3sMG7V6BV2NVY4ZL8i_nYx5FNqmGttk7g07Jl3L9EX0AKOHHRFV1P_QGVzUkZ-MiBOwted4HDyHnf4c78jFZ_0rSEFmdMo3B_nddWd2QY4EGGCf8xI36HB13S_ITUIlahXLz7-hrWqp1t4WfjqLQoz5ACT8MvDlzK-OUPZtjVOZV18LzOdgApg=s72-w640-h400-c" width="72"/><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>Toward next-generation brain-computer interface systems</title><link>https://www.engineeringinsights.in/2021/09/toward-next-generation-brain-computer.html</link><category>Artificial intelligence</category><category>Biomedical</category><category>Biomimics</category><category>Trending Technologies</category><pubDate>Sun, 12 Sep 2021 06:26:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-396619582875483425</guid><description>&lt;h1 style="text-align: left;"&gt;&lt;p class="p1" style="font-family: Arial; font-size: 18px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;Brain-computer interfaces (BCIs) are emerging assistive devices that may one day help people with brain or spinal injuries to move or communicate. 
BCI systems depend on implantable sensors that record electrical signals in the brain and use those signals to drive external devices like computers or robotic prosthetics.&lt;/p&gt;&lt;p class="lead" id="first" style="background-color: white; border-radius: 0px !important; box-sizing: border-box; line-height: 1.4; margin: 0px 0px 20px;"&gt;&lt;/p&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg59Hfwxt8uYt8DGGxAIj0eNlUKFIUgXMT4exAXe7F4oj-68IGhdnWRKEN_2Y0l_gVKJmzV5kA6RKOoViKTbcLDmoRWLyLidNlzkw7fq-Me8mMiAkxt3rfI_c-0vE73m7e8PtnJFaYPMm3C/" style="font-family: &amp;quot;Helvetica Neue&amp;quot;, Helvetica, Arial, sans-serif; font-size: 18px; font-weight: 300; margin-left: 1em; margin-right: 1em;"&gt;&lt;span style="color: black;"&gt;&lt;img data-original-height="360" data-original-width="360" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg59Hfwxt8uYt8DGGxAIj0eNlUKFIUgXMT4exAXe7F4oj-68IGhdnWRKEN_2Y0l_gVKJmzV5kA6RKOoViKTbcLDmoRWLyLidNlzkw7fq-Me8mMiAkxt3rfI_c-0vE73m7e8PtnJFaYPMm3C/s16000/image.png" /&gt;&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;p class="p1" style="font-family: Arial; font-size: 9px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px;"&gt;&lt;span class="s1" style="text-decoration-line: underline;"&gt;Abstract concept illustrating brain-computer interface (stock image).&amp;nbsp;Credit: © Dana.S / stock.adobe.com&lt;/span&gt;&lt;/p&gt;&lt;/div&gt;&lt;div id="text" style="background-color: white; border-radius: 0px !important; box-sizing: border-box;"&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; 
font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;Most current BCI systems use one or two sensors to sample up to a few hundred neurons, but neuroscientists are interested in systems that are able to gather data from much larger groups of brain cells.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;Now, a team of researchers has taken a key step toward a new concept for a future BCI system -- one that employs a coordinated network of independent, wireless microscale neural sensors, each about the size of a grain of salt, to record and stimulate brain activity. 
The sensors, dubbed "neurograins," independently record the electrical pulses made by firing neurons and send the signals wirelessly to a central hub, which coordinates and processes the signals.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;In a study published on August 12 in Nature Electronics, the research team demonstrated the use of nearly 50 such autonomous neurograins to record neural activity in a rodent.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;The results, the researchers say, are a step toward a system that could one day enable the recording of brain signals in unprecedented detail, leading to new insights into how the brain works and new therapies for people with brain or spinal injuries.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: 
normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;"One of the big challenges in the field of brain-computer interfaces is engineering ways of probing as many points in the brain as possible," said Arto Nurmikko, a professor in Brown's School of Engineering and the study's senior author. "Up to now, most BCIs have been monolithic devices -- a bit like little beds of needles. Our team's idea was to break up that monolith into tiny sensors that could be distributed across the cerebral cortex. That's what we've been able to demonstrate here."&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;The team, which includes experts from Brown, Baylor University, University of California at San Diego and Qualcomm, began the work of developing the system about four years ago. The challenge was two-fold, said Nurmikko, who is affiliated with Brown's Carney Institute for Brain Science. The first part required shrinking the complex electronics involved in detecting, amplifying and transmitting neural signals into the tiny silicon neurograin chips. 
The team first designed and simulated the electronics on a computer, and went through several fabrication iterations to develop operational chips.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;The second challenge was developing the body-external communications hub that receives signals from those tiny chips. The device is a thin patch, about the size of a thumb print, that attaches to the scalp outside the skull. It works like a miniature cellular phone tower, employing a network protocol to coordinate the signals from the neurograins, each of which has its own network address. The patch also supplies power wirelessly to the neurograins, which are designed to operate using a minimal amount of electricity.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;"This work was a true multidisciplinary challenge," said Jihun Lee, a postdoctoral researcher at Brown and the study's lead author. 
"We had to bring together expertise in electromagnetics, radio frequency communication, circuit design, fabrication and neuroscience to design and operate the neurograin system."&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;The goal of this new study was to demonstrate that the system could record neural signals from a living brain -- in this case, the brain of a rodent. The team placed 48 neurograins on the animal's cerebral cortex, the outer layer of the brain, and successfully recorded characteristic neural signals associated with spontaneous brain activity.&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;The team also tested the devices' ability to stimulate the brain as well as record from it. Stimulation is done with tiny electrical pulses that can activate neural activity. 
The stimulation is driven by the same hub that coordinates neural recording and could one day restore brain function lost to illness or injury, researchers hope.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;The size of the animal's brain limited the team to 48 neurograins for this study, but the data suggest that the current configuration of the system could support up to 770. Ultimately, the team envisions scaling up to many thousands of neurograins, which would provide a currently unattainable picture of brain activity.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;"It was a challenging endeavor, as the system demands simultaneous wireless power transfer and networking at the mega-bit-per-second rate, and this has to be accomplished under extremely tight silicon area and power constraints," said Vincent Leung, an associate professor in the Department of Electrical and Computer Engineering at Baylor. 
"Our team pushed the envelope for distributed neural implants."&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;There's much more work to be done to make that complete system a reality, but researchers said this study represents a key step in that direction.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;"Our hope is that we can ultimately develop a system that provides new scientific insights into the brain and new therapies that can help people affected by devastating injuries," Nurmikko said.&lt;/p&gt;&lt;p class="p2" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; min-height: 16px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;Other co-authors on the research were Ah-Hyoung Lee (Brown), Jiannan Huang (UCSD), Peter Asbeck (UCSD), 
Patrick P. Mercier (UCSD), Stephen Shellhammer (Qualcomm), Lawrence Larson (Brown) and Farah Laiwalla (Brown). The research was supported by the Defense Advanced Research Projects Agency (N66001-17-C-4013).&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px; text-align: justify;"&gt;&lt;br /&gt;&lt;/p&gt;&lt;/div&gt;&lt;/div&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px 0px 10px; text-align: justify;"&gt;Materials provided by Brown University. Note: Content may be edited for style and length.&lt;/p&gt;&lt;p class="p1" style="font-family: Arial; font-size: 14px; font-stretch: normal; font-variant-east-asian: normal; font-variant-numeric: normal; font-weight: 400; line-height: normal; margin: 0px 0px 10px; text-align: justify;"&gt;Brown University. "Toward next-generation brain-computer interface systems." ScienceDaily. ScienceDaily, 12 August 2021. 
&amp;lt;www.sciencedaily.com/releases/2021/08/210812135910.htm&amp;gt;.&lt;/p&gt;&lt;/div&gt;&lt;/h1&gt;&lt;div&gt;&lt;p style="background-attachment: initial; background-clip: initial; background-image: initial; background-origin: initial; background-position: initial; background-repeat: initial; background-size: initial; color: #0e101a; margin-bottom: 0pt; margin-top: 0pt;"&gt;&lt;/p&gt;&lt;/div&gt;</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg59Hfwxt8uYt8DGGxAIj0eNlUKFIUgXMT4exAXe7F4oj-68IGhdnWRKEN_2Y0l_gVKJmzV5kA6RKOoViKTbcLDmoRWLyLidNlzkw7fq-Me8mMiAkxt3rfI_c-0vE73m7e8PtnJFaYPMm3C/s72-c/image.png" width="72"/><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>The Double-Diamond Model of Design</title><link>https://www.engineeringinsights.in/2021/06/the-double-diamond-model-of-design.html</link><category>Engineering</category><category>Home</category><category>Professional Resources</category><pubDate>Sat, 19 Jun 2021 04:31:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-5751098623613338714</guid><description>&lt;div style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;"&gt;&lt;div style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"&gt;&lt;span style="white-space: pre-wrap;"&gt;Designers often start by questioning the problem given to them: they expand the scope of the problem, diverging to examine all the fundamental issues that underlie it. Then they converge upon a single problem statement. During the solution phase of their studies, they first expand the space of possible solutions, the divergence phase. Finally, they converge upon a proposed solution (Figure 6.1). 
This double diverge-converge pattern was first introduced in 2005 by the British Design Council, which called it the double-diamond design process model.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;img height="327" src="https://lh5.googleusercontent.com/SDtatj0O93WS3B2v6UK7HL93XvDHzu_fFsN1XU53ofNuvHNEckdIBazy--x6LT2tTPAdGe2VOuV_7sLIr8W-z8z8Ats7El23u9swUfCUS3gmKCETBYaQx4-vsBcAr8xwaMhhWSA-" style="margin-left: auto; margin-right: auto; margin-top: 0px;" width="367" /&gt;&lt;/td&gt;&lt;/tr&gt;&lt;tr&gt;&lt;td class="tr-caption" style="text-align: justify;"&gt;&lt;i&gt;The Double-Diamond Model of Design. Start with an idea, and through the initial design research, expand the thinking to explore the fundamental issues. Only then is it time to converge upon the real, underlying problem. Similarly, use design research tools to explore a wide variety of solutions before converging upon one. (Slightly modified from the work of the British Design Council, 2005.)&lt;/i&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/tbody&gt;&lt;/table&gt;&lt;span style="white-space: pre-wrap;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt; text-align: justify;"&gt;&lt;div style="text-align: justify;"&gt;The Design Council divided the design process into four stages: “discover” and “define”—for the divergence and convergence phases of finding the right problem, and “develop” and “deliver”—for the divergence and convergence phases of finding the right solution. The double diverge-converge process is quite effective at freeing designers from unnecessary restrictions to the problem and solution spaces. 
But you can sympathize with a product manager who, having given the designers a problem to solve, finds them questioning the assignment and insisting on travelling all over the world to seek deeper understanding. Even when the designers start focusing upon the problem, they do not seem to make progress, but instead develop a wide variety of ideas and thoughts, many only half-formed, many clearly impractical. All this can be rather unsettling to the product manager who, concerned about meeting the schedule, wants to see immediate convergence.&amp;nbsp;&lt;/div&gt;&lt;div style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"&gt;&lt;span style="white-space: pre-wrap;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"&gt;&lt;span style="white-space: pre-wrap;"&gt;To add to the frustration of the product manager, as the designers start to converge upon a solution, they may realize that they have inappropriately formulated the problem, so the entire process must be repeated (although it can go more quickly this time). This repeated divergence and convergence is important in properly determining the right problem to be solved and then the best way to solve it. It looks chaotic and ill-structured, but it actually follows well-established principles and procedures. How does the product manager keep the entire team on schedule despite the apparently random and divergent methods of designers? Encourage their free exploration, but hold them to the schedule (and budget) constraints. 
There is nothing like a firm deadline to get creative minds to reach convergence.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;&lt;div style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"&gt;&lt;span style="white-space: pre-wrap;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"&gt;&lt;span style="white-space: pre-wrap;"&gt;&lt;i&gt;&lt;br /&gt;&lt;/i&gt;&lt;/span&gt;&lt;/div&gt;&lt;div style="line-height: 1.38; margin-bottom: 0pt; margin-top: 0pt;"&gt;&lt;span style="white-space: pre-wrap;"&gt;&lt;i&gt;Extracted from The Design of Everyday Things by Don Norma&lt;/i&gt;n&lt;/span&gt;&lt;/div&gt;&lt;/div&gt;</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://lh5.googleusercontent.com/SDtatj0O93WS3B2v6UK7HL93XvDHzu_fFsN1XU53ofNuvHNEckdIBazy--x6LT2tTPAdGe2VOuV_7sLIr8W-z8z8Ats7El23u9swUfCUS3gmKCETBYaQx4-vsBcAr8xwaMhhWSA-=s72-c" width="72"/><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>Smartphone Camera Senses Patients' Pulse, Breathing Rate</title><link>https://www.engineeringinsights.in/2021/04/smartphone-camera-senses-patients-pulse.html</link><category>Artificial intelligence</category><category>Trending Technologies</category><pubDate>Sun, 18 Apr 2021 16:35:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-7313918416295546685</guid><description>&lt;p&gt;&lt;/p&gt;&lt;h2 style="clear: both; text-align: center;"&gt;AI app could enable doctors to take contactless vitals during telemedicine visits&lt;/h2&gt;&lt;div class="separator" style="clear: both; text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVHmWrCvanwu4NETu3ZmcD0AMikmpyhy2kJY3MebO-wNfFo8L-OyIAwlKv1Y5BBdnd8VwlkK3UNGcpBqa1LY211LIOTMXrhI_tWithF5wOMdE8zmsmjwJsggX4Z3iiGYMH7gwEfvplMMxB/s1240/Smartphone+Camera+Senses+Patients%2527+Pulse%252C+Breathing+Rate.jpeg" imageanchor="1" 
style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="746" data-original-width="1240" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVHmWrCvanwu4NETu3ZmcD0AMikmpyhy2kJY3MebO-wNfFo8L-OyIAwlKv1Y5BBdnd8VwlkK3UNGcpBqa1LY211LIOTMXrhI_tWithF5wOMdE8zmsmjwJsggX4Z3iiGYMH7gwEfvplMMxB/w663-h400/Smartphone+Camera+Senses+Patients%2527+Pulse%252C+Breathing+Rate.jpeg" width="663" /&gt;&lt;/a&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;&lt;br /&gt;&lt;/div&gt;&lt;div style="text-align: justify;"&gt;Telehealth visits increased dramatically when the pandemic began—by over 4000% in the U.S., by one account. But there’s a limit to what doctors can accomplish during these virtual appointments. Namely, they can’t check patients’ vital signs over the phone.&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;But new technologies in the works could change that by equipping phones with reliable software that can measure a person’s key biometrics. This month at a conference held by the Association for Computing Machinery, researchers presented machine learning systems that can generate a personalized model to measure heart and breathing rates based on a short video taken with a smartphone camera.&amp;nbsp;&amp;nbsp;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;With just an 18-second video clip of a person’s head and shoulders, the algorithm can determine heart rate, or pulse, based on the changes in light intensity reflected off the skin. Breathing rate, or respiration, is gleaned from the rhythmic motion of their head, shoulders and chest.&amp;nbsp;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Daniel McDuff, a principal researcher at Microsoft Research, and PhD student Xin Liu at the University of Washington developed the system. 
“Currently there’s no way to do remote vitals collection except for a very small minority of patients who have medical-grade devices at home,” such as a pulse oximeter to detect heart rate and blood oxygen level, or a blood pressure cuff, says McDuff.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Most people don’t own those devices, so for the vast majority of virtual appointments, patients must arrange separate in-person appointments to get these measurements. “That’s doubly inefficient. It takes twice the amount of time as a typical in-person visit, and with less human interaction,” McDuff says.&lt;/p&gt;&lt;p&gt;&lt;/p&gt;&lt;div style="text-align: justify;"&gt;Video-based software that can collect vitals during a telehealth appointment would greatly streamline virtual health care. Work on this type of technology arose around 2007&lt;/div&gt;&lt;div style="text-align: justify;"&gt;when digital cameras became sensitive enough to pick up small pixel-level changes in skin that indicate blood volume. The field saw a fresh wave of interest after telehealth visits increased during the early part of the COVID-19 pandemic.&lt;/div&gt;&lt;p&gt;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Several groups globally have been developing non-contact, video-based vitals sensing. A group out of Oxford is developing optical remote monitoring of vitals for patients in hospital intensive care units or undergoing kidney dialysis. Rice University researchers are developing a device that monitors vehicle drivers for heart attacks.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Google in February announced that its Android-based health tracking platform Google Fit will measure heart and respiratory rate using the phone’s camera. The user places a finger over the rear-facing camera on the phone to get heart rate, and a video of the user’s face gathers breathing rate. 
The software is meant for wellness purposes rather than medical use or doctor visits.&amp;nbsp;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;The challenge facing researchers in this field is developing technologies that work consistently at a high level of accuracy in real-world settings, where faces and lighting vary. The approach developed by McDuff and Liu aims to address that.&amp;nbsp;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;In their approach, heart rate is determined by measuring light reflected from the skin. “Variations in blood volume influence how light is reflected from the skin,” says McDuff. “So the camera is picking up micro-changes in light intensity and that can be used to recover a pulse signal. From that, we can derive heart rate variation and detect things like arrhythmias.”&lt;/p&gt;&lt;p style="text-align: justify;"&gt;The algorithm must account for variables such as skin colour, facial hair, lighting, and clothing. Those tend to trip up just about any kind of facial recognition technology, in part because the datasets on which machine learning algorithms are trained aren’t representative of our diverse population.&amp;nbsp;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;McDuff’s model faces an added challenge: “Darker skin types have higher melanin, so the light reflectance intensity is going to be lower because more light is absorbed,” he says. That results in a weaker signal-to-noise ratio, making it harder to detect the pulse signal. “So it's about having a representative training set, and there’s also a fundamental physical challenge we need to solve here,” says McDuff.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;To address this challenge, the team developed a system with a personalized machine learning algorithm for each individual. “We proposed a learning algorithm to learn a person’s physiological signals quickly,” says Liu. 
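The core idea of camera-based pulse estimation can be sketched in a few lines. This is an illustrative toy only, not McDuff and Liu's personalized model: it assumes a 30 fps camera, a pre-extracted per-frame skin-brightness trace, and simply peak-picks the spectrum in the plausible heart-rate band.

```python
import numpy as np

# Illustrative toy -- NOT the researchers' personalized model.
# Variations in blood volume change how much light the skin reflects,
# so a per-frame average skin-brightness trace carries a periodic
# pulse component; we recover it by peak-picking the spectrum.
# The 30 fps frame rate and all values below are assumptions.

FPS = 30  # assumed camera frame rate

def heart_rate_bpm(brightness, fps=FPS):
    """Estimate pulse (beats per minute) from a 1-D brightness trace."""
    x = np.asarray(brightness, dtype=float)
    x = x - x.mean()                         # drop the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)  # 40-180 bpm
    return float(freqs[band][np.argmax(spectrum[band])] * 60.0)

# Synthetic 18-second "clip": a 72 bpm pulse buried in sensor noise.
t = np.arange(18 * FPS) / FPS
signal = (0.05 * np.sin(2 * np.pi * (72 / 60) * t)
          + np.random.default_rng(0).normal(0.0, 0.02, t.size))
# heart_rate_bpm(signal) recovers roughly 72 bpm (to FFT-bin resolution)
```

A real system additionally has to segment the skin region, compensate for motion and lighting, and adapt to the individual, which is where the personalized learning comes in.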
The system can provide results with just 18 seconds of video, he says.&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Compared with a standard medical-grade device, the proposed method has a mean absolute error of one to three beats per minute in estimating heart rate, Liu says. This is acceptable in many applications.&amp;nbsp; &amp;nbsp;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;The system isn’t ready for medical use and will need to be validated in clinical trials. To improve the robustness of the system, one approach the team is taking is to train models on computer-generated images. “We can actually synthesize high fidelity avatars that exhibit these blood flow patterns and respiration patterns, and we can train our algorithm on the computer-generated data,” says McDuff.&amp;nbsp;&amp;nbsp;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;The technology could have both medical and fitness applications, the researchers say. In addition to telehealth visits, remote vitals can be useful for people with chronic health conditions who need frequent, accurate biometric measurements.&amp;nbsp;&lt;/p&gt;&lt;p style="text-align: justify;"&gt;Article originally published on&amp;nbsp;&lt;a href="https://spectrum.ieee.org/the-human-os/biomedical/devices/smartphone-camera-senses-patients-pulse-breathing-rate" rel="nofollow" target="_blank"&gt;IEEE Spectrum&lt;/a&gt;&lt;/p&gt;</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhVHmWrCvanwu4NETu3ZmcD0AMikmpyhy2kJY3MebO-wNfFo8L-OyIAwlKv1Y5BBdnd8VwlkK3UNGcpBqa1LY211LIOTMXrhI_tWithF5wOMdE8zmsmjwJsggX4Z3iiGYMH7gwEfvplMMxB/s72-w663-h400-c/Smartphone+Camera+Senses+Patients%2527+Pulse%252C+Breathing+Rate.jpeg" width="72"/><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>Root User in Ubuntu: Important Things You Should 
Know</title><link>https://www.engineeringinsights.in/2020/01/root-user-in-ubuntu-important-things.html</link><category>Opensource</category><category>Tutorial Spot</category><pubDate>Tue, 21 Jan 2020 07:51:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-814348544698234305</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;h1 class="entry-title" itemprop="headline" style="background-color: white; box-sizing: inherit; color: #333333; font-family: &amp;quot;Source Sans Pro&amp;quot;, sans-serif; font-size: 30px; font-weight: 400; line-height: 1.2; margin: 0px 0px 10px;"&gt;
Root User in Ubuntu: Important Things You Should Know&amp;nbsp;&lt;/h1&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvhRYbx2EBgOIBEUBWO3pAst6_y4CXw8YdbDt9L7H5jRCGgbtMKW3OgH6HOmuO6yIffRsyoLBT4iCxvIxUK_l-ZR7-0y1eHk7f57e2DoVaoZ4dtAtW3d35Oe96NBRYi2YO-HWSDTm6cZoL/s1600/linux-sudo-vulnerability.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="380" data-original-width="728" height="334" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvhRYbx2EBgOIBEUBWO3pAst6_y4CXw8YdbDt9L7H5jRCGgbtMKW3OgH6HOmuO6yIffRsyoLBT4iCxvIxUK_l-ZR7-0y1eHk7f57e2DoVaoZ4dtAtW3d35Oe96NBRYi2YO-HWSDTm6cZoL/s640/linux-sudo-vulnerability.jpg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div&gt;
&lt;span style="color: #333333; font-family: &amp;quot;Source Sans Pro&amp;quot;, sans-serif; font-size: 18px;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;
&lt;span style="color: #333333; font-family: &amp;quot;Source Sans Pro&amp;quot;, sans-serif; font-size: 18px;"&gt;When you have just started using Linux, you’ll find many things that are different from Windows. One of those ‘different things’ is the concept of the root user.&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;
&lt;span style="color: #333333; font-family: &amp;quot;Source Sans Pro&amp;quot;, sans-serif; font-size: 18px;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;
&lt;span style="background-color: white; color: #333333; font-family: &amp;quot;Source Sans Pro&amp;quot;, sans-serif; font-size: 18px;"&gt;Get to know about few important things about the root user in Ubuntu from&amp;nbsp;&lt;a href="https://itsfoss.com/root-user-ubuntu/" rel="nofollow" target="_blank"&gt;It's Foss&lt;/a&gt;.&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;
&lt;span style="background-color: white; color: #333333; font-family: &amp;quot;Source Sans Pro&amp;quot;, sans-serif; font-size: 18px;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;
&lt;div style="background-color: white; box-sizing: inherit; color: #333333; font-family: &amp;quot;Source Sans Pro&amp;quot;, sans-serif; font-size: 18px; margin-bottom: 30px; padding: 0px;"&gt;
Follow the links to learn the following from the author's website:&lt;/div&gt;
&lt;ul style="background-color: white; box-sizing: inherit; color: #333333; font-family: &amp;quot;Source Sans Pro&amp;quot;, sans-serif; font-size: 18px; margin: 0px 0px 30px; padding: 0px 0px 0px 40px;"&gt;
&lt;li style="box-sizing: inherit; list-style-type: disc;"&gt;&lt;a href="https://itsfoss.com/root-user-ubuntu/#what-is-root" rel="nofollow" style="background-color: transparent; box-sizing: inherit; color: #00b6ba; transition: color 0.2s ease-in-out 0s, background-color 0.2s ease-in-out 0s;" target="_blank"&gt;Why root user is disabled in Ubuntu&lt;/a&gt;&lt;/li&gt;
&lt;li style="box-sizing: inherit; list-style-type: disc;"&gt;&lt;a href="https://itsfoss.com/root-user-ubuntu/#run-command-as-root" rel="nofollow" style="background-color: transparent; box-sizing: inherit; color: #00b6ba; transition: color 0.2s ease-in-out 0s, background-color 0.2s ease-in-out 0s;" target="_blank"&gt;Using commands as root&lt;/a&gt;&lt;/li&gt;
&lt;li style="box-sizing: inherit; list-style-type: disc;"&gt;&lt;a href="https://itsfoss.com/root-user-ubuntu/#become-root" rel="nofollow" style="background-color: transparent; box-sizing: inherit; color: #00b6ba; transition: color 0.2s ease-in-out 0s, background-color 0.2s ease-in-out 0s;" target="_blank"&gt;Switch to root user&lt;/a&gt;&lt;/li&gt;
&lt;li style="box-sizing: inherit; list-style-type: disc;"&gt;&lt;a href="https://itsfoss.com/root-user-ubuntu/#enable-root" rel="nofollow" style="background-color: transparent; box-sizing: inherit; color: #00b6ba; transition: color 0.2s ease-in-out 0s, background-color 0.2s ease-in-out 0s;" target="_blank"&gt;Unlock the root user&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
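As a quick illustration of the ideas behind those links (assuming an Ubuntu system; the privileged commands appear only in comments, since they require authentication):

```shell
# Root is simply the account with numeric user ID 0:
id -u root          # prints 0

# Ubuntu ships with the root account locked. Instead of logging in as
# root, you run individual commands with elevated privileges via sudo:
#   sudo apt update        # one command as root
#   sudo -i                # interactive root shell
#   sudo passwd root       # unlock root (generally discouraged)

# Your own, non-root numeric ID for comparison:
id -u
```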
&lt;/div&gt;
&lt;div&gt;
&lt;span style="background-color: white; color: #333333; font-family: &amp;quot;Source Sans Pro&amp;quot;, sans-serif; font-size: 18px;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvhRYbx2EBgOIBEUBWO3pAst6_y4CXw8YdbDt9L7H5jRCGgbtMKW3OgH6HOmuO6yIffRsyoLBT4iCxvIxUK_l-ZR7-0y1eHk7f57e2DoVaoZ4dtAtW3d35Oe96NBRYi2YO-HWSDTm6cZoL/s72-c/linux-sudo-vulnerability.jpg" width="72"/><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>Researchers produce first laser ultrasound images of humans</title><link>https://www.engineeringinsights.in/2020/01/researchers-produce-first-laser.html</link><category>Biomedical</category><category>Home</category><category>Medical Technology</category><category>Research</category><category>Trending Technologies</category><pubDate>Thu, 9 Jan 2020 11:59:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-2497079328650182718</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; text-align: start;"&gt;&lt;span style="font-size: large;"&gt;Technique may help remotely image and assess health of infants, burn victims, and accident survivors in hard-to-reach places.&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjP9UhZgT9FTBGjiKr9aa8sN_4vsQlUNljkkmZ_S9eBFWaHA_tIIAvPJe4YFC2A1dk8qF0X3YEnLdtv1GQ4rkDk8QlRdNKdisU-9BR1MOynxLtk0Je0AVnwOcoobznuGXdyFFvzyMdjpVAW/s1600/MIT-Laser-Ultrasound_0.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="426" data-original-width="639" height="425" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjP9UhZgT9FTBGjiKr9aa8sN_4vsQlUNljkkmZ_S9eBFWaHA_tIIAvPJe4YFC2A1dk8qF0X3YEnLdtv1GQ4rkDk8QlRdNKdisU-9BR1MOynxLtk0Je0AVnwOcoobznuGXdyFFvzyMdjpVAW/s640/MIT-Laser-Ultrasound_0.jpg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; text-align: start;"&gt;&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
For most people, getting an ultrasound is a relatively easy procedure: As a technician gently presses a probe against a patient’s skin, sound waves generated by the probe travel through the skin, bouncing off muscle, fat, and other soft tissues before reflecting back to the probe, which detects and translates the waves into an image of what lies beneath.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Conventional ultrasound doesn’t expose patients to harmful radiation as X-ray and CT scanners do, and it’s generally noninvasive. But it does require contact with a patient’s body, and as such, may be limiting in situations where clinicians might want to image patients who don’t tolerate the probe well, such as babies, burn victims, or other patients with sensitive skin. Furthermore, ultrasound probe contact induces significant image variability, which is a major challenge in modern ultrasound imaging.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Now, MIT engineers have come up with an alternative to conventional ultrasound that doesn’t require contact with the body to see inside a patient. The new laser ultrasound technique leverages an eye- and skin-safe laser system to remotely image the inside of a person. When trained on a patient’s skin, one laser remotely generates sound waves that bounce through the body. A second laser remotely detects the reflected waves, which researchers then translate into an image similar to conventional ultrasound.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In a paper published today in the Nature journal Light: Science and Applications, the team reports generating the first laser ultrasound images in humans. The researchers scanned the forearms of several volunteers and observed common tissue features such as muscle, fat, and bone, down to about 6 centimetres below the skin. These images, comparable to conventional ultrasound, were produced using remote lasers focused on a volunteer from half a meter away.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“We’re at the beginning of what we could do with laser ultrasound,” says Brian W. Anthony, a principal research scientist in MIT’s Department of Mechanical Engineering and Institute for Medical Engineering and Science (IMES), a senior author on the paper. “Imagine we get to a point where we can do everything ultrasound can do now, but at a distance. This gives you a whole new way of seeing organs inside the body and determining properties of deep tissue, without making contact with the patient.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Early concepts for non-contact laser ultrasound for medical imaging originated from a Lincoln Laboratory program established by Rob Haupt of the Active Optical Systems Group and Chuck Wynn of the Advanced Capabilities and Technologies Group, who are co-authors on the new paper along with Matthew Johnson. From there, the research grew via collaboration with Anthony and his students, Xiang (Shawn) Zhang, who is now an MIT postdoc and is the paper’s first author, and recent doctoral graduate Jonathan Fincke, who is also a co-author. The project combined the Lincoln Laboratory researchers’ expertise in laser and optical systems with the Anthony group's experience with advanced ultrasound systems and medical image reconstruction.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;b&gt;Yelling into a canyon — with a flashlight&lt;/b&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In recent years, researchers have explored laser-based methods in ultrasound excitation in a field known as photoacoustics. Instead of directly sending sound waves into the body, the idea is to send in light, in the form of a pulsed laser tuned at a particular wavelength, that penetrates the skin and is absorbed by blood vessels.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The blood vessels rapidly expand and relax — instantly heated by a laser pulse then rapidly cooled by the body back to their original size — only to be struck again by another light pulse. The resulting mechanical vibrations generate sound waves that travel back up, where they can be detected by transducers placed on the skin and translated into a photoacoustic image.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
While photoacoustics uses lasers to remotely probe internal structures, the technique still requires a detector in direct contact with the body in order to pick up the sound waves. What’s more, light can only travel a short distance into the skin before fading away. As a result, other researchers have used photoacoustics to image blood vessels just beneath the skin, but not much deeper.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Since sound waves travel further into the body than light, Zhang, Anthony, and their colleagues looked for a way to convert a laser beam’s light into sound waves at the surface of the skin, in order to image deeper in the body.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Based on their research, the team selected 1,550-nanometer lasers, a wavelength that is highly absorbed by water (and is eye- and skin-safe with a large safety margin).&amp;nbsp; As skin is essentially composed of water, the team reasoned that it should efficiently absorb this light, and heat up and expand in response. As it oscillates back to its normal state, the skin itself should produce sound waves that propagate through the body.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The researchers tested this idea with a laser setup, using one pulsed laser set at 1,550 nanometers to generate sound waves, and a second continuous laser, tuned to the same wavelength, to remotely detect reflected sound waves.&amp;nbsp; This second laser is a sensitive motion detector that measures vibrations on the skin surface caused by the sound waves bouncing off muscle, fat, and other tissues. Skin surface motion, generated by the reflected sound waves, causes a change in the laser’s frequency, which can be measured. By mechanically scanning the lasers over the body, scientists can acquire data at different locations and generate an image of the region.&lt;/div&gt;
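As a rough sanity check on the scales involved (a back-of-the-envelope sketch, not a figure from the paper): sound crosses soft tissue at roughly 1,540 m/s, a standard textbook value, so the echo from the reported ~6 cm imaging depth returns in well under a millisecond.

```python
# Back-of-the-envelope check of the acoustic time scales -- not a
# figure from the paper. Sound travels through soft tissue at roughly
# 1540 m/s, so the round trip to a reflector and back is very fast.

SOUND_SPEED_TISSUE = 1540.0  # m/s, typical textbook value

def echo_delay_us(depth_m):
    """Round-trip time, in microseconds, for an echo from depth_m."""
    return 2.0 * depth_m / SOUND_SPEED_TISSUE * 1e6

# Echo from ~6 cm down, the depth the team reports imaging to:
print(round(echo_delay_us(0.06), 1))  # prints 77.9
```

That microsecond-scale round trip is what lets the scanning lasers sample many surface points quickly enough to build up an image.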
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“It’s like we’re constantly yelling into the Grand Canyon while walking along the wall and listening at different locations,” Anthony says. “That then gives you enough data to figure out the geometry of all the things inside that the waves bounced against — and the yelling is done with a flashlight.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;b&gt;In-home imaging&lt;/b&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The researchers first used the new setup to image metal objects embedded in a gelatin mold roughly resembling skin’s water content. They imaged the same gelatin using a commercial ultrasound probe and found both images were encouragingly similar. They moved on to image excised animal tissue — in this case, pig skin — where they found laser ultrasound could distinguish subtler features, such as the boundary between muscle, fat, and bone.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Finally, the team carried out the first laser ultrasound experiments in humans, using a protocol that was approved by the MIT Committee on the Use of Humans as Experimental Subjects. After scanning the forearms of several healthy volunteers, the researchers produced the first fully non-contact laser ultrasound images of a human. The fat, muscle, and tissue boundaries are clearly visible and comparable to images generated using commercial, contact-based ultrasound probes.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The researchers plan to improve their technique, and they are looking for ways to boost the system’s performance to resolve fine features in the tissue. They are also looking to hone the detection laser’s capabilities. Further down the road, they hope to miniaturise the laser setup, so that laser ultrasound might one day be deployed as a portable device.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“I can imagine a scenario where you’re able to do this in the home,” Anthony says. “When I get up in the morning, I can get an image of my thyroid or arteries, and can have in-home physiological imaging inside of my body. You could imagine deploying this in the ambient environment to get an understanding of your internal state.”&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
This research was supported in part by the MIT Lincoln Laboratory Biomedical Line Program for the United States Air Force and by the U.S. Army Medical Research and Materiel Command's Military Operational Medicine Research Program.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: inherit;"&gt;&lt;i&gt;Content credits:&lt;span style="background-color: white; color: #222222; font-size: 15.75px; text-align: left;"&gt;Jennifer Chu&amp;nbsp;&lt;/span&gt;&amp;nbsp;&lt;a href="http://news.mit.edu/" rel="nofollow" style="text-align: left;" target="_blank"&gt;http://news.mit.edu/&lt;/a&gt;&lt;/i&gt;&lt;/span&gt;&lt;/div&gt;
&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjP9UhZgT9FTBGjiKr9aa8sN_4vsQlUNljkkmZ_S9eBFWaHA_tIIAvPJe4YFC2A1dk8qF0X3YEnLdtv1GQ4rkDk8QlRdNKdisU-9BR1MOynxLtk0Je0AVnwOcoobznuGXdyFFvzyMdjpVAW/s72-c/MIT-Laser-Ultrasound_0.jpg" width="72"/><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>Tool predicts how fast code will run on a chip</title><link>https://www.engineeringinsights.in/2020/01/tool-predicts-how-fast-code-will-run-on.html</link><category>Data Science</category><category>Machine-Learning</category><category>Trending Technologies</category><pubDate>Mon, 6 Jan 2020 19:06:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-8891635731227037397</guid><description>&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghynKczLqerQFTejT-YEAv4w1cEAWJ-pQVkiIr_tuXQalDoW4DtJRJ3Bn8AQEbaiZeoVEif5cg0x4DLKPkhZLcUzXdaT58ENInWts5PXjpxUS_khkMqgUCT-muK-UJJuYIBwtzxNO2P5WM/s1600/MIT-Evaluating-Performance_0.jpg" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="426" data-original-width="639" height="425" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghynKczLqerQFTejT-YEAv4w1cEAWJ-pQVkiIr_tuXQalDoW4DtJRJ3Bn8AQEbaiZeoVEif5cg0x4DLKPkhZLcUzXdaT58ENInWts5PXjpxUS_khkMqgUCT-muK-UJJuYIBwtzxNO2P5WM/s640/MIT-Evaluating-Performance_0.jpg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="color: #222222; font-family: Times, Times New Roman, serif;"&gt;&lt;i&gt;The machine-learning system should enable developers to improve computing efficiency in a range of applications.&lt;/i&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, &amp;quot;Times New Roman&amp;quot;, serif;"&gt;MIT researchers have invented a machine-learning tool that predicts how fast computer chips will execute code from various applications.&amp;nbsp;&amp;nbsp;&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;To get code to run as fast as possible, developers and compilers — programs that translate programming language into machine-readable code — typically use performance models that run the code through a simulation of given chip architectures.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;Compilers use that information to automatically optimise code, and developers use it to tackle performance bottlenecks on the microprocessors that will run it. But performance models for machine code are handwritten by a relatively small group of experts and are not adequately validated. As a consequence, the simulated performance measurements often deviate from real-life results.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;In a series of conference papers, the researchers describe a novel machine-learning pipeline that automates this process, making it easier, faster, and more accurate. In a&amp;nbsp;paper&amp;nbsp;presented at the International Conference on Machine Learning in June, the researchers presented Ithemal, a neural-network model that trains on labelled data in the form of “basic blocks” — fundamental snippets of computing instructions — to automatically predict how long it takes a given chip to execute previously unseen basic blocks. Results suggest Ithemal performs far more accurately than traditional hand-tuned models.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;Then, at the November IEEE International Symposium on Workload Characterization, the researchers&amp;nbsp;presented&amp;nbsp;a benchmark suite of basic blocks from a variety of domains, including machine learning, compilers, cryptography, and graphics that can be used to validate performance models. They pooled more than 300,000 of the profiled blocks into an open-source dataset called BHive.&amp;nbsp;During their evaluations, Ithemal predicted how fast Intel chips would run code even better than a performance model built by Intel itself.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;Ultimately, developers and compilers can use the tool to generate code that runs faster and more efficiently on an ever-growing number of diverse and “black box” chip designs.&amp;nbsp;“Modern computer processors are opaque, horrendously complicated, and difficult to understand. It is also incredibly challenging to write computer code that executes as fast as possible for these processors,” says co-author Michael Carbin, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS) and a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “This tool is a big step forward toward fully modelling the performance of these chips for improved efficiency.”&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;Most recently, in a&amp;nbsp;paper&amp;nbsp;presented at the NeurIPS conference in December, the team proposed a new technique to automatically generate compiler optimisations.&amp;nbsp;&amp;nbsp;Specifically, they automatically generate an algorithm, called Vemal, that converts certain code into vectors, which can be used for parallel computing. Vemal outperforms hand-crafted vectorisation algorithms used in the LLVM compiler — a popular compiler used in the industry.&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;Learning from data&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;Designing performance models by hand can be “a black art,” Carbin says. Intel provides extensive documentation of more than 3,000 pages describing its chips’ architectures. But there currently exists only a small group of experts who will build performance models that simulate the execution of code on those architectures.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;“Intel’s documents are neither error-free nor complete, and Intel will omit certain things, because it’s proprietary,” Mendis says. “However, when you use data, you don’t need to know the documentation. If there’s something hidden, you can learn it directly from the data.”&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;To do so, the researchers clocked the average number of cycles a given microprocessor takes to compute basic block instructions — basically, the sequence of boot-up, execute, and shut down — without human intervention. Automating the process enables rapid profiling of hundreds of thousands or millions of blocks.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;Domain-specific architectures&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;In training, the Ithemal model analyses millions of automatically profiled basic blocks to learn exactly how different chip architectures will execute computation. Importantly, Ithemal takes raw text as input and does not require manually adding features to the input data. In testing, Ithemal can be fed previously unseen basic blocks and a given chip and will generate a single number indicating how fast the chip will execute that code.&amp;nbsp;&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;The researchers found Ithemal cut error rates in accuracy —&amp;nbsp;meaning the difference between the predicted speed versus real-world speed —&amp;nbsp;by 50 per cent over traditional hand-crafted models. Further,&amp;nbsp;in their next&amp;nbsp;paper, they showed that&amp;nbsp;Ithemal’s error rate was 10 per cent, while the Intel performance-prediction model’s error rate was 20 per cent on a variety of basic blocks across multiple different domains.&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;The tool now makes it easier to quickly learn performance speeds for any new chip architectures, Mendis says. For instance, domain-specific architectures, such as Google’s new Tensor Processing Unit used specifically for neural networks, are now being built but aren’t widely understood. “If you want to train a model on some new architecture, you just collect more data from that architecture, run it through our profiler, use that information to train Ithemal, and now you have a model that predicts performance,” Mendis says.&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;Next, the researchers are studying methods to make models interpretable. Much of machine learning is a black box, so it’s not really clear why a particular model made its predictions. “Our model is saying it takes a processor, say, 10 cycles to execute a basic block. Now, we’re trying to figure out why,” Carbin says. “That’s a fine level of granularity that would be amazing for these types of tools.”&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;They also hope to use Ithemal to enhance the performance of Vemal even further and achieve better performance automatically.&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif;"&gt;&lt;b&gt;&lt;i&gt;This article was originally published by MIT News. Follow this &lt;a href="http://news.mit.edu/2020/tool-how-fast-code-run-chip-0106" rel="nofollow" target="_blank"&gt;link&lt;/a&gt; to read related articles.&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEghynKczLqerQFTejT-YEAv4w1cEAWJ-pQVkiIr_tuXQalDoW4DtJRJ3Bn8AQEbaiZeoVEif5cg0x4DLKPkhZLcUzXdaT58ENInWts5PXjpxUS_khkMqgUCT-muK-UJJuYIBwtzxNO2P5WM/s72-c/MIT-Evaluating-Performance_0.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Wollongong NSW 2500, Australia</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">-34.4278121 150.89306069999998</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">-34.4540066 150.85272019999996 -34.401617599999994 150.9334012</georss:box><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>Welcome 2020</title><link>https://www.engineeringinsights.in/2020/01/welcome-2020.html</link><pubDate>Sun, 5 Jan 2020 10:40:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-8987181620366797769</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgcJP-oQwHnOdKN__Z_SCCyxEsralvAojYlAb_9JlC754M9LfY6FE1HD2drNs4X2itJd6f4JezWp56xDyh_BZg37ni8hZ_qPJkKRV_rag_7IKaBSfpKObJthPI8i9vPvzZC1ULjwTacrFi/s1600/22434.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="572" data-original-width="1600" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgcJP-oQwHnOdKN__Z_SCCyxEsralvAojYlAb_9JlC754M9LfY6FE1HD2drNs4X2itJd6f4JezWp56xDyh_BZg37ni8hZ_qPJkKRV_rag_7IKaBSfpKObJthPI8i9vPvzZC1ULjwTacrFi/s640/22434.jpg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
&lt;span style="font-size: x-large;"&gt;&lt;br /&gt;&lt;/span&gt;
&lt;span style="font-size: x-large;"&gt;Stay tuned for the fascinating changes at www.engineeringinsights.in&lt;/span&gt;&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgcJP-oQwHnOdKN__Z_SCCyxEsralvAojYlAb_9JlC754M9LfY6FE1HD2drNs4X2itJd6f4JezWp56xDyh_BZg37ni8hZ_qPJkKRV_rag_7IKaBSfpKObJthPI8i9vPvzZC1ULjwTacrFi/s72-c/22434.jpg" width="72"/><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>This is what happens when engineers think too much about Christmas</title><link>https://www.engineeringinsights.in/2019/12/engineering-insights-wishes-all.html</link><category>EngineeersChristmas</category><pubDate>Sun, 8 Dec 2019 11:34:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-8835476903838666718</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;h3 style="clear: both; text-align: center;"&gt;
&lt;br /&gt;&lt;/h3&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;h3 style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDyWZ5AyW_iL2cQ68bzUQTIaOviCwKwslFAbACXgq-Eg1DBAr3y-jtUH6O87rrgly6bEunCK72k2v-VqM-uzPwR5_tRs0MQW061FukRiM7_jJVMghxAlZdx4d2aPuQ_iJcpSuZ8xtM2zv4/s1600/dkbg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="1600" data-original-width="1600" height="640" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDyWZ5AyW_iL2cQ68bzUQTIaOviCwKwslFAbACXgq-Eg1DBAr3y-jtUH6O87rrgly6bEunCK72k2v-VqM-uzPwR5_tRs0MQW061FukRiM7_jJVMghxAlZdx4d2aPuQ_iJcpSuZ8xtM2zv4/s640/dkbg.png" width="640" /&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;div style="text-align: justify;"&gt;
No known species of reindeer can fly. But there are 300,000 species of living organisms yet to be classified, and while most of these are insects and germs, this does not completely rule out flying reindeer, which only Santa has seen.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
There are 2 billion children (under 18) in the world. But since Santa doesn't appear to handle Muslim, Hindu, Buddhist, and Jewish children, that reduces the workload to 15% of the total - 378 million or so. At an average rate of 3.5 children per household, that's 91.8 million homes. One presumes there's at least one good child in each.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Santa has 31 hours of Christmas to work with, thanks to time zones and the rotation of the earth, assuming he travels east to west. This works out to 822.6 visits per second. That is to say that for each Christian household with good children, Santa has 1/1000th of a second to park, hop out of the sleigh, jump down the chimney, fill the stockings, distribute the remaining gifts under the tree, eat the snacks, get back up the chimney, get back in the sleigh, and move on to the next house. Assuming that each of these 91.8 million homes is distributed evenly (which we know to be false, but for the sake of these calculations we will accept), we are now talking about 0.78 miles per household, a total trip of 75 1/2 million miles, not counting bathroom stops.&lt;/div&gt;
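In the spirit of the exercise, the visits-per-second figure can be checked in a couple of lines:

```python
# Checking the arithmetic: 91.8 million homes spread over 31 hours.
homes = 91.8e6        # households with at least one good child
seconds = 31 * 3600   # 31 hours of Christmas, in seconds
print(round(homes / seconds, 1))   # prints 822.6
```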
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
This means that Santa's sleigh is travelling at 650 miles per second, 3,000 times the speed of sound. For comparison, the fastest man-made vehicle, the Ulysses space probe, moves at a poky 27.4 miles per second; the average reindeer runs at 15 miles per hour.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The sleigh's payload adds another interesting element. Assuming that each child gets nothing more than a medium-sized LEGO set (2 pounds), the sleigh is carrying 321,300 tons, not counting Santa, who is invariably described as overweight. On land, conventional reindeer can pull no more than 300 pounds. Even granting that "flying reindeer" (see point one) could pull TEN TIMES the usual amount, we cannot do the job with 8 or even 9 reindeer; we need 214,000. This increases the weight, not even counting the sleigh, to 353,430 tons. Again, for comparison, this is four times the weight of the Queen Elizabeth 2.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
353,000 tons travelling at 650 miles per second creates enormous air resistance. This will heat the reindeer in the same manner as a spacecraft re-entering the earth's atmosphere. The lead pair of reindeer will absorb 14.2 QUINTILLION joules of energy. Per second. Each. In short, they will burst into flame almost instantaneously, exposing the next pair of reindeer and creating deafening sonic booms in their wake. The entire team will be vaporised within 4.26 thousandths of a second. Santa, meanwhile, will be subjected to centrifugal forces 17,500.06 times the force of gravity. A 300-pound Santa would be pinned to the back of his sleigh by 4,315,015 pounds of force.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Conclusion: There was a Santa, but he's dead now.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Article originally published on &lt;a href="http://www.rfcafe.com/miscellany/humor/engineer-thinks-too-much-about-christmas.htm" rel="nofollow" target="_blank"&gt;rfcafe&lt;/a&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDyWZ5AyW_iL2cQ68bzUQTIaOviCwKwslFAbACXgq-Eg1DBAr3y-jtUH6O87rrgly6bEunCK72k2v-VqM-uzPwR5_tRs0MQW061FukRiM7_jJVMghxAlZdx4d2aPuQ_iJcpSuZ8xtM2zv4/s72-c/dkbg.png" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Wollongong NSW 2500, Australia</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">-34.4278121 150.89306069999998</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">-34.4540066 150.85272019999996 -34.401617599999994 150.9334012</georss:box><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>A battery-free sensor for underwater exploration</title><link>https://www.engineeringinsights.in/2019/09/a-battery-free-sensor-for-underwater.html</link><category>Material Science and Engineering</category><category>Sensors</category><category>Trending Technologies</category><pubDate>Mon, 16 Sep 2019 11:32:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-3817600217193303644</guid><description>&lt;h3 style="clear: both; text-align: center;"&gt;
&lt;b&gt;The submerged system uses the vibration of “piezoelectric” materials to generate power and send and receive data.&lt;/b&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdnmAJWdyMZ5Nc3UMk58msMevb4j5f3booCjbpi6CuQCh5AQx-W9r7XufwFwnA4381TXgN-Sy0BAlRobHHRnljrIa3E7fhkpFeA3C6T6mOtBrFMHU6pB0gdgUDzQNvJn4a6I0rGt60oqW1/s1600/MIT-Battery-Free-Sensing-01.jpg" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="426" data-original-width="639" height="426" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdnmAJWdyMZ5Nc3UMk58msMevb4j5f3booCjbpi6CuQCh5AQx-W9r7XufwFwnA4381TXgN-Sy0BAlRobHHRnljrIa3E7fhkpFeA3C6T6mOtBrFMHU6pB0gdgUDzQNvJn4a6I0rGt60oqW1/s640/MIT-Battery-Free-Sensing-01.jpg" width="640" /&gt;&lt;/a&gt;&lt;/h3&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;/div&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px;"&gt;To investigate the vastly unexplored oceans covering most of our planet, researchers aim to build a submerged network of interconnected sensors that send data to the surface — an underwater “internet of things.” But how to supply constant power to scores of sensors designed to stay for long durations in the ocean’s deep?&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px;"&gt;MIT researchers have an answer: a battery-free underwater communication system that uses near-zero power to transmit sensor data. The system could be used to monitor sea temperatures to study climate change and track marine life over long periods — and even sample waters on distant planets. They are presenting the system at the SIGCOMM conference this week, in a paper that has won the conference’s “best paper” award.&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px;"&gt;The system makes use of two key phenomena. One, called the “piezoelectric effect,” occurs when vibrations in certain materials generate an electrical charge. The other is “backscatter,” a communication technique commonly used for RFID tags, that transmits data by reflecting modulated wireless signals off a tag and back to a reader.&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px;"&gt;In the researchers’ system, a transmitter sends acoustic waves through water toward a piezoelectric sensor that has stored data. When the wave hits the sensor, the material vibrates and stores the resulting electrical charge. Then the sensor uses the stored energy to reflect a wave back to a receiver — or it doesn’t reflect one at all. Alternating between reflection in that way corresponds to the bits in the transmitted data: For a reflected wave, the receiver decodes a 1; for no reflected wave, the receiver decodes a 0.&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px;"&gt;“Once you have a way to transmit 1s and 0s, you can send any information,” says co-author Fadel Adib, an assistant professor in the MIT Media Lab and the Department of Electrical Engineering and Computer Science and founding director of the Signal Kinetics Research Group. “Basically, we can communicate with underwater sensors based solely on the incoming sound signals whose energy we are harvesting.”&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px;"&gt;The researchers demonstrated their Piezo-Acoustic Backscatter System in an MIT pool, using it to collect water temperature and pressure measurements. The system was able to transmit 3 kilobits per second of accurate data from two sensors simultaneously at a distance of 10 meters between sensor and receiver.&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px;"&gt;Applications go beyond our own planet. The system, Adib says, could be used to collect data in the recently discovered subsurface ocean on Saturn’s largest moon, Titan. In June, NASA announced the Dragonfly mission to send a rover in 2026 to explore the moon, sampling water reservoirs and other sites.&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;span style="color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px;"&gt;“How can you put a sensor under the water on Titan that lasts for long periods of time in a place that’s difficult to get energy?” says Adib, who co-wrote the paper with Media Lab researcher JunSu Jang. “Sensors that communicate without a battery open up possibilities for sensing in extreme environments.”&lt;/span&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;b&gt;Preventing deformation&lt;/b&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
Inspiration for the system hit while Adib was watching “Blue Planet,” a nature documentary series exploring various aspects of sea life. Oceans cover about 72 per cent of Earth’s surface. “It occurred to me how little we know of the ocean and how marine animals evolve and procreate,” he says. Internet-of-things (IoT) devices could aid that research, “but underwater you can’t use Wi-Fi or Bluetooth signals … and you don’t want to put batteries all over the ocean, because that raises issues with pollution.”&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
That led Adib to piezoelectric materials, which have been used in microphones and other devices for about 150 years. They produce a small voltage in response to vibrations. But the effect is also reversible: applying a voltage causes the material to deform. Placed underwater, that deformation produces a pressure wave that travels through the water, which is why piezoelectric devices are often used to detect sunken vessels, fish, and other underwater objects.&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
“That reversibility is what allows us to develop a very powerful underwater backscatter communication technology,” Adib says.&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
Communication relies on controlling whether the piezoelectric resonator deforms in response to an incoming wave. At the heart of the system is a submerged node, a circuit board that houses a piezoelectric resonator, an energy-harvesting unit, and a microcontroller. Any type of sensor can be integrated into the node by programming the microcontroller. An acoustic projector (transmitter) and an underwater listening device, called a hydrophone (receiver), are placed some distance away.&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
Say the sensor wants to send a 0 bit. When the transmitter sends its acoustic wave at the node, the piezoelectric resonator absorbs the wave and naturally deforms, and the energy harvester stores a little charge from the resulting vibrations. The receiver then sees no reflected signal and decodes a 0.&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
However, when the sensor wants to send a 1 bit, the behaviour changes. When the transmitter sends a wave, the microcontroller uses the stored charge to send a small voltage to the piezoelectric resonator. That voltage reorients the material’s structure in a way that stops it from deforming, so that it instead reflects the wave. Sensing a reflected wave, the receiver decodes a 1.&lt;/div&gt;
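The node-side decision just described can be sketched as a simple behavioural model: for a 1 bit, stored charge drives the resonator so the incoming wave is reflected; for a 0 bit, the node stays passive, absorbs the wave, and harvests its energy. All energy figures below are invented for illustration.

```python
# Behavioural model of the node: reflect on a 1 (spending stored energy),
# absorb and harvest on a 0. Energy units are invented for illustration.
def node_step(bit, stored_energy, harvest=1.0, drive_cost=1.0):
    """Return (reflected, new_stored_energy) for one incoming acoustic wave."""
    if bit == 1 and stored_energy >= drive_cost:
        return True, stored_energy - drive_cost    # drive the piezo: reflect
    return False, stored_energy + harvest          # stay passive: harvest

energy = 2.0           # charge already harvested from earlier waves
reflections = []
for bit in [0, 1, 1, 0, 1]:
    reflected, energy = node_step(bit, energy)
    reflections.append(reflected)
print(reflections)   # prints [False, True, True, False, True]
```

The model also captures the system's key constraint: a node can only reflect (send a 1) if it has harvested enough energy from earlier waves.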
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
&lt;b&gt;Long-term deep-sea sensing&lt;/b&gt;&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
The transmitter and receiver must have power but can be planted on ships or buoys, where batteries are easier to replace, or connected to outlets on land. One transmitter and one receiver can gather information from many sensors covering one area or many areas.&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
“When you’re tracking a marine animal, for instance, you want to track it over a long-range and want to keep the sensor on them for a long period of time. You don’t want to worry about the battery running out,” Adib says. “Or, if you want to track temperature gradients in the ocean, you can get information from sensors covering a number of different places.”&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
Another interesting application is monitoring brine pools, large areas of brine that sit in pools in ocean basins, and are difficult to monitor long-term. They exist, for instance, on the Antarctic Shelf, where salt settles during the formation of sea ice and could aid in studying melting ice and marine life interaction with the pools. “We could sense what’s happening down there, without needing to keep hauling sensors up when their batteries die,” Adib says.&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
Polly Huang, a professor of electrical engineering at National Taiwan University, praised the work for its technical novelty and potential impact on environmental science. “This is a cool idea,” Huang says. “It's not news one uses piezoelectric crystals to harvest energy … [but is the] first time to see it being used as a radio at the same time [which] is unheard of to the sensor network/system research community. Also interesting and unique is the hardware design and fabrication. The circuit and the design of the encapsulation are both sound and interesting.”&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
While noting that the system still needs more experimentation, especially in seawater, Huang adds that “this might be the ultimate solution for researchers in marine biology, oceanography, or even meteorology — those in need of long-term, low-human-effort underwater sensing.”&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
Next, the researchers aim to demonstrate that the system can work at farther distances and communicate with more sensors simultaneously. They’re also hoping to test if the system can transmit sound and low-resolution images.&lt;/div&gt;
&lt;div style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 15.75px; line-height: 1.6; padding: 0px 0px 25px; text-align: justify;"&gt;
The work is sponsored, in part, by the U.S. Office of Naval Research.&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdnmAJWdyMZ5Nc3UMk58msMevb4j5f3booCjbpi6CuQCh5AQx-W9r7XufwFwnA4381TXgN-Sy0BAlRobHHRnljrIa3E7fhkpFeA3C6T6mOtBrFMHU6pB0gdgUDzQNvJn4a6I0rGt60oqW1/s72-c/MIT-Battery-Free-Sensing-01.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Wollongong NSW 2500, Australia</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">-34.4278121 150.89306069999998</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">-34.4540066 150.85272019999996 -34.401617599999994 150.9334012</georss:box><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>Electric Boats Could Be Floating Batteries for Island Microgrids</title><link>https://www.engineeringinsights.in/2019/08/electric-boats-could-be-floating.html</link><category>Energy and Power</category><category>Sustainable Power</category><category>Trending Technologies</category><pubDate>Sat, 17 Aug 2019 17:52:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-648239264748608781</guid><description>&lt;h3 style="text-align: justify;"&gt;
Researchers in Australia have developed a control algorithm that allows electric boats equipped with solar panels to sell power to a microgrid&lt;/h3&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqcIEQ_3h1-4A38QwMuQcrtUplF1b3ANKzZzzCK6gsA1GHdAnMOKICbvC9-izrCLczpHu4aKJcISVdP39-rK-u5JwUEdpcTO65SPDK4nnu0g1w04ZqGTmz8GqQiK8U1x1talzJG6T2r19k/s1600/MzM1NTM3Ng.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="746" data-original-width="1240" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqcIEQ_3h1-4A38QwMuQcrtUplF1b3ANKzZzzCK6gsA1GHdAnMOKICbvC9-izrCLczpHu4aKJcISVdP39-rK-u5JwUEdpcTO65SPDK4nnu0g1w04ZqGTmz8GqQiK8U1x1talzJG6T2r19k/s640/MzM1NTM3Ng.jpg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In developed countries, lights roar to life with the flick of a switch and televisions hum quietly with the touch of a button—provided you still have one of those. But on most of Indonesia’s remote islands, accessing electricity is neither simple nor convenient.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
For example—prior to 2018, diesel generators provided residents of East Kalimantan’s Berau district with electricity for just four hours a day. That June, a government-backed organization installed new hybrid microgrids, enabling residents to have electricity all day long, PV magazine reported. These hybrid microgrids were composed of photovoltaic solar panels (PVs) to collect energy and lithium-ion batteries to store it.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
But there may be another way to power remote islands, especially in the aftermath of natural disasters: boats. Yes, boats.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Researchers at the University of New South Wales in Sydney, Australia, created an algorithm that can theoretically turn electric boats into small renewable power plants. They tested the algorithm with a microgrid in their lab, using four 6-volt gel batteries connected in a 24-V series as a stand-in for a boat.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In their experiment, they found that the algorithm could manage power flows reliably enough to allow electric boats to provide peak load support to a grid directly after a trip.&amp;nbsp;&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
To implement this approach, they’d need an electric boat with its own PV system, which would charge the boat’s batteries while the boat was out on the water. Then, when the boat docked, it could act as a small power plant, providing electricity to homes on the island.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
With the algorithm in place, boat owners could decide when to sell electricity—and how much they wanted to sell. They might, for example, set their system to automatically sell 10 per cent of its stored energy, and only if the batteries are at least halfway charged.&amp;nbsp;&lt;/div&gt;
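That kind of owner-set policy reduces to a simple rule. Here is a minimal Python sketch of it — the function name, units, and default thresholds are illustrative, not part of the researchers’ actual control algorithm:

```python
# Hypothetical sketch of the sell rule described above: offer a fixed
# fraction of stored energy, but only when the battery is at least
# half charged. All names and thresholds here are invented.

def energy_to_sell(state_of_charge, capacity_kwh,
                   sell_fraction=0.10, min_soc=0.50):
    """Return the kWh the boat offers to the microgrid this cycle."""
    if state_of_charge < min_soc:
        return 0.0                      # keep the charge for the boat itself
    stored_kwh = state_of_charge * capacity_kwh
    return sell_fraction * stored_kwh

# A 20 kWh bank at 80% charge offers 10% of its 16 kWh of stored energy.
print(energy_to_sell(0.80, 20.0))  # 1.6
```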
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Boats are uniquely positioned to provide this kind of service, the researchers point out. Electric cars don’t generally have their own PV system. So instead of adding power to the grid, as a boat could, electric cars draw from it.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The proposed technology works pretty similarly to the microgrids that are gradually rolling out in Indonesia—those microgrids also contain PVs to collect energy and lithium-ion batteries to store it. But there’s one key difference: portability.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
If Indonesia were hit with a natural disaster, those microgrids could be destroyed. Even Indonesia’s widely electrified islands may be impacted. With the new approach, the Indonesian government could use the boats it sent with food and supplies to also provide power.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The concept is still in its infancy, but the University of New South Wales team expects to get its algorithm out of the lab and into the ocean by testing it with an actual electric boat in the near future.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;a href="https://spectrum.ieee.org/" rel="nofollow" target="_blank"&gt;&lt;span style="color: black;"&gt;Content Credits :IEEE Spectrum&lt;/span&gt;&lt;/a&gt;&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqcIEQ_3h1-4A38QwMuQcrtUplF1b3ANKzZzzCK6gsA1GHdAnMOKICbvC9-izrCLczpHu4aKJcISVdP39-rK-u5JwUEdpcTO65SPDK4nnu0g1w04ZqGTmz8GqQiK8U1x1talzJG6T2r19k/s72-c/MzM1NTM3Ng.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Wollongong NSW 2500, Australia</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">-34.4278121 150.89306069999998</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">-34.4540066 150.85272019999996 -34.401617599999994 150.9334012</georss:box><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>Specialized AI Chips Hold Both Promise and Peril for Developers</title><link>https://www.engineeringinsights.in/2019/08/specialized-ai-chips-hold-both-promise.html</link><category>Artificial intelligence</category><category>Data Science</category><category>Machine-Learning</category><category>Semiconductors</category><category>Trending Technologies</category><pubDate>Mon, 12 Aug 2019 11:46:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-6670985029374680517</guid><description>&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGmUSzNerernLhV3jYUVA-4pFQl2s37U7a5dl7hrxPctLwEjrYuyZzzLJpbUvrSpUP6L_SFRBh-ua_nQGbpqlfEb7o4i7rMk-VBb5oUyWe4sAcx846Ibl9jLv8uokHIAJ1gxDnEAAVweOF/s1600/MzM1MTUzMg.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="373" data-original-width="620" height="384" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGmUSzNerernLhV3jYUVA-4pFQl2s37U7a5dl7hrxPctLwEjrYuyZzzLJpbUvrSpUP6L_SFRBh-ua_nQGbpqlfEb7o4i7rMk-VBb5oUyWe4sAcx846Ibl9jLv8uokHIAJ1gxDnEAAVweOF/s640/MzM1MTUzMg.jpeg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
When it comes to the compute-intensive field of AI, hardware vendors are reviving the performance gains we enjoyed at the height of Moore’s Law. The gains come from a new generation of specialized chips for AI applications like deep learning. But the fragmented microchip marketplace that’s emerging will lead to some hard choices for developers.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The new era of chip specialization for AI began when graphics processing units (GPUs), which were originally developed for gaming, were deployed for applications like deep learning. The same architecture that made GPUs render realistic images also enabled them to crunch data much more efficiently than central processing units (CPUs). A big step forward happened in 2007 when Nvidia released CUDA, a toolkit for making GPUs programmable in a general-purpose way.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
AI researchers need every advantage they can get when dealing with the unprecedented computational requirements of deep learning. GPU processing power has advanced rapidly, and chips originally designed to render images have become the workhorses powering world-changing AI research and development. Many of the linear algebra routines that are necessary to make Fortnite run at 120 frames per second are now powering the neural networks at the heart of cutting-edge applications of computer vision, automated speech recognition, and natural language processing.&amp;nbsp;&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Now, the trend toward microchip specialization is turning into an arms race. Gartner projects that specialized chip sales for AI will double to around US $8 billion in 2019 and reach more than $34 billion by 2023. Nvidia’s internal projections place the market for data centre GPUs (which are almost solely used to power deep learning) at $50 billion in the same time frame. In the next five years, we’ll see massive investments in custom silicon come to fruition from Amazon, ARM, Apple, IBM, Intel, Google, Microsoft, Nvidia, and Qualcomm. There is also a slew of startups in the mix. CrunchBase estimates that AI chip companies, including Cerebras, Graphcore, Groq, Mythic AI, SambaNova Systems, and Wave Computing, have collectively raised more than $1 billion.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
To be clear, specialized AI chips are both important and welcome, as they’re catalysts for transforming cutting-edge AI research into real-world applications. However, the flood of new AI chips, each one faster and more specialized than the last, will also seem like a throwback to the rise of enterprise software. We can expect cut-throat sales deals and software specialization aimed at locking developers into working with just one vendor.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Imagine if, 15 years ago, the cloud services AWS, Azure, Box, Dropbox, and GCP all came to market within 12 to 18 months. Their mission would have been to lock in as many businesses as possible—because once you’re on one platform, it’s hard to switch to another. This type of end-user gold rush is about to happen in AI, with tens of billions of dollars, and priceless research, at stake.&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Chipmakers won’t be short on promises, and the benefits will be real. But it’s important for AI developers to understand that new chips that require new architectures could make their products slower to market—even with faster performance. In most cases, AI models are not going to be portable between different chip makers. Developers are well aware of the vendor lock-in risk posed by adopting higher-level cloud APIs, but in the past, the actual compute substrate has been standardized and homogeneous. This situation is going to change dramatically in the world of AI development.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
It's quite likely that more than half of the chip industry’s revenue will soon be driven by AI and deep learning applications. Just as software begets more software, AI begets more AI. We’ve seen it many times: Companies initially focus on one problem, but ultimately solve many. For example, major automakers are striving to bring autonomous cars to the road, and their cutting-edge work in deep learning and computer vision is already having a cascading effect; the research is leading to such offshoot projects as Ford’s delivery robots.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
As specialized AI chips come to market, the current chip giants and major cloud companies will probably strike exclusive deals or acquire top-performing startups. This trend will fragment the AI market rather than unifying it. All that AI developers can do now is understand what’s about to happen and plan how they’ll weigh the benefits of a faster chip against the costs of building on new architectures.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Evan Sparks is CEO of Determined AI. He holds a PhD in computer science from the University of California, Berkeley, where his research focused on distributed systems for data analysis and machine learning.&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiGmUSzNerernLhV3jYUVA-4pFQl2s37U7a5dl7hrxPctLwEjrYuyZzzLJpbUvrSpUP6L_SFRBh-ua_nQGbpqlfEb7o4i7rMk-VBb5oUyWe4sAcx846Ibl9jLv8uokHIAJ1gxDnEAAVweOF/s72-c/MzM1MTUzMg.jpeg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Sydney NSW, Australia</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">-33.8688197 151.20929550000005</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">-34.712802200000006 149.91840200000004 -33.0248372 152.50018900000006</georss:box><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>Drag-and-drop data analytics</title><link>https://www.engineeringinsights.in/2019/08/drag-and-drop-data-analytics.html</link><category>Artificial intelligence</category><category>Data Science</category><category>Machine-Learning</category><category>Trending Technologies</category><pubDate>Sun, 4 Aug 2019 17:26:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-8406015023657091760</guid><description>&lt;h4&gt;
The system lets nonspecialists use machine-learning models to make predictions for medical research, sales, and more.&lt;/h4&gt;
&lt;img alt="For years, researchers from MIT and Brown University have been developing an interactive system that lets users drag-and-drop and manipulate data on any touchscreen, including smartphones and interactive whiteboards. Now, theyâ&#128;&#153;ve included a tool that instantly and automatically generates machine-learning models to run prediction tasks on that data." src="https://news.mit.edu/sites/mit.edu.newsoffice/files/styles/news_article_image_top_slideshow/public/images/2019/MIT-Touchscreen-Analytics_0.jpg?itok=NLgqwu5S" /&gt;&lt;br /&gt;
&lt;br /&gt;
&lt;div style="text-align: justify;"&gt;
In the Iron Man movies, Tony Stark uses a holographic computer to project 3-D data into thin air, manipulate them with his hands, and find fixes to his superhero troubles. In the same vein, researchers from MIT and Brown University have now developed a system for interactive data analytics that runs on touchscreens and lets everyone — not just billionaire tech geniuses — tackle real-world issues.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
For years, the researchers have been developing an interactive data-science system called Northstar, which runs in the cloud but has an interface that supports any touchscreen device, including smartphones and large interactive whiteboards. Users feed the system datasets, and manipulate, combine, and extract features on a user-friendly interface, using their fingers or a digital pen, to uncover trends and patterns.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In a paper being presented at the ACM SIGMOD conference, the researchers detail a new component of Northstar, called VDS for “virtual data scientist,” that instantly generates machine-learning models to run prediction tasks on their datasets. Doctors, for instance, can use the system to help predict which patients are more likely to have certain diseases, while business owners might want to forecast sales. If using an interactive whiteboard, everyone can also collaborate in real-time.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The aim is to democratize data science by making it easy to do complex analytics, quickly and accurately.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“Even a coffee shop owner who doesn’t know data science should be able to predict their sales over the next few weeks to figure out how much coffee to buy,” says co-author and long-time Northstar project lead Tim Kraska, an associate professor of electrical engineering and computer science at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and founding co-director of the new Data System and AI Lab (DSAIL). “In companies that have data scientists, there’s a lot of back and forth between data scientists and nonexperts, so we can also bring them into one room to do analytics together.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
VDS is based on an increasingly popular technique in artificial intelligence called automated machine-learning (AutoML), which lets people with limited data-science know-how train AI models to make predictions based on their datasets. Currently, the tool leads the DARPA D3M Automatic Machine Learning competition, which every six months decides on the best-performing AutoML tool.&amp;nbsp; &lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Joining Kraska on the paper are: first author Zeyuan Shang, a graduate student, and Emanuel Zgraggen, a postdoc and main contributor of Northstar, both of EECS, CSAIL, and DSAIL; Benedetto Buratti, Yeounoh Chung, Philipp Eichmann, and Eli Upfal, all of Brown; and Carsten Binnig who recently moved from Brown to the Technical University of Darmstadt in Germany.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;b&gt;An “unbounded canvas” for analytics&lt;/b&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The new work builds on years of collaboration on Northstar between researchers at MIT and Brown. Over four years, the researchers have published numerous papers detailing components of Northstar, including the interactive interface, operations on multiple platforms, accelerating results, and studies on user behavior.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Northstar starts as a blank, white interface. Users upload datasets into the system, which appear in a “datasets” box on the left. Any data labels will automatically populate a separate “attributes” box below. There’s also an “operators” box that contains various algorithms, as well as the new AutoML tool. All data are stored and analyzed in the cloud.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;img height="359" src="https://news.mit.edu/sites/mit.edu.newsoffice/files/images/touchscreen-analytics-2.gif" width="640" /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The researchers like to demonstrate the system on a public dataset that contains information on intensive care unit patients. Consider medical researchers who want to examine co-occurrences of certain diseases in certain age groups. They drag and drop into the middle of the interface a pattern-checking algorithm, which at first appears as a blank box. As input, they move into the box disease features labeled, say, “blood,” “infectious,” and “metabolic.” Percentages of those diseases in the dataset appear in the box. Then, they drag the “age” feature into the interface, which displays a bar chart of the patients’ age distribution. Drawing a line between the two boxes links them together. By circling age ranges, the algorithm immediately computes the co-occurrence of the three diseases within that age range.&lt;/div&gt;
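Under the hood, the query being assembled here amounts to a co-occurrence count over an age filter. A minimal stand-in in plain Python, with invented patient records and field names, might look like this:

```python
# Illustrative stand-in for the drag-and-drop query described above:
# given patient records, compute how often a set of disease categories
# co-occurs within a selected age range. The data here is invented.

patients = [
    {"age": 67, "diseases": {"blood", "infectious", "metabolic"}},
    {"age": 72, "diseases": {"blood", "metabolic"}},
    {"age": 45, "diseases": {"infectious"}},
    {"age": 70, "diseases": {"blood", "infectious", "metabolic"}},
]

def co_occurrence(records, features, age_min, age_max):
    """Fraction of patients in [age_min, age_max] having all `features`."""
    in_range = [r for r in records if age_min <= r["age"] <= age_max]
    if not in_range:
        return 0.0
    hits = sum(1 for r in in_range if features <= r["diseases"])
    return hits / len(in_range)

# Among patients aged 60-80, how often do all three categories co-occur?
print(co_occurrence(patients, {"blood", "infectious", "metabolic"}, 60, 80))
```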
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“It’s like a big, unbounded canvas where you can lay out how you want everything,” says Zgraggen, who is the key inventor of Northstar’s interactive interface. “Then, you can link things together to create more complex questions about your data.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;img height="358" src="https://news.mit.edu/sites/mit.edu.newsoffice/files/images/touchscreen-analytics-3.gif" width="640" /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;b&gt;Approximating AutoML&lt;/b&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
With VDS, users can now also run predictive analytics on that data by getting models custom-fit to their tasks, such as data prediction, image classification, or analyzing complex graph structures.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Using the above example, say the medical researchers want to predict which patients may have blood disease based on all features in the dataset. They drag and drop “AutoML” from the list of algorithms. It’ll first produce a blank box, but with a “target” tab, under which they’d drop the “blood” feature. The system will automatically find the best-performing machine-learning pipelines, presented as tabs with constantly updated accuracy percentages. Users can stop the process at any time, refine the search, and examine each model’s error rates, structure, computations, and other things.&lt;/div&gt;
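A toy version of that search loop — candidate “pipelines” scored on a task and kept ranked by accuracy — can be sketched as follows. The candidates and the parity task are invented for illustration; the real system searches over actual machine-learning pipelines:

```python
# Miniature AutoML-style search: score every candidate on the task and
# return them best-first, the way VDS ranks pipelines by accuracy.

def automl_search(candidates, evaluate):
    """Score each named candidate and return (score, name) pairs, best first."""
    scored = [(evaluate(model), name) for name, model in candidates.items()]
    scored.sort(reverse=True)
    return scored

# Toy task: predict the parity of an integer; "pipelines" are plain functions.
data = [(n, n % 2) for n in range(100)]
candidates = {
    "always_zero": lambda n: 0,
    "parity_rule": lambda n: n % 2,
    "threshold": lambda n: int(n > 50),
}

def accuracy(model):
    return sum(model(n) == y for n, y in data) / len(data)

for acc, name in automl_search(candidates, accuracy):
    print(f"{name}: {acc:.2f}")   # parity_rule ranks first with accuracy 1.00
```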
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
According to the researchers, VDS is the fastest interactive AutoML tool to date, thanks, in part, to their custom “estimation engine.” The engine sits between the interface and the cloud storage. It automatically creates several representative samples of a dataset that can be progressively processed to produce high-quality results in seconds.&lt;/div&gt;
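The idea behind such an engine is progressive sampling: evaluate on small random samples first so a rough answer appears quickly, then refine it on larger samples. A minimal sketch of the principle, not the actual Northstar implementation:

```python
# Progressive sampling in miniature: yield successively better estimates
# of a statistic (here, the mean) from growing random samples, so a
# cheap approximation is available long before the full pass finishes.

import random

def progressive_mean(data, sample_sizes=(10, 100, 1000)):
    """Yield estimates of the mean of `data` from growing samples."""
    rng = random.Random(0)                # fixed seed for reproducibility
    for n in sample_sizes:
        sample = rng.sample(data, min(n, len(data)))
        yield sum(sample) / len(sample)   # quick estimate from the sample

data = list(range(10_000))                # true mean is 4999.5
for estimate in progressive_mean(data):
    print(round(estimate, 1))             # estimates converge toward 4999.5
```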
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“Together with my co-authors, I spent two years designing VDS to mimic how a data scientist thinks,” Shang says, meaning it instantly identifies which models and preprocessing steps it should or shouldn’t run on certain tasks, based on various encoded rules. It first chooses from a large list of those possible machine-learning pipelines and runs simulations on the sample set. In doing so, it remembers results and refines its selection. After delivering fast approximated results, the system refines the results in the back end. But the final numbers are usually very close to the first approximation.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“For using a predictor, you don’t want to wait four hours to get your first results back. You want to already see what’s going on and, if you detect a mistake, you can immediately correct it. That’s normally not possible in any other system,” Kraska says. The researchers’ previous user study, in fact, showed that “the moment you delay giving users results, they start to lose engagement with the system.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The researchers evaluated the tool on 300 real-world datasets. Compared with other state-of-the-art AutoML systems, VDS’s approximations were just as accurate, but they were generated within seconds rather than the minutes to hours other tools require.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Next, the researchers are looking to add a feature that alerts users to potential data bias or errors. For instance, to protect patient privacy, sometimes researchers will label medical datasets with patients aged 0 (if they do not know the age) and 200 (if a patient is over 95 years old). But novices may not recognize such errors, which could completely throw off their analytics.&lt;/div&gt;
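A check for the sentinel ages described above is easy to sketch. The function name and valid range here are illustrative; the sentinel codes 0 and 200 follow the article’s example:

```python
# Flag coded placeholder ages (0 = unknown, 200 = "over 95") before they
# silently skew an analysis. Values outside a plausible human range are
# flagged too. The valid range below is an assumption for illustration.

SENTINELS = {0, 200}

def flag_suspect_ages(ages, valid_range=(1, 120)):
    """Return indices of entries that look like sentinels or errors."""
    lo, hi = valid_range
    return [i for i, a in enumerate(ages)
            if a in SENTINELS or not (lo <= a <= hi)]

ages = [34, 0, 57, 200, 88]
print(flag_suspect_ages(ages))  # [1, 3]
```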
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“If you’re a new user, you may get results and think they’re great,” Kraska says. “But we can warn people that there may, in fact, be some outliers in the dataset that indicate a problem.”&lt;/div&gt;
</description><georss:featurename xmlns:georss="http://www.georss.org/georss">Sydney NSW, Australia</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">-33.8688197 151.20929550000005</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">-34.712802200000006 149.91840200000004 -33.0248372 152.50018900000006</georss:box><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>For better deep neural network vision, just add feedback (loops)</title><link>https://www.engineeringinsights.in/2019/08/for-better-deep-neural-network-vision.html</link><category>Artificial intelligence</category><category>Research</category><category>Robotics</category><category>Trending Technologies</category><pubDate>Sat, 3 Aug 2019 15:30:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-2818707590744895105</guid><description>&lt;h2 style="text-align: justify;"&gt;
The DiCarlo lab finds that a recurrent architecture helps both artificial intelligence and our brains to better identify objects.&lt;/h2&gt;
&lt;table cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto; text-align: center;"&gt;&lt;tbody&gt;
&lt;tr&gt;&lt;td style="text-align: center;"&gt;&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-t1xJep1kdBTtZw5FPAl1HyobLwFqU_vLzS-ze8N7bdHiitQrn9op14IbmX6dfZGkETG8tiEjah9AnBozRDLgaAblLvy4n937-cT6Z3K8l780WUK_eylTb4nwBVtBGIZJCY-2bhaibFWj/s1600/d20130906170145-02.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"&gt;&lt;img border="0" data-original-height="426" data-original-width="745" height="363" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-t1xJep1kdBTtZw5FPAl1HyobLwFqU_vLzS-ze8N7bdHiitQrn9op14IbmX6dfZGkETG8tiEjah9AnBozRDLgaAblLvy4n937-cT6Z3K8l780WUK_eylTb4nwBVtBGIZJCY-2bhaibFWj/s640/d20130906170145-02.jpg" width="640" /&gt;&lt;/a&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class="tr-caption" style="text-align: right;"&gt;Source MIT News&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;&lt;/table&gt;
&lt;div style="text-align: right;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;Your ability to recognize objects is remarkable. If you see a cup under unusual lighting or from unexpected directions, there’s a good chance that your brain will still compute that it is a cup. Such precise object recognition is one holy grail for artificial intelligence developers, such as those improving self-driving car navigation.&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;While modelling primate object recognition in the visual cortex has revolutionized artificial visual recognition systems, current deep learning systems are simplified, and fail to recognize some objects that are child’s play for primates such as humans.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;In findings published in Nature Neuroscience, McGovern Institute investigator James DiCarlo and colleagues have found evidence that feedback improves recognition of hard-to-recognize objects in the primate brain, and that adding feedback circuitry also improves the performance of artificial neural network systems used for vision applications.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;Deep convolutional neural networks (DCNN) are currently the most successful models for accurately recognizing objects on a fast timescale (less than 100 milliseconds) and have a general architecture inspired by the primate ventral visual stream, cortical regions that progressively build an accessible and refined representation of viewed objects. Most DCNNs are simple in comparison to the primate ventral stream, however.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;“For a long period of time, we were far from a model-based understanding. Thus our field got started on this quest by modelling visual recognition as a feedforward process,” explains senior author DiCarlo, who is also the head of MIT’s Department of Brain and Cognitive Sciences and research co-leader in the Center for Brains, Minds, and Machines (CBMM). “However, we know there are recurrent anatomical connections in brain regions linked to object recognition.”&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;Think of feedforward DCNNs, and the portion of the visual system that first attempts to capture objects, as a subway line that runs forward through a series of stations. The extra, recurrent brain networks are instead like the streets above, interconnected and not unidirectional. Because it only takes about 200 ms for the brain to recognize an object quite accurately, it was unclear if these recurrent interconnections in the brain had any role at all in core object recognition. Perhaps those recurrent connections are only in place to keep the visual system in tune over long periods of time. For example, the return gutters of the streets help slowly clear it of water and trash but are not strictly needed to quickly move people from one end of town to the other. DiCarlo, along with lead author and CBMM postdoc Kohitij Kar, set out to test whether a subtle role of recurrent operations in rapid visual object recognition was being overlooked.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;Challenging recognition&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;The authors first needed to identify objects that are trivially decoded by the primate brain but are challenging for artificial systems. Rather than trying to guess why deep learning was having problems recognizing an object (is it due to the clutter in the image? a misleading shadow?), the authors took an unbiased approach that turned out to be critical.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;Kar explains further that “we realized that AI models actually don’t have problems with every image where an object is occluded or in clutter. Humans trying to guess why AI models were challenged turned out to be holding us back.”&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;Instead, the authors presented the deep learning system, as well as monkeys and humans, with images, homing in on "challenge images" where the primates could easily recognize the objects in those images, but a feedforward DCNN ran into problems. When they, and others, added appropriate recurrent processing to these DCNNs, object recognition in challenge images suddenly became a breeze.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;b&gt;Processing times&lt;/b&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;Kar used neural recording methods with very high spatial and temporal precision to determine whether these images were really so trivial for primates. Remarkably, they found that although challenge images had initially appeared to be child’s play to the human brain, they actually involve extra neural processing time (about an additional 30 ms), suggesting that recurrent loops operate in our brain, too.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&amp;nbsp;“What the computer vision community has recently achieved by stacking more and more layers onto artificial neural networks, evolution has achieved through a brain architecture with recurrent connections," says Kar.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;Diane Beck, professor of psychology and co-chair of the Intelligent Systems Theme at the Beckman Institute and not an author on the study, explains further. “Since entirely feedforward deep convolutional nets are now remarkably good at predicting primate brain activity, it raised questions about the role of feedback connections in the primate brain. This study shows that, yes, feedback connections are very likely playing a role in object recognition after all.”&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;What does this mean for a self-driving car? It shows that deep learning architectures involved in object recognition need recurrent components if they are to match the primate brain, and also indicates how to operationalize this procedure for the next generation of intelligent machines.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;“Recurrent models offer predictions of neural activity and behaviour over time," says Kar. “We may now be able to model more involved tasks. Perhaps one day, the systems will not only recognize an object, such as a person but also perform cognitive tasks that the human brain so easily manages, such as understanding the emotions of other people.”&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;This work was supported by the Office of Naval Research and the Center for Brains, Minds, and Machines through the National Science Foundation.&lt;/span&gt;&lt;/div&gt;
&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj-t1xJep1kdBTtZw5FPAl1HyobLwFqU_vLzS-ze8N7bdHiitQrn9op14IbmX6dfZGkETG8tiEjah9AnBozRDLgaAblLvy4n937-cT6Z3K8l780WUK_eylTb4nwBVtBGIZJCY-2bhaibFWj/s72-c/d20130906170145-02.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Sydney NSW, Australia</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">-33.8688197 151.20929550000005</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">-34.712802200000006 149.91840200000004 -33.0248372 152.50018900000006</georss:box><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>We are now back in action</title><link>https://www.engineeringinsights.in/2019/08/we-are-now-back-in-action.html</link><category>Home</category><pubDate>Sat, 3 Aug 2019 14:59:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-241552969515901817</guid><description>&lt;h2 style="text-align: center;"&gt;
&lt;br /&gt;&lt;/h2&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzHqC87ubfqt6AvjbAtynH0wYbixCTvazdROtDt2JYrUpEVVt7vNdNFfF3wiv-EIl2UVOHGs3W7xBSLPmM43Zp0uNfaaHjvdsv63tX-mYoDDzsPwZn5IoZTkLeUvp3I66jjgRT1w89MiOr/s1600/back-to-work-now2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="800" data-original-width="1600" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzHqC87ubfqt6AvjbAtynH0wYbixCTvazdROtDt2JYrUpEVVt7vNdNFfF3wiv-EIl2UVOHGs3W7xBSLPmM43Zp0uNfaaHjvdsv63tX-mYoDDzsPwZn5IoZTkLeUvp3I66jjgRT1w89MiOr/s640/back-to-work-now2.jpg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div&gt;
&lt;span style="font-size: x-large;"&gt;&lt;u&gt;&lt;br /&gt;&lt;/u&gt;&lt;/span&gt;&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzHqC87ubfqt6AvjbAtynH0wYbixCTvazdROtDt2JYrUpEVVt7vNdNFfF3wiv-EIl2UVOHGs3W7xBSLPmM43Zp0uNfaaHjvdsv63tX-mYoDDzsPwZn5IoZTkLeUvp3I66jjgRT1w89MiOr/s72-c/back-to-work-now2.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Wollongong NSW 2500, Australia</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">-34.4278121 150.89306069999998</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">-34.4540066 150.85272019999996 -34.401617599999994 150.9334012</georss:box><author>insightsonengineering@gmail.com (Manu Reghukumar)</author></item><item><title>A novel data-compression technique for faster computer programs</title><link>https://www.engineeringinsights.in/2019/04/a-novel-data-compression-technique-for.html</link><category>Trending Technologies</category><pubDate>Sat, 20 Apr 2019 16:40:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-8673935617114976316</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipUSOTabbcDf6nPMYIq9yzeBJz3hCvJ9TdEZujk7stypmaTGQVlrnbYq9Og8lLhQwEXXJDydRhJofzAtcEMbZA-9Arex83hp1FKmiUneph4bW7iQBMwrBjFzNz3tDtq9fy113-EPkF1QfR/s1600/MIT-Compression-Object_0.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="426" data-original-width="639" height="425" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipUSOTabbcDf6nPMYIq9yzeBJz3hCvJ9TdEZujk7stypmaTGQVlrnbYq9Og8lLhQwEXXJDydRhJofzAtcEMbZA-9Arex83hp1FKmiUneph4bW7iQBMwrBjFzNz3tDtq9fy113-EPkF1QfR/s640/MIT-Compression-Object_0.jpg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;h3 style="text-align: justify;"&gt;
Researchers free up more bandwidth by compressing “objects” within the memory hierarchy.&lt;/h3&gt;
&lt;div style="text-align: justify;"&gt;
A novel technique developed by MIT researchers rethinks hardware data compression to free up more memory used by computers and mobile devices, allowing them to run faster and perform more tasks simultaneously.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Data compression leverages redundant data to free up storage capacity, boost computing speeds, and provide other perks. In current computer systems, accessing main memory is very expensive compared to actual computation. Because of this, using data compression in the memory helps improve performance, as it reduces the frequency and amount of data programs need to fetch from main memory.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Memory in modern computers manages and transfers data in fixed-size chunks, on which traditional compression techniques must operate. Software, however, doesn’t naturally store its data in fixed-size chunks. Instead, it uses “objects,” data structures that contain various types of data and have variable sizes. Therefore, traditional hardware compression techniques handle objects poorly.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In a paper being presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems this week, the MIT researchers describe the first approach to compress objects across the memory hierarchy. This reduces memory usage while improving performance and efficiency.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Programmers could benefit from this technique when programming in any modern programming language — such as Java, Python, and Go — that stores and manages data in objects, without changing their code. On their end, consumers would see computers that can run much faster or can run many more apps at the same speeds. Because each application consumes less memory, it runs faster, so a device can support more applications within its allotted memory.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In experiments using a modified Java virtual machine, the technique compressed twice as much data and reduced memory usage by half over traditional cache-based methods.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“The motivation was trying to come up with a new memory hierarchy that could do object-based compression, instead of cache-line compression, because that’s how most modern programming languages manage data,” says first author Po-An Tsai, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL).&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“All computer systems would benefit from this,” adds co-author Daniel Sanchez, a professor of computer science and electrical engineering, and a researcher at CSAIL. “Programs become faster because they stop being bottlenecked by memory bandwidth.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The researchers built on their prior work that restructures the memory architecture to directly manipulate objects. Traditional architectures store data in blocks in a hierarchy of progressively larger and slower memories, called “caches.” Recently accessed blocks rise to the smaller, faster caches, while older blocks are moved to slower and larger caches, eventually ending back in main memory. While this organization is flexible, it is costly: To access memory, each cache needs to search for the address among its contents.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“Because the natural unit of data management in modern programming languages is objects, why not just make a memory hierarchy that deals with objects?” Sanchez says.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In a paper published last October, the researchers detailed a system called Hotpads that stores entire objects, tightly packed into hierarchical levels, or “pads.” These levels reside entirely on efficient, on-chip, directly addressed memories — with no sophisticated searches required.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Programs then directly reference the location of all objects across the hierarchy of pads. Newly allocated and recently referenced objects, and the objects they point to, stay in the faster level. When the faster level fills, it runs an “eviction” process that keeps recently referenced objects but kicks down older objects to slower levels and recycles objects that are no longer useful, to free up space. Pointers are then updated in each object to point to the new locations of all moved objects. In this way, programs can access objects much more cheaply than searching through cache levels.&lt;/div&gt;
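The eviction flow just described can be sketched as a toy two-level hierarchy. Every name and policy detail below (the capacity, demoting the least recently referenced half) is our illustrative choice, not the actual Hotpads design:

```python
# Toy two-level "pad" hierarchy (an illustrative sketch, not the real Hotpads).
# Objects live in the fast pad until it fills; eviction then keeps recently
# referenced objects, kicks older ones down to the slow pad, and recycles
# objects the program no longer holds live.

class PadHierarchy:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity
        self.fast = {}     # obj_id: (value, last_access_tick)
        self.slow = {}     # obj_id: value
        self.live = set()  # obj_ids still reachable by the program
        self.tick = 0

    def access(self, obj_id, value=None):
        self.tick += 1
        if value is not None:         # new allocation starts in the fast pad
            self.live.add(obj_id)
        elif obj_id in self.slow:     # recall a demoted object to the fast pad
            value = self.slow.pop(obj_id)
        else:                         # already in the fast pad
            value = self.fast[obj_id][0]
        self.fast[obj_id] = (value, self.tick)
        if len(self.fast) > self.fast_capacity:
            self._evict()
        return value

    def _evict(self):
        # Demote the least recently referenced half; recycle dead objects.
        by_age = sorted(self.fast, key=lambda k: self.fast[k][1])
        for obj_id in by_age[: len(by_age) // 2]:
            value, _ = self.fast.pop(obj_id)
            if obj_id in self.live:
                self.slow[obj_id] = value   # kicked down, not lost
            # otherwise the object is recycled: freed without moving anywhere

    def free(self, obj_id):
        self.live.discard(obj_id)

h = PadHierarchy(fast_capacity=4)
for i, v in zip(range(6), "abcdef"):
    h.access(i, v)                     # six allocations trigger one eviction
print(sorted(h.fast), sorted(h.slow))  # → [2, 3, 4, 5] [0, 1]
```

A real design would also rewrite pointers inside the moved objects; here, object IDs stand in for those pointers.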
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
For their new work, the researchers designed a technique, called “Zippads,” that leverages the Hotpads architecture to compress objects. When objects first start at the faster level, they’re uncompressed. But when they’re evicted to slower levels, they’re all compressed. Pointers in all objects across levels then point to those compressed objects, which makes them easy to recall to the faster levels and allows them to be stored more compactly than with prior techniques.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
A compression algorithm then leverages redundancy across objects efficiently. This technique uncovers more compression opportunities than previous techniques, which were limited to finding redundancy within each fixed-size block. The algorithm first picks a few representative objects as “base” objects. Then, for each new object, it stores only the data that differs between that object and the representative base objects.&lt;/div&gt;
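The base-plus-delta idea can be sketched in a few lines. This is our simplification for illustration (byte-for-byte diffs against same-size bases), not the paper's actual compression algorithm:

```python
# Cross-object base + delta compression, simplified: a new object is stored as
# a reference to the most similar "base" object plus only the bytes that differ.

def delta_encode(obj, bases):
    """Pick the base with the fewest differing bytes; store only those diffs."""
    candidates = []
    for i, base in enumerate(bases):
        if len(base) == len(obj):
            diff = [(p, b) for p, (a, b) in enumerate(zip(base, obj)) if a != b]
            candidates.append((i, diff))
    if not candidates:
        return None, obj                  # no usable base: store uncompressed
    return min(candidates, key=lambda c: len(c[1]))

def delta_decode(encoded, bases):
    base_i, diff = encoded
    if base_i is None:
        return diff                       # was stored uncompressed
    out = bytearray(bases[base_i])
    for pos, byte in diff:
        out[pos] = byte
    return bytes(out)

bases = [b"headerAAAApayload", b"headerBBBBpayload"]
obj = b"headerAAAXpayload"
enc = delta_encode(obj, bases)            # one differing byte against base 0
assert delta_decode(enc, bases) == obj    # lossless round trip
```

The compressed form shrinks as objects share more structure with a base, which is exactly the cross-object redundancy that fixed-size-block schemes cannot see.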
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Brandon Lucia, an assistant professor of electrical and computer engineering at Carnegie Mellon University, praises the work for leveraging features of object-oriented programming languages to better compress memory. “Abstractions like object-oriented programming are added to a system to make programming simpler, but often introduce a cost in the performance or efficiency of the system,” he says. “The interesting thing about this work is that it uses the existing object abstraction as a way of making memory compression more effective, in turn making the system faster and more efficient with novel computer architecture features.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Content Credits:&lt;a href="http://news.mit.edu/" rel="nofollow" target="_blank"&gt; MIT News Office&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEipUSOTabbcDf6nPMYIq9yzeBJz3hCvJ9TdEZujk7stypmaTGQVlrnbYq9Og8lLhQwEXXJDydRhJofzAtcEMbZA-9Arex83hp1FKmiUneph4bW7iQBMwrBjFzNz3tDtq9fy113-EPkF1QfR/s72-c/MIT-Compression-Object_0.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Wollongong NSW 2500, Australia</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">-34.4278121 150.89306069999998</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">-34.4540066 150.85272019999996 -34.401617599999994 150.9334012</georss:box><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>Technique identifies electricity-producing bacteria</title><link>https://www.engineeringinsights.in/2019/01/technique-identifies-electricity.html</link><category>Energy and Power</category><category>Mechanical Engineering</category><category>Microfluidics</category><category>Research</category><pubDate>Sun, 13 Jan 2019 17:15:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-7337673229236431533</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="background-color: white; color: #222222; font-size: 26.25px;"&gt;&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif;"&gt;Microbes screened with a new microfluidic process might be used in power generation or environmental cleanup.&lt;/span&gt;&lt;/span&gt;&lt;/div&gt;
&lt;img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdc8yrBYD1HokgkNHjP5yqTznzuSE6O9DyRG1ZU7l55C129irQSwZTHTE15cZWEG9ZDcWaN4w5eIVAdkDpEadMYncyHvn4qcVJL4sWQE48v-WGPCsZJXisjYvNGMy3Z18TEcC23I032S7c/s640/MIT-Electric-Bacteria_0.jpg" /&gt;&lt;br /&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;Living in extreme conditions requires creative adaptations. For certain species of bacteria that exist in oxygen-deprived environments, this means finding a way to breathe that doesn’t involve oxygen. These hardy microbes, which can be found deep within mines, at the bottom of lakes, and even in the human gut, have evolved a unique form of breathing that involves excreting and pumping out electrons. In other words, these microbes can actually produce electricity.&lt;/span&gt;&lt;/div&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;&lt;/span&gt;&lt;br /&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: times, &amp;quot;times new roman&amp;quot;, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;
&lt;span style="font-family: times, &amp;quot;times new roman&amp;quot;, serif; font-size: large;"&gt;Scientists and engineers are exploring ways to harness these microbial power plants to run fuel cells and purify sewage water, among other uses. But pinning down a microbe’s electrical properties has been a challenge: The cells are much smaller than mammalian cells and extremely difficult to grow in laboratory conditions.&lt;/span&gt;&lt;/div&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Now MIT engineers have developed a microfluidic technique that can quickly process small samples of bacteria and gauge a specific property that’s highly correlated with bacteria’s ability to produce electricity. They say that this property, known as polarizability, can be used to assess a bacterium’s electrochemical activity in a safer, more efficient manner compared to current techniques.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“The vision is to pick out those strongest candidates to do the desirable tasks that humans want the cells to do,” says Qianru Wang, a postdoc in MIT’s Department of Mechanical Engineering.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“There is recent work suggesting there might be a much broader range of bacteria that have [electricity-producing] properties,” adds Cullen Buie, associate professor of mechanical engineering at MIT. “Thus, a tool that allows you to probe those organisms could be much more important than we thought. It’s not just a small handful of microbes that can do this.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Buie and Wang have published their results today in Science Advances.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;i&gt;Just between frogs&lt;/i&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Bacteria that produce electricity do so by generating electrons within their cells, then transferring those electrons across their cell membranes via tiny channels formed by surface proteins, in a process known as extracellular electron transfer, or EET.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Existing techniques for probing bacteria’s electrochemical activity involve growing large batches of cells and measuring the activity of EET proteins — a meticulous, time-consuming process. Other techniques require rupturing a cell in order to purify and probe the proteins. Buie looked for a faster, less destructive method to assess bacteria’s electrical function.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
For the past 10 years, his group has been building microfluidic chips etched with small channels, through which they flow microliter samples of bacteria. Each channel is pinched in the middle to form an hourglass configuration. When a voltage is applied across a channel, the pinched section — about 100 times smaller than the rest of the channel — puts a squeeze on the electric field, making it 100 times stronger than the surrounding field. The gradient of the electric field creates a phenomenon known as dielectrophoresis, a force that opposes the motion the field induces in the cell. As a result, dielectrophoresis can repel a particle or stop it in its tracks at different applied voltages, depending on that particle’s surface properties.&lt;/div&gt;
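The article gives no equations, but the standard time-averaged dielectrophoretic force on a spherical particle follows from the Clausius-Mossotti factor of the particle and the surrounding medium. The sketch below uses purely illustrative parameter values; none of the numbers come from the study:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(eps_p, sig_p, eps_m, sig_m, omega):
    """Clausius-Mossotti factor K from relative permittivities and conductivities."""
    ep = eps_p * EPS0 - 1j * sig_p / omega  # particle complex permittivity
    em = eps_m * EPS0 - 1j * sig_m / omega  # medium complex permittivity
    return (ep - em) / (ep + 2 * em)

def dep_force(radius, eps_m, k_real, grad_e2):
    """Time-averaged DEP force magnitude (N): 2*pi*eps_m*r^3*Re[K]*grad|E|^2."""
    return 2 * math.pi * eps_m * EPS0 * radius**3 * k_real * grad_e2

# Illustrative only: a 1-micron-radius cell in a water-like medium at 1 MHz.
K = clausius_mossotti(eps_p=60, sig_p=0.5, eps_m=78, sig_m=0.01,
                      omega=2 * math.pi * 1e6)
F = dep_force(radius=1e-6, eps_m=78, k_real=K.real, grad_e2=1e13)
# Positive Re[K] pulls the particle toward the high-field pinch; negative
# Re[K] pushes it away. Either way, the force scales with the field gradient.
```

The trapping voltage observed in the experiment reflects where this force balances a cell's field-driven motion, which is why it can serve as a readout of polarizability.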
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Researchers including Buie have used dielectrophoresis to quickly sort bacteria according to general properties, such as size and species. This time around, Buie wondered whether the technique could suss out bacteria’s electrochemical activity — a far more subtle property.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“Basically, people were using dielectrophoresis to separate bacteria that were as different as, say, a frog from a bird, whereas we’re trying to distinguish between frog siblings — tinier differences,” Wang says.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;i&gt;An electric correlation&lt;/i&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In their new study, the researchers used their microfluidic setup to compare various strains of bacteria, each with a different, known electrochemical activity. The strains included a “wild-type” or natural strain of bacteria that actively produces electricity in microbial fuel cells, and several strains that the researchers had genetically engineered. In general, the team aimed to see whether there was a correlation between a bacterium’s electrical ability and how it behaves in a microfluidic device under a dielectrophoretic force.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The team flowed very small, microliter samples of each bacterial strain through the hourglass-shaped microfluidic channel and slowly amped up the voltage across the channel, one volt per second, from 0 to 80 volts. Through an imaging technique known as particle image velocimetry, they observed that the resulting electric field propelled bacterial cells through the channel until they approached the pinched section, where the much stronger field acted to push back on the bacteria via dielectrophoresis and trap them in place.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Some bacteria were trapped at lower applied voltages, and others at higher voltages. Wang took note of the “trapping voltage” for each bacterial cell, measured their cell sizes, and then used a computer simulation to calculate a cell’s polarizability — how easy it is for a cell to form electric dipoles in response to an external electric field.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
From her calculations, Wang discovered that bacteria that were more electrochemically active tended to have a higher polarizability. She observed this correlation across all species of bacteria that the group tested.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“We have the necessary evidence to see that there’s a strong correlation between polarizability and electrochemical activity,” Wang says. “In fact, polarizability might be something we could use as a proxy to select microorganisms with high electrochemical activity.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Wang says that, at least for the strains they measured, researchers can gauge their electricity production by measuring their polarizability — something that the group can easily, efficiently, and nondestructively track using their microfluidic technique.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Collaborators on the team are currently using the method to test new strains of bacteria that have recently been identified as potential electricity producers.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“If the same trend of correlation stands for those newer strains, then this technique can have a broader application in clean energy generation, bioremediation, and biofuels production,” Wang says.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
This research was supported in part by the National Science Foundation, and the Institute for Collaborative Biotechnologies, through a grant from the U.S. Army.&lt;/div&gt;
&lt;/span&gt;&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhdc8yrBYD1HokgkNHjP5yqTznzuSE6O9DyRG1ZU7l55C129irQSwZTHTE15cZWEG9ZDcWaN4w5eIVAdkDpEadMYncyHvn4qcVJL4sWQE48v-WGPCsZJXisjYvNGMy3Z18TEcC23I032S7c/s72-c/MIT-Electric-Bacteria_0.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Cherthala, Kerala, India</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">9.6836368 76.336537700000008</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">9.6210243 76.25585670000001 9.7462493 76.4172187</georss:box><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>Introducing Scratch 3.0</title><link>https://www.engineeringinsights.in/2019/01/introducing-scratch-30.html</link><category>Coding</category><category>Featured News</category><category>Opensource</category><category>Software Training</category><pubDate>Sun, 13 Jan 2019 13:40:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-8636834780630399519</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;div style="text-align: justify;"&gt;
&lt;iframe align="center" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/R_Hc_eJ-ldY" width="560"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;
The Lifelong Kindergarten group at the MIT Media Lab has launched &lt;a href="https://scratch.mit.edu/" rel="nofollow" target="_blank"&gt;Scratch 3.0&lt;/a&gt;, a new version of the creative coding platform for kids. The latest updates include:&lt;/div&gt;
&lt;div&gt;
&lt;div style="text-align: justify;"&gt;
•    extensions for LEGO robotics, Makey Makey, micro:bit, Google Translate, and Amazon Text-to-Speech;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
•    an ideas section with new video tutorials and inspiration for activities;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
•    full coding curricula from Raspberry Pi Code Club, Google CS First, and the ScratchEd Creative Computing Curriculum Guide;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
•    new characters, sounds, and backgrounds, and improved paint and sound editing tools; and&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
•    compatibility with all current browsers and a wide variety of touch devices like tablets, as well as an offline version.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Over the past decade, 35 million kids in over 150 countries around the world have used Scratch to create their own animations, games, and other interactive projects while learning the basics of coding. Scratch is used in schools, libraries, and homes across the globe, giving parents and educators the tools to build coding literacy while helping kids gain confidence with new technologies in a fun, creative environment.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“As kids create and share projects with Scratch, they learn to think creatively, reason systematically, and work collaboratively — essential skills for everyone in today’s society,” says Mitchel Resnick, the LEGO Papert Professor of Learning Research at the MIT Media Lab and director of the Lifelong Kindergarten group, where Scratch was created.&lt;br /&gt;
&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_Nkt3usYLNMQQFXhUKXBAlpR4YAx7Utd0mN-WnAAp_pS1QaltEfUM1NC9yX0EwqMgf2TUfArWvjLWzQ5pabqvQW2Hwa0f2iuITWQcBdTbW4t5ocvQc-Ind7UmbOiX9vkswxbh_ggF8E7C/s1600/Scratch3.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="426" data-original-width="639" height="425" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_Nkt3usYLNMQQFXhUKXBAlpR4YAx7Utd0mN-WnAAp_pS1QaltEfUM1NC9yX0EwqMgf2TUfArWvjLWzQ5pabqvQW2Hwa0f2iuITWQcBdTbW4t5ocvQc-Ind7UmbOiX9vkswxbh_ggF8E7C/s640/Scratch3.jpg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;br /&gt;
Scratch is founded on the constructionist learning theory developed by Seymour Papert, one of the Media Lab’s founding faculty members. Resnick, a protégé and longtime thought partner of Papert’s, brings those constructionist tenets into every aspect of the Lifelong Kindergarten group’s work. Resnick has distilled his vision of creative learning into his principles of &lt;a href="https://mitpress.mit.edu/books/lifelong-kindergarten" rel="nofollow" target="_blank"&gt;Projects, Passion, Peers, and Play&lt;/a&gt; — a credo that also serves as a mission statement for Scratch.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The Scratch coding tools are integrated into a vibrant online community — a global online forum and playground where kids can collaborate on projects, offer comments and feedback, and find like-minded peers with whom to create and play. With 3.0, the Scratch team of developers, moderators, and designers has gone all in on the community’s capabilities and potential, drawing on experiences from Scratchers who have shared their personal stories of making friends, discovering passions, and finding a sense of belonging.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The new version optimizes the platform’s collaborative and interactive suite of tools; for example, new language translation blocks allow for greater cross-cultural connections.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“Scratch 3.0 expands how, what, and where kids can create with code,” says Resnick. “We can’t wait to see what kids create with Scratch 3.0.”&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://img.youtube.com/vi/R_Hc_eJ-ldY/default.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Cherthala, Kerala, India</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">9.6836368 76.336537700000008</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">9.6210243 76.25585670000001 9.7462493 76.4172187</georss:box><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>Face-Scanning AI Identifies Rare Genetic Disorders</title><link>https://www.engineeringinsights.in/2019/01/face-scanning-ai-identifies-rare.html</link><category>Deep Learning</category><category>Machine-Learning</category><category>Research</category><pubDate>Sun, 13 Jan 2019 09:58:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-1072120502420982742</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;h2&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;Deep learning algorithms spot genetic disorders better than doctors can by analyzing a patient's facial features&lt;/span&gt;&lt;/h2&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;/div&gt;
&lt;br /&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1ZyOSr53e6PDoNZIGZVFP6eOP7DSjW0PICkwWay5iYkJ1gl0S5kWoWflkf_pke-7jWnz_IzALGC9oFqijlOkyzquPE5I1244CCOU1sau4mXN4SSbgAzPc0Tz0mv6KIGCeQ0uXGCpA8Dr_/s1600/MzIwODAxMQ1.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="930" data-original-width="1240" height="300" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1ZyOSr53e6PDoNZIGZVFP6eOP7DSjW0PICkwWay5iYkJ1gl0S5kWoWflkf_pke-7jWnz_IzALGC9oFqijlOkyzquPE5I1244CCOU1sau4mXN4SSbgAzPc0Tz0mv6KIGCeQ0uXGCpA8Dr_/s400/MzIwODAxMQ1.jpg" width="400" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;
The photograph is cropped close on the face of four-year-old Yael, who is smiling and looking as healthy as can be. But a computer analysis of her features says something’s not right. She has MR XL Bain Type, the computer predicts—a very rare syndrome that causes a wide range of health problems.&lt;br /&gt;&lt;br /&gt;It turned out that the computer was right. &lt;br /&gt;&lt;br /&gt;Yael is one of thousands of children who have contributed to the development of an artificial intelligence system called DeepGestalt that can identify rare genetic disorders based on facial features alone. The system, built by Boston-based FDNA, analyzes photographs of faces using computer vision and deep learning algorithms. &lt;br /&gt;&lt;br /&gt;In an article published this week in the journal Nature Medicine, FDNA announced the results of a study of DeepGestalt involving 17,000 children, with more than 200 different syndromes among them. The system outperformed clinicians’ ability to identify disease in three separate experiments.&lt;br /&gt;&lt;br /&gt;In its best performance, the AI system correctly distinguished between different subtypes of the genetic disorder Noonan syndrome in 64 percent of the cases. Clinicians looking at images of people with Noonan syndrome in previous studies identified the disease correctly in only 20 percent of the cases.&lt;br /&gt;&lt;br /&gt;Combined with DNA sequencing, DeepGestalt could prove useful in helping to identify disease, says Yaron Gurovich, chief technology officer at FDNA. “Some people call it deep phenotyping,” he says. “It’s the ability to get accurate and deep insights on a person and link them correctly to genes that were found as problematic in a [DNA] sequencing process.” &lt;br /&gt;&lt;br /&gt;The tool could also help standardize the methods doctors use when looking for visual signs of disease, the study authors say. Trying to describe why a person’s facial features are phenotypic expressions of a disease can be challenging. 
“It’s like when you look at a child and you look at the mother and you know they’re related, but you’re not able to say why,” says Gurovich. “That’s the difference between [a doctor] looking at the facial features and our Gestalt algorithm. It finds a link that we can’t really describe.”&lt;br /&gt;&lt;br /&gt;For example, people with Cornelia de Lange syndrome tend to have a small nose, arched eyebrows, and an atypical mouth. But other syndromes, such as the one Yael has, manifest in different ways or aren’t so readily apparent. (In psychology, Gestalt theory "emphasizes that the whole of anything is greater than its parts," according to Britannica.) &lt;img height="227" src="https://spectrum.ieee.org/image/MzIwODAzNA.jpeg" width="640" /&gt;Image: FDNA/Nature Medicine. This heat map illustrates facial features that influence the algorithms' predictions.&lt;br /&gt;&lt;br /&gt;How the algorithms accomplish the task is a black box—a frustrating problem in many AI systems. To get a peek into the algorithms’ methods, the researchers created a color-coded map of the “hot” areas of the face—those that influence the computer’s predictions. It provides a “visualization for our users to try to look inside the black box and understand what the algorithm thinks and how it chose its results,” says Gurovich.&lt;br /&gt;&lt;br /&gt;FDNA has analyzed more than 150,000 cases to date. The company amassed its database by building a community platform called Face2Gene that clinical geneticists can use for free. The doctors upload images into the system (with consent from the patient) and in return get to use the platform to help them narrow down the disease possibilities of their patients. &lt;br /&gt;&lt;br /&gt;The system provides the doctors with a short list of about 10 possible syndromes the patient might have—not so much a diagnosis, but an aid to help the doctors narrow down the possibilities. 
Gurovich says 70 percent of clinical geneticists worldwide are using the Face2Gene system. &lt;br /&gt;&lt;br /&gt;Those clinicians are getting reliable results, according to the new study. In a fourth experiment, DeepGestalt analyzed 502 images and generated a suggested list of 10 potential syndromes. The list included the patient’s actual disease 91 percent of the time, according to the researchers.&lt;br /&gt;&lt;br /&gt;In the paper, Gurovich and his colleagues warn of the potential for misuse of the images. “Unlike genomic data, facial images are easily accessible. Payers or employers could potentially analyze facial images and discriminate based on the probability of individuals having pre-existing conditions or developing medical complications,” the authors wrote. They suggest implementing monitoring strategies, such as recording digital footprints on a blockchain, to prevent abuse.&lt;br /&gt;&lt;br /&gt;FDNA’s system did not require regulatory approval from the U.S. Food and Drug Administration because it’s considered a reference tool, according to the company.&lt;br /&gt;&lt;br /&gt;Yael, who lives in Israel, has become the face of FDNA, with her picture on the home page of the company’s website. An FDNA spokesperson says Yael’s parents are actively searching for more people like Yael who have MR XL Bain Type. There is no treatment yet available for the disease&lt;span style="font-family: Times, &amp;quot;Times New Roman&amp;quot;, serif; font-size: large;"&gt;.&lt;/span&gt;&lt;/div&gt;
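The color-coded “hot area” map described above is, in spirit, an occlusion-sensitivity map, a standard way to probe a black-box image classifier: hide one region of the image at a time and see how much the model’s confidence drops. A minimal sketch of that idea (the model function, patch size, and inputs here are illustrative assumptions, not FDNA’s actual method):

```python
import numpy as np

def occlusion_heatmap(image, predict_fn, patch=8):
    """Occlusion-sensitivity map: zero out each patch of the image and
    record how much the model's confidence in its original top class
    drops. predict_fn maps an image array to class probabilities."""
    base = predict_fn(image)
    top = int(np.argmax(base))
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    heat = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            masked = image.copy()
            masked[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch] = 0.0
            # A larger confidence drop means this region mattered more.
            heat[i, j] = base[top] - predict_fn(masked)[top]
    return heat
```

Regions with large values in the returned grid are the “hot” areas that most influenced the prediction, which is what the published heat map visualizes.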
&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg1ZyOSr53e6PDoNZIGZVFP6eOP7DSjW0PICkwWay5iYkJ1gl0S5kWoWflkf_pke-7jWnz_IzALGC9oFqijlOkyzquPE5I1244CCOU1sau4mXN4SSbgAzPc0Tz0mv6KIGCeQ0uXGCpA8Dr_/s72-c/MzIwODAxMQ1.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Cherthala, Kerala, India</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">9.6836368 76.336537700000008</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">9.6210243 76.25585670000001 9.7462493 76.4172187</georss:box><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>New 3-D chip combines computing and data storage.</title><link>https://www.engineeringinsights.in/2018/12/new-3-d-chip-combines-computing-and.html</link><category>Inventive Inventions</category><category>Material Science and Engineering</category><category>Trending Technologies</category><pubDate>Tue, 11 Dec 2018 00:45:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-1582200020966829584</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIgMNuVTFDOLlR3fibJ6EQvbk_joNVkqq6GO2lHpPucZBp93esDJTtcfYCp34eDdB2j3zk7uLOrZDrv_cCvm-Nlt-3SuyRs5xU2mNV-X_lOSZaey2VdYDqMqICB8eKgetIlnEhZtIFppXb/s1600/MIT-NanotechChip_0.jpg" /&gt;&lt;br /&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;&lt;/span&gt;&lt;br /&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.&lt;/span&gt;&lt;/div&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature, by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”&amp;nbsp;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
This work was funded by the Defense Advanced Research Projects Agency, the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.&lt;/div&gt;
&lt;/span&gt;&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiIgMNuVTFDOLlR3fibJ6EQvbk_joNVkqq6GO2lHpPucZBp93esDJTtcfYCp34eDdB2j3zk7uLOrZDrv_cCvm-Nlt-3SuyRs5xU2mNV-X_lOSZaey2VdYDqMqICB8eKgetIlnEhZtIFppXb/s72-c/MIT-NanotechChip_0.jpg" width="72"/><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>Engineers produce smallest 3-D transistor yet.</title><link>https://www.engineeringinsights.in/2018/12/engineers-produce-smallest-3-d.html</link><category>Electronics</category><category>Nanoscience and Nanotechnology</category><category>Trending Technologies</category><pubDate>Tue, 11 Dec 2018 00:32:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-73464782630603455</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;div class="separator" style="clear: both;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;b&gt;&lt;i&gt;Process that modifies semiconductor material atom by atom could enable higher-performance electronics&lt;/i&gt;&lt;/b&gt;&lt;/span&gt;&lt;span style="background-color: white; color: #222222; font-family: nimbus-sans, sans-serif, Arial, Verdana; font-size: 26.25px; text-align: start;"&gt;&lt;i&gt;.&lt;/i&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmFxbs31Gdvk-kUVlv7fF15Dn3otBB21ZxHWElx6uyUfQqKCGtkYW9CorkgZkjAwBtdIRu-wUOQ2tt_OUtj0HAHL29LSPqJ4EuK3VbwDq26U8swb73o-i6sYMJJbHb59SaLb_IiweAyC-8/s1600/MIT-Atomic-Fabrication_2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="426" data-original-width="639" height="425" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmFxbs31Gdvk-kUVlv7fF15Dn3otBB21ZxHWElx6uyUfQqKCGtkYW9CorkgZkjAwBtdIRu-wUOQ2tt_OUtj0HAHL29LSPqJ4EuK3VbwDq26U8swb73o-i6sYMJJbHb59SaLb_IiweAyC-8/s640/MIT-Atomic-Fabrication_2.jpg" width="640" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, &amp;quot;Times New Roman&amp;quot;, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, &amp;quot;Times New Roman&amp;quot;, serif; font-size: large;"&gt;Researchers from MIT and the University of Colorado have fabricated a 3-D transistor that’s less than half the size of today’s smallest commercial models. To do so, they developed a novel microfabrication technique that modifies semiconductor material atom by atom.&lt;/span&gt;&lt;/div&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The inspiration behind the work was to keep up with Moore’s Law, an observation made in the 1960s that the number of transistors on an integrated circuit doubles about every two years. To adhere to this “golden rule” of electronics, researchers are constantly finding ways to cram as many transistors as possible onto microchips. The newest trend is 3-D transistors that stand vertically, like fins, and measure about 7 nanometers across — tens of thousands of times thinner than a human hair. Tens of billions of these transistors can fit on a single microchip, which is about the size of a fingernail.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
As described in a paper presented at this week’s IEEE International Electron Devices Meeting, the researchers modified a recently invented chemical-etching technique, called thermal atomic level etching (thermal ALE), to enable precision modification of semiconductor materials at the atomic level. Using that technique, the researchers fabricated 3-D transistors that are as narrow as 2.5 nanometers and more efficient than their commercial counterparts.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Similar atomic-level etching methods exist today, but the new technique is more precise and yields higher-quality transistors. Moreover, it repurposes a common microfabrication tool used for depositing atomic layers on materials, meaning it could be rapidly integrated. This could enable computer chips with far more transistors and greater performance, the researchers say.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
“We believe that this work will have great real-world impact,” says first author Wenjie Lu, a graduate student in MIT’s Microsystems Technology Laboratories (MTL). “As Moore’s Law continues to scale down transistor sizes, it is harder to manufacture such nanoscale devices. To engineer smaller transistors, we need to be able to manipulate the materials with atomic-level precision.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Joining Lu on the paper are: Jesus A. del Alamo, a professor of electrical engineering and computer science and an MTL researcher who leads the Xtreme Transistors Group; recent MIT graduate Lisa Kong ’18; MIT postdoc Alon Vardi; and Jessica Murdzek, Jonas Gertsch, and Professor Steven George of the University of Colorado.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;b&gt;Atom by atom&lt;/b&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Microfabrication involves deposition (growing film on a substrate) and etching (engraving patterns on the surface). To form transistors, the substrate surface gets exposed to light through photomasks with the shape and structure of the transistor. All material exposed to light can be etched away with chemicals, while material hidden behind the photomask remains.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The state-of-the-art techniques for microfabrication are known as atomic layer deposition (ALD) and atomic layer etching (ALE). In ALD, two chemicals are deposited onto the substrate surface and react with one another in a vacuum reactor to form a film of desired thickness, one atomic layer at a time.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Traditional ALE techniques use plasma with highly energetic ions that strip away individual atoms on the material’s surface. But these cause surface damage. These methods also expose material to air, where oxidization causes additional defects that hinder performance.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In 2016, the University of Colorado team invented thermal ALE, a technique that closely resembles ALD and relies on a chemical reaction called “ligand exchange.” In this process, an ion in one compound called a ligand — which binds to metal atoms — gets replaced by a ligand in a different compound. When the chemicals are purged away, the reaction causes the replacement ligands to strip away individual atoms from the surface. Still in its infancy, thermal ALE has, so far, only been used to etch oxides.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In this new work, the researchers modified thermal ALE to work on a semiconductor material, using the same reactor reserved for ALD. They used an alloyed semiconductor material, called indium gallium arsenide (or InGaAs), which is increasingly being lauded as a faster, more efficient alternative to silicon.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The researchers exposed the material to hydrogen fluoride, the compound used for the original thermal ALE work, which forms an atomic layer of metal fluoride on the surface. Then, they poured in an organic compound called dimethylaluminum chloride (DMAC). The ligand-exchange process occurs on the metal fluoride layer. When the DMAC is purged, individual atoms follow.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The technique is repeated over hundreds of cycles. In a separate reactor, the researchers then deposited the “gate,” the metallic element that switches the transistor on or off.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
In experiments, the researchers removed just 0.02 nanometers from the material’s surface at a time. “You’re kind of peeling an onion, layer by layer,” Lu says. “In each cycle, we can etch away just 2 percent of a nanometer of a material. That gives us super high accuracy and careful control of the process.”&lt;/div&gt;
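At the quoted rate of 0.02 nanometers per cycle, the number of cycles needed for a given etch depth is simple arithmetic; a quick sketch (the 5 nm target is an illustrative number, not a figure from the paper):

```python
import math

ETCH_PER_CYCLE_NM = 0.02  # roughly 2 percent of a nanometer per cycle, per the text

def cycles_for_depth(target_nm):
    """Full thermal-ALE cycles needed to remove at least target_nm of material."""
    return math.ceil(target_nm / ETCH_PER_CYCLE_NM)

# Illustrative target: thinning a structure by 5 nm takes 250 cycles,
# consistent with the "hundreds of cycles" described above.
cycles = cycles_for_depth(5)  # 250
```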
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Because the technique is so similar to ALD, “you can integrate this thermal ALE into the same reactor where you work on deposition,” del Alamo says. It just requires a “small redesign of the deposition tool to handle new gases to do deposition immediately after etching. … That’s very attractive to industry.”&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;b&gt;Thinner, better “fins”&lt;/b&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Using the technique, the researchers fabricated FinFETs, 3-D transistors used in many of today’s commercial electronic devices. FinFETs consist of a thin “fin” of silicon, standing vertically on a substrate. The gate is essentially wrapped around the fin. Because of their vertical shape, anywhere from 7 billion to 30 billion FinFETs can squeeze onto a chip. As of this year, Apple, Qualcomm, and other tech companies have started using 7-nanometer FinFETs.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Most of the researchers’ FinFETs measured under 5 nanometers in width — a desired threshold across industry — and roughly 220 nanometers in height. Moreover, the technique limits the material’s exposure to oxygen, which causes defects that render the transistors less efficient.&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
The device performed about 60 percent better than traditional FinFETs in “transconductance,” the researchers report. Transistors convert a small input voltage at the gate into a current that switches the transistor on or off, processing the 1s (on) and 0s (off) that drive computation. Transconductance measures how much current the device delivers for a given change in that input voltage.&lt;/div&gt;
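&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
A minimal sketch of that definition, using hypothetical current and voltage values; only the roughly 60 percent improvement figure comes from the article.&lt;/div&gt;

```python
# Transconductance is the ratio of output-current change to input-voltage
# change, gm = delta_I / delta_V (units: siemens). All device values here
# are hypothetical, for illustration only.
def transconductance_siemens(delta_current_amps, delta_voltage_volts):
    return delta_current_amps / delta_voltage_volts

baseline_gm = transconductance_siemens(1.0e-3, 0.5)  # a made-up FinFET: 2 mS
improved_gm = 1.6 * baseline_gm  # roughly 60 percent higher, per the report
```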
&lt;div style="text-align: justify;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
Limiting defects also leads to a higher on-off contrast, the researchers say. Ideally, you want high current flowing when the transistors are on, to handle heavy computation, and nearly no current flowing when they’re off, to save energy. “That contrast is essential in making efficient logic switches and very efficient microprocessors,” del Alamo says. “So far, we have the best ratio [among FinFETs].”&lt;/div&gt;
&lt;/span&gt;&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmFxbs31Gdvk-kUVlv7fF15Dn3otBB21ZxHWElx6uyUfQqKCGtkYW9CorkgZkjAwBtdIRu-wUOQ2tt_OUtj0HAHL29LSPqJ4EuK3VbwDq26U8swb73o-i6sYMJJbHb59SaLb_IiweAyC-8/s72-c/MIT-Atomic-Fabrication_2.jpg" width="72"/><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>Micrometer-Scale Mechanical Switches Work at Just 50 Millivolts</title><link>https://www.engineeringinsights.in/2018/12/micrometer-scale-mechanical-switches.html</link><category>Instrumentation</category><category>MEMS</category><category>Trending Technologies</category><pubDate>Fri, 7 Dec 2018 20:52:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-6060352290598746432</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;span style="font-size: large;"&gt;&lt;i&gt;Energy harvesting IoT chips could compute with low power relays.&lt;/i&gt;&lt;/span&gt;&lt;div&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;div style="text-align: left;"&gt;
&lt;img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgq3-lwL65oVD30SgenFdcMv_9kJ_p7TRmtH4BuseFDrFqpeMFPdogFAldpUWmsJvWQf8UcEmCdf6hhloXcttliRjXfyEJleGFY2oWSuTMVOibSKUIUrkZQtowdmaij2zKp2yTa8TNNkyUJ/s640/MzE4NjI5Ng.jpg" /&gt;&lt;div&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, &amp;quot;Times New Roman&amp;quot;, serif; font-size: large;"&gt;Experts dream that one day much of the Internet of Things (IoT) will power itself. But the trickle of energy most prototype systems can gather from the environment through ambient heat, light, radio waves, or even the &lt;/span&gt;&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;metabolism of bacteria&lt;/span&gt;&lt;span style="font-family: Times, &amp;quot;Times New Roman&amp;quot;, serif; font-size: large;"&gt; don’t easily give you enough voltage to power today’s transistors.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;One solution: ditch the transistors in favor of micrometer-scale mechanical switches. According to research presented this week at the IEEE International Electron Device Meeting, nanoelectromechanical (NEM) relays can switch using just 50 millivolts, that’s about 1/15th of what’s used on today’s processors.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;An inherent property of CMOS transistors called the subthreshold slope sets a lower limit to how little voltage you can use to turn a transistor on, explains Alice Ye, a graduate student at University of California, Berkeley in the laboratory of IEEE Fellow Tsu-Jae King Liu. But as manufacturers push closer to this limit, it becomes harder to turn transistors completely off. That is, current leaks across them even when they’re supposed to be turned off, wasting power.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;“Ideally, you want a device with close to no off-state leakage and zero subthreshold swing,” says Ye. And, ideally, that’s what a NEM relay can deliver.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;Ye presented research on relays that come closer to that ideal than ever before. The relays are basically thin, square platforms suspended by springs. Voltage applied to the platform—called the gate to mirror a transistor’s parts—pulls the platform down, contacting two sets of electrodes and allowing current to flow. Remove the voltage, and the gate springs back up, breaking the connection.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;Liu’s lab has been researching NEMS relays for more than a decade, and the original versions were much less concerned with low-voltage operation. But in the past few years they have been working toward driving the supply voltage as low as it can go. That’s involved two innovations. The first was to “bias” the NEMS body. That is, they set a steady, unchanging voltage beneath the device. With this bias voltage set, it takes much less voltage on the gate to cause the relay to snap down onto the contacts.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;The second innovation had to do with the contacts. Once the gate has slammed down on them, the metal-metal contact requires a bit of extra force to break. In practice this means that a relay that switches on at 200 millivolts, might not turn off until you reduce the voltage to 100 millivolts. To reduce this difference, called hysteresis voltage, Liu’s team first redesigned the switch to have two contacts instead of four. They also added a step to the manufacturing process that coats the surfaces in a single-molecule thick layer of lubricant. “It’s similar to Teflon so it has very low adhesion,” says Ye.&lt;/span&gt;&lt;/div&gt;
&lt;div&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;Combined, these reduced hysteresis voltage to an acceptable level, but at a cost. Instead of sharply switching on an off, the device now has a slight subthreshold swing, because the contacts have to squish the lubricant layer. Even so, the resulting devices could operate at 50 millivolts and be combined to form several types of logic gates.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;Relays lend themselves to a different form of logic than CMOS transistors. Called pass-gate logic, it requires fewer devices to achieve the same output. Using earlier versions of the devices her group built multiple-gate systems including a 32-bit adder. “We know we can make these very complex,” says Liu.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;Circuits made from NEMS relays have other advantages besides extremely low-voltage requirements, says Liu. For one, their switching characteristics should be stable over a wider range of temperatures than silicon systems. They also are inherently tolerant of radiation.&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, Times New Roman, serif; font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: Times, &amp;quot;Times New Roman&amp;quot;, serif; font-size: large;"&gt;Liu’s team’s immediate next steps are to further reduce the relay’s operating voltage down to 10 mv. “I’m pretty optimistic” about this goal, says Liu. They are also working to integrate relays into standard CMOS chips. To do this, they’ve designed the relays so they can be built vertically to fit within the dozen or so levels of interconnect wiring that are stacked above the silicon in modern processors. Such hybrid systems could continually operate at a low level and then engage the main processor when triggered by the right event.&lt;/span&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgq3-lwL65oVD30SgenFdcMv_9kJ_p7TRmtH4BuseFDrFqpeMFPdogFAldpUWmsJvWQf8UcEmCdf6hhloXcttliRjXfyEJleGFY2oWSuTMVOibSKUIUrkZQtowdmaij2zKp2yTa8TNNkyUJ/s72-c/MzE4NjI5Ng.jpg" width="72"/><georss:featurename xmlns:georss="http://www.georss.org/georss">Kochi, Kerala, India</georss:featurename><georss:point xmlns:georss="http://www.georss.org/georss">9.9312328 76.267304100000047</georss:point><georss:box xmlns:georss="http://www.georss.org/georss">9.4307463000000009 75.621857100000042 10.4317193 76.912751100000051</georss:box><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>Analog Electronics | Behzad Razavi</title><link>https://www.engineeringinsights.in/2018/11/analog-electronics-behzad-razavi.html</link><category>Tutorial Spot</category><pubDate>Sun, 25 Nov 2018 17:08:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-7764351325053246848</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;div dir="ltr" trbidi="on"&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5Pf10DAd7PZr0vW56FGG31GLtyNh3x_ozYWcbpkHsOAcT7zHC7JzsDsTmiEXKUNhP52cdTr1OGavpe_wSWUUUY8CAaBGjNeaWthUnz9gI1WfymffW_Z3VSRQ25yXVGVhaQeHFjqAhtuq3/s1600/1244112-P3F0H4-156.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="261" data-original-width="580" height="144" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5Pf10DAd7PZr0vW56FGG31GLtyNh3x_ozYWcbpkHsOAcT7zHC7JzsDsTmiEXKUNhP52cdTr1OGavpe_wSWUUUY8CAaBGjNeaWthUnz9gI1WfymffW_Z3VSRQ25yXVGVhaQeHFjqAhtuq3/s320/1244112-P3F0H4-156.jpg" width="320" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;&lt;/span&gt;&lt;br /&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;Get an insight into the world of Digital electronics.These video classes have been developed by &lt;i&gt;&lt;b&gt;Behzad Razavi&lt;/b&gt;&lt;/i&gt;&amp;nbsp;who is a pioneer in the field of electronics. His lectures are considered as a an encyclopedia for those who love electronics. He is also an author of many Academic publications. Happy learning.&lt;/span&gt;&lt;/div&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif; font-size: large;"&gt;
&lt;/span&gt;&lt;style&gt;.embed-container { position: relative; padding-bottom: 56.25%; height: 0; overflow: hidden; max-width: 100%; } .embed-container iframe, .embed-container object, .embed-container embed { position: absolute; top: 0; left: 0; width: 100%; height: 100%; }&lt;/style&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;div class="embed-container"&gt;
&lt;iframe allowfullscreen="" frameborder="0" src="https://www.youtube.com/embed/videoseries?list=PL7qUW0KPfsIIOPOKL84wK_Qj9N7gvJX6v"&gt;&lt;/iframe&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;br /&gt;&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;/div&gt;
&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg5Pf10DAd7PZr0vW56FGG31GLtyNh3x_ozYWcbpkHsOAcT7zHC7JzsDsTmiEXKUNhP52cdTr1OGavpe_wSWUUUY8CAaBGjNeaWthUnz9gI1WfymffW_Z3VSRQ25yXVGVhaQeHFjqAhtuq3/s72-c/1244112-P3F0H4-156.jpg" width="72"/><author>insightsonengineering@gmail.com (Engineering Insights)</author></item><item><title>Analog Electronics TS Series</title><link>https://www.engineeringinsights.in/2018/11/analog-electronics-ts-series.html</link><category>Analog Electronics TS</category><pubDate>Sun, 25 Nov 2018 17:03:00 +0530</pubDate><guid isPermaLink="false">tag:blogger.com,1999:blog-5256235216481034198.post-7220093457781134617</guid><description>&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;div dir="ltr" style="text-align: left;" trbidi="on"&gt;
&lt;link href="https://www.w3schools.com/w3css/4/w3.css" rel="stylesheet"&gt;&lt;/link&gt;
&lt;link href="https://www.w3schools.com/w3css/4/w3.css" rel="stylesheet"&gt;&lt;/link&gt;
    &lt;br /&gt;
&lt;div style="text-align: justify;"&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL3hYz-9E3wmViRrO-cqlPZQBcuQjA1pM_V20fzhDJ-xiqhEQb7LBLT3T3tn46aOMRuHb6DVZV10IKci-1YV1WIvh2aJNGIvS5XIOmZx0y4iLssZL1-r6Gy_X3x9Kv74vxHr2m9eG4-qOU/s320/Analog+Elec.jpg.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="261" data-original-width="580" height="144" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL3hYz-9E3wmViRrO-cqlPZQBcuQjA1pM_V20fzhDJ-xiqhEQb7LBLT3T3tn46aOMRuHb6DVZV10IKci-1YV1WIvh2aJNGIvS5XIOmZx0y4iLssZL1-r6Gy_X3x9Kv74vxHr2m9eG4-qOU/s320/Analog+Elec.jpg.png" width="320" /&gt;&lt;/a&gt;&lt;/div&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;
&lt;span style="font-size: large;"&gt;Get an insight into the world of Analog electronics. Here Engineering Insights introduces you to various learning materials prepared by different individuals/institutions/organizations. In the day of big data, we are ready to help you by rating the materials based on different criterias which a learner always looks for through advanced machine learning techniques. Since we are in budding stage kindly bare the mistakes. Happy learning. &lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-size: large;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div class="card-grid"&gt;
&lt;!-- Copy Section --&gt;

&lt;br /&gt;
&lt;div class="card-wrap"&gt;
&lt;div class="card"&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;/div&gt;
&lt;div class="separator" style="clear: both; text-align: center;"&gt;
&lt;/div&gt;
&lt;a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFV9u6wwf6nDyncZT2XUHcrHVs1BhOw-Qap3v3j1LkRzvKfrwWmHKJf5kWHUvl8N3YPUi-NkLMQ_ZOVTltnWofrSNZO_BTQRWvV2HIZT5sqnT-wW4_O_W4S5YPWes6F_mIIjmMuMf4otyj/s400/Behzad+Analog.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"&gt;&lt;img border="0" data-original-height="400" data-original-width="400" height="132" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjFV9u6wwf6nDyncZT2XUHcrHVs1BhOw-Qap3v3j1LkRzvKfrwWmHKJf5kWHUvl8N3YPUi-NkLMQ_ZOVTltnWofrSNZO_BTQRWvV2HIZT5sqnT-wW4_O_W4S5YPWes6F_mIIjmMuMf4otyj/s200/Behzad+Analog.jpg" width="200" /&gt;&lt;/a&gt;&lt;br /&gt;
&lt;span style="font-size: large;"&gt;&lt;b&gt;Analog&amp;nbsp;&lt;/b&gt;&lt;/span&gt;&lt;br /&gt;
&lt;span style="font-size: large;"&gt;&lt;b&gt;Electronics&lt;/b&gt;&lt;/span&gt;&lt;b style="font-size: x-large;"&gt;&amp;nbsp;&lt;/b&gt;&lt;br /&gt;
&lt;b&gt;by Behzad Razavi&lt;/b&gt;&lt;br /&gt;
&lt;div&gt;
&lt;div style="text-align: right;"&gt;
Beginner &lt;span class="w3-badge w3-teal"&gt;8&lt;/span&gt;|&lt;/div&gt;
&lt;div style="text-align: right;"&gt;
Optimal&amp;nbsp;&amp;nbsp;&lt;span class="w3-badge w3-red"&gt;7&lt;/span&gt;|&lt;/div&gt;
&lt;div style="text-align: right;"&gt;
Detailed&amp;nbsp;&amp;nbsp;&lt;span class="w3-badge w3-teal"&gt;6&lt;/span&gt;|&lt;/div&gt;
&lt;div style="text-align: right;"&gt;
Overall&amp;nbsp; &amp;nbsp;&amp;nbsp;&lt;span class="w3-badge w3-red"&gt;8&lt;/span&gt;|&lt;/div&gt;
&lt;span class="w3-tag w3-grey"&gt;Basics &lt;/span&gt; &lt;span class="w3-tag w3-teal"&gt;Bipolar &lt;/span&gt; &lt;span class="w3-tag w3-grey"&gt;MOS &lt;/span&gt;&lt;br /&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif;"&gt;&lt;br /&gt;&lt;/span&gt;&lt;/div&gt;
&lt;div style="text-align: justify;"&gt;
&lt;span style="font-family: &amp;quot;times&amp;quot; , &amp;quot;times new roman&amp;quot; , serif;"&gt;Analog electronics are electronic systems with a continuously variable signal, in contrast to digital electronics where signals usually take only two levels.&lt;/span&gt;&lt;span style="background-color: white; font-family: &amp;quot;open sans&amp;quot; , sans-serif; font-size: 14px;"&gt;&lt;span style="background-color: white; font-size: 14px;"&gt;.&amp;nbsp;&lt;/span&gt;&lt;/span&gt;&lt;a href="https://www.engineeringinsights.in/2018/11/analog-electronics-behzad-razavi.html" target="_blank"&gt;Learn Now&lt;/a&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;!-- Copy Complete --&gt;



&lt;style&gt;
*, *:after, *:before {
  -webkit-box-sizing: border-box;
  -moz-box-sizing: border-box;
  box-sizing: border-box;
}
body {
  background:#f0f0f0;
}
img {
  max-width:100%;
}



.card-grid {
  width:100%;
}
.card-grid:after {
  content: "";
  display: table;
  clear: both;
}



.card-wrap {
  float:left;
  width: 100%;
  padding:1px;
}
@media (min-width: 100px) {
  .card-wrap {
    width:100%;
  }
}
@media (min-width: 520px) {
  .card-wrap {
    width:100%;
  }
}


.card {
  background-color:white;
  border-radius:2px;
  border:1px solid #ccc;
  border-bottom:2px solid #ccc;
}
.card &gt; div {
  padding:0em 1em;
}
&lt;/style&gt;

&lt;/div&gt;
</description><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" height="72" url="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhL3hYz-9E3wmViRrO-cqlPZQBcuQjA1pM_V20fzhDJ-xiqhEQb7LBLT3T3tn46aOMRuHb6DVZV10IKci-1YV1WIvh2aJNGIvS5XIOmZx0y4iLssZL1-r6Gy_X3x9Kv74vxHr2m9eG4-qOU/s72-c/Analog+Elec.jpg.png" width="72"/><author>insightsonengineering@gmail.com (Engineering Insights)</author></item></channel></rss>