<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ApplySci &#8211; Deep Tech Health + Neurotech</title>
	<atom:link href="https://applysci.com/feed/" rel="self" type="application/rss+xml" />
	<link>https://applysci.com</link>
	<description>DEEP TECH HEALTH + NEUROTECH</description>
	<lastBuildDate>Wed, 26 Mar 2025 15:30:19 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://applysci.com/wp-content/uploads/2017/03/cropped-SCIhighreslogo1-32x32.jpg</url>
	<title>ApplySci &#8211; Deep Tech Health + Neurotech</title>
	<link>https://applysci.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Sponsor ApplySci NYC 2025</title>
		<link>https://applysci.com/test/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Tue, 27 Feb 2024 14:01:19 +0000</pubDate>
				<category><![CDATA[Conference]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8867</guid>

					<description><![CDATA[ApplySci is bringing its neurotech conference back to NYC in 2025. If you are interested in partnering, sponsoring, or speaking, please contact Lisa Weiner Intrator. We are delighted to be back in New York, after 8 years at Stanford and MIT, and hope that you will join our effort.]]></description>
										<content:encoded><![CDATA[
<p>ApplySci is bringing its neurotech conference back to NYC in 2025.  If you are interested in partnering, sponsoring, or speaking, please contact Lisa Weiner Intrator.  We are delighted to be back in New York, after 8 years at Stanford and MIT, and hope that you will join our effort.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Computer vision enhanced sensors improve rehabilitation</title>
		<link>https://applysci.com/computer-vision-enhanced-sensors-improve-rehabilitation/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Mon, 11 Sep 2023 15:32:02 +0000</pubDate>
				<category><![CDATA[Computer Vision]]></category>
		<category><![CDATA[Sensors]]></category>
		<category><![CDATA[computer vision]]></category>
		<category><![CDATA[Featured]]></category>
		<category><![CDATA[rehabilitation]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8853</guid>

					<description><![CDATA[POSTECH professor Sung-Min Park and researcher Sunguk Hong have developed optical sensor technology, integrated with computer vision, to track muscle movements in rehab patients with limited mobility. The sensors are flexible, lightweight, and able to gauge subtle body changes, while overcoming the conventional challenges of soft strain sensors &#8211; inadequate durability due to temperature and [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p><a href="https://international.postech.ac.kr">POSTECH</a> professor <a href="https://eee.postech.ac.kr/professor_type/park-sung-min/">Sung-Min Park</a> and researcher <a href="https://scholar.google.co.kr/citations?user=xzAMkycAAAAJ&amp;hl=ko">Sunguk Hong</a> have developed <a href="https://www.nature.com/articles/s41528-023-00264-1">optical sensor technology, integrated with computer vision, to track muscle movements in rehab patients with limited mobility</a>. </p>



<p>The sensors are flexible, lightweight, and able to gauge subtle body changes, while overcoming the conventional challenges of soft strain sensors &#8211;  inadequate durability due to temperature and humidity.</p>



<p>The combined technologies enable the analysis of microscale optical patterns to sense changes, without relying on electrical signals as conventional sensors do.</p>



<p>The CVOS (computer vision-based optical strain) sensors detect three-axial rotational movements through real-time multiaxial strain mapping. This enables precise recognition of various intricate motions through a single sensor, with AI correction of error factors during signal detection.</p>



]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI decodes brain tumor DNA during surgery</title>
		<link>https://applysci.com/ai-decodes-brain-tumor-dna-during-surgery/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Mon, 14 Aug 2023 12:03:33 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Brain]]></category>
		<category><![CDATA[Cancer]]></category>
		<category><![CDATA[Featured]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8847</guid>

					<description><![CDATA[Kun-Hsing Yu and HMS colleagues used AI to rapidly determine a brain tumor&#8217;s molecular identity during surgery, propelling the development of precision oncology. The tool is CHARM (Cryosection Histopathology Assessment and Review Machine). Currently, genetic sequencing takes days to weeks. Accurate molecular diagnosis during surgery can help a neurosurgeon decide how much brain tissue [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p><a href="https://dbmi.hms.harvard.edu/people/kun-hsing-yu">Kun-Hsing Yu</a> and HMS colleagues<a href="https://www.cell.com/med/fulltext/S2666-6340(23)00189-7"> used AI to rapidly determine a brain tumor&#8217;s molecular identity during surgery</a>, propeling the development of precision oncology.  The tool is CHARM (Cryosection Histopathology Assessment and Review Machine.) Currently, genetic sequencing takes days to weeks.  </p>



<p>Accurate molecular diagnosis during surgery can help a neurosurgeon decide how much brain tissue to remove. Removing too much when the tumor is less aggressive can affect a patient’s neurologic and cognitive function. Removing too little when the tumor is highly aggressive may leave behind malignant tissue that can grow and spread quickly. </p>



<p>The technology will also allow the surgeon to determine whether the patient can benefit from immediate treatment with drug-coated wafers placed directly into the brain at the time of the operation.</p>



<p>The standard intraoperative diagnostic approach used now involves taking brain tissue, freezing it, and examining it under a microscope. A major drawback is that freezing the tissue tends to alter the appearance of cells under a microscope and can interfere with the accuracy of clinical evaluation. Furthermore, the human eye, even when using potent microscopes, cannot reliably detect subtle genomic variations on a slide.</p>



<p>The new AI approach overcomes these challenges and could be particularly valuable in areas with limited access to technology to perform rapid cancer genetic sequencing.</p>



<p>Knowledge of a tumor’s molecular type provides insight about its aggressiveness, behavior, and likely response to various treatments, which can inform post-operative decisions.</p>



<p>The new tool enables during-surgery diagnoses aligned with the WHO classification system for diagnosing and grading the severity of gliomas, which calls for such diagnoses to be made based on a tumor’s genomic profile.</p>



<p>CHARM was developed using 2,334 brain tumor samples from 1,524 people with glioma from three different patient populations. When tested on a never-before-seen set of brain samples, the tool distinguished tumors with specific molecular mutations at 93 percent accuracy and successfully classified three major types of gliomas with distinct molecular features that carry different prognoses and respond differently to treatments.</p>



<p>It successfully captured visual characteristics of the tissue surrounding the malignant cells. It was capable of spotting telltale areas with greater cellular density and more cell death within samples, both of which signal more aggressive glioma types.</p>



<p>CHARM was also able to pinpoint clinically important molecular alterations in a subset of low-grade gliomas, a subtype of glioma that is less aggressive and therefore less likely to invade surrounding tissue. Each of these changes also signals different propensity for growth, spread, and treatment response.</p>



<p>It further connected the appearance of the cells — the shape of their nuclei, the presence of edema around the cells — with the molecular profile of the tumor. This means that the algorithm can pinpoint how a cell’s appearance relates to the molecular type of a tumor.</p>



<p>According to Yu, this ability to assess the broader context around the image renders the model more accurate and closer to how a human pathologist would visually assess a tumor sample.</p>



<p>The researchers said that while the model was trained and tested on glioma samples, it could be successfully retrained to identify other brain cancer subtypes. </p>



<p>Scientists have already designed AI models to profile other types of cancer — colon, lung, breast — but gliomas have remained particularly challenging due to their molecular complexity and huge variation in tumor cells’ shape and appearance.</p>



<p>The CHARM tool would have to be retrained periodically to reflect new disease classifications as they emerge from new knowledge. “Just like human clinicians who must engage in ongoing education and training, AI tools must keep up with the latest knowledge to remain at peak performance,” according to Yu.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Join <a href="http://boston.applysci.com">ApplySci at MIT</a> on September 18, 2023 for the 14th AI + Deep Tech Health + Neurotech conference.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Wearable ultrasound to detect early breast cancer</title>
		<link>https://applysci.com/wearable-ultrasound-to-detect-early-breast-cancer/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Wed, 09 Aug 2023 15:17:50 +0000</pubDate>
				<category><![CDATA[Cancer]]></category>
		<category><![CDATA[Ultrasound]]></category>
		<category><![CDATA[Featured]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8840</guid>

					<description><![CDATA[MIT&#8217;s Canan Dagdeviren has developed a flexible ultrasound patch that attaches to a bra, obtains ultrasound images at a resolution comparable to that of medical imaging centers, and can be used repeatedly. Interval cancers, which develop between regularly scheduled mammograms, account for 20 &#8211; 30 percent of all breast cancers, and tend to be more aggressive. The goal [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p>MIT&#8217;s <a href="https://www.media.mit.edu/people/canand/overview/">Canan Dagdeviren</a> has developed a <a href="https://www.science.org/doi/10.1126/sciadv.adh5325">flexible ultrasound patch that attaches to a bra</a>, obtains ultrasound images at a resolution comparable to that of medical imaging centers, and can be used repeatedly.</p>



<p>Interval cancers, which develop between regularly scheduled mammograms, account for 20 &#8211; 30 percent of all breast cancers, and  tend to be more aggressive. The goal is to frequently screen those most likely to develop interval cancers.</p>



<p>According to Dagdeviren: “We changed the form factor of the ultrasound technology so that it can be used in your home. It’s portable and easy to use, and provides real-time, user-friendly monitoring of breast tissue.”</p>



<p>Piezoelectric material allowed the scanner to be miniaturized into a flexible, 3D-printed patch with honeycomb-shaped openings. Using magnets, it attaches to a bra whose openings allow it to contact the skin. The scanner fits inside a small tracker with six positions, allowing the entire breast to be imaged. It also rotates to take images from different angles.</p>



<p>The device was able to detect 0.3 cm diameter cysts in a 71-year-old woman, with a resolution comparable to that of traditional ultrasound. Tissue was imaged at a depth of up to 8 centimeters.</p>



<p>To see the images, the scanner must connect to an ultrasound machine, like those used in imaging centers. The team is now working on a miniaturized imaging system.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Join ApplySci at MIT on September 18, 2023 for <a href="http://Boston.applysci.com">AI + Deep Tech Health + Neurotech Boston</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI detects diabetes from fatty tissue in chest x-rays</title>
		<link>https://applysci.com/ai-detects-diabetes-from-fatty-tissue-in-chest-x-rays/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Mon, 07 Aug 2023 16:37:53 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Diabetes]]></category>
		<category><![CDATA[Featured]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8836</guid>

					<description><![CDATA[Judy Wawira Gichoya and Emory colleagues have developed an AI model that detects warning signs for diabetes in x-rays collected during routine exams. The signs were also detected in patients who did not meet guidelines for elevated risk. Applying deep learning to images and electronic health record data, the model successfully flagged elevated diabetes risk in a retrospective analysis, often years before patients were [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p><a href="https://winshipcancer.emory.edu/bios/faculty/gichoya-judy.html">Judy Wawira Gichoya</a> and Emory colleagues have developed an <a href="https://www.nature.com/articles/s41467-023-39631-x">AI model that detects warning signs for diabetes in x rays collected during routine exams</a>.  The signs were also detected in patients who do not meet elevated risk guidelines.</p>



<p>Applying deep learning to images and electronic health record data, the model successfully flagged elevated diabetes risk in a retrospective analysis, often years before patients were diagnosed.</p>



<p>Current guidelines suggest screening patients for type 2 diabetes if they are between 35 and 70 years old and have a body mass index (BMI) in the overweight to obese range. This strategy misses a significant number of cases, particularly in racial/ethnic minorities for whom BMI is a less effective predictor of diabetes risk.</p>



<p>Each year, millions of Americans receive chest x-rays for chest pain, difficulty breathing, injury, or before surgeries. While radiologists are not looking for diabetes when they assess these x-rays, the images become part of a patient’s medical record and could be analyzed later for diabetes or other conditions.</p>



<p>The AI model was trained on more than 270,000 x-ray images from 160,000 patients, with deep learning determining the image features that best predicted a later diagnosis of diabetes. Because chest x-rays are not a common way to detect diabetes, the researchers also used explainable AI techniques to determine how and why the model made its determinations. The methods pointed to the location of fatty tissue as important for determining risk, a logic that aligns with recent medical findings that visceral fat in the upper body and abdomen is associated with type 2 diabetes, insulin resistance, hypertension and other conditions.</p>



<p>When the Emory team applied the model to a separate group of nearly 10,000 patients, they found the model predicted risk better than a simple model based on non-image clinical data alone.</p>



<p>In some cases, the chest x-ray warned of high diabetes risk as early as three years before the patient eventually received a diagnosis. The model’s output also provides a numerical risk score that could potentially help clinicians customize the treatment approach for patients.</p>



<p>The research team will now explore how to further validate the model and incorporate it into electronic health record systems so it can provide an alert to physicians to pursue traditional diabetes screening of patients flagged as high risk based on x-ray results.</p>



<p>They’ll then turn to investigating how well chest x-rays can help diagnose other conditions, such as vascular disease, congestive heart failure, and chronic obstructive pulmonary disease.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Join <a href="http://Boston.applysci.com">ApplySci at MIT</a> for the 14th <a href="http://Boston.applysci.com">AI + Deep Tech Health + Neurotech conference</a> on September 18, 2023</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI-based device increases adenoma detection during colonoscopy</title>
		<link>https://applysci.com/ai-based-device-increases-ademona-detection-during-colonoscopy/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Tue, 01 Aug 2023 14:11:24 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Cancer]]></category>
		<category><![CDATA[Featured]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8832</guid>

					<description><![CDATA[MAGENTIQ-COLO is an AI-based, FDA-cleared colonoscopy device that offers a significant increase in Adenoma Detection Rate. Current high rates of missed and undetected adenomas during colonoscopy mean that even regularly screened patients are still at risk of developing colon cancer. A missed polyp can lead to interval cancer, which accounts for approximately 8% [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p><a href="https://www.magentiq.com">MAGENTIQ-COLO</a> is an AI based FDA cleared colonoscopy device which offers a significant increase in Adenoma Detection Rate. </p>



<p>Current high rates of missed and undetected adenomas during colonoscopy mean that even regularly screened patients are still at risk of developing colon cancer. A missed polyp can lead to interval cancer, which accounts for approximately 8% to 10% of all CRC in the U.S., translating to over 13,500 cancer cases that could be prevented every year with better detection.</p>



<p>A 2022 study of 950 patients at 10 hospitals showed that MAGENTIQ-COLO increased ADR by a relative 26%, translating to a 21% decrease in CRC occurrence and a 35% decrease in patient mortality.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Join ApplySci at MIT on September 18, 2023 for <a href="http://Boston.applysci.com">AI + Deep Tech Health + Neurotech Boston</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Biosensor detects misfolded proteins in Parkinson&#8217;s and Alzheimer&#8217;s disease</title>
		<link>https://applysci.com/biosensor-detects-misfiled-proteins-in-parkinsons-and-alzheimers-disease/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Wed, 19 Jul 2023 12:31:46 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Brain]]></category>
		<category><![CDATA[Sensors]]></category>
		<category><![CDATA[EPFL]]></category>
		<category><![CDATA[Featured]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8826</guid>

					<description><![CDATA[Hatice Altug, Hilal Lashuel, and EPFL colleagues have developed ImmunoSEIRA, an AI-enhanced biosensing tool for the detection of misfolded proteins linked to Parkinson’s and Alzheimer’s disease. The researchers also claim that neural networks can quantify disease stage and progression. The technology holds promise for early detection, monitoring, and assessing treatment options. Protein misfolding has been [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p><a href="https://people.epfl.ch/hatice.altug">Hatice Altug</a>, <a href="https://www.epfl.ch/labs/lashuel-lab/page-140696-en-html/">Hilal Lashue</a>, and EPFL colleagues have developed <a href="https://www.science.org/doi/10.1126/sciadv.adg9644">ImmunoSEIRA</a>, an AI-enhanced, biosensing tool for the detection of misfolded proteins linked to Parkinson’s and Alzheimer’s disease.  The researchers also claim that neural networks can quantify disease stage and progression.</p>



<p>The technology holds promise for early detection, monitoring, and assessing treatment options.</p>



<p>Protein misfolding has been identified as a key event in disease progression. It is thought that healthy proteins misfold first into oligomers, and then into fibrils in later stages.</p>



<p>According to Lashuel, &#8220;unlike current biochemical approaches which rely on measuring the levels of these molecules, our approach is focused on detecting their abnormal structures. This technology also allows us to differentiate the levels of oligomers and fibrils.&#8221;</p>



<p>The sensor uses gold nanorod arrays with antibodies for specific protein detection, enabling real-time capture and structural analysis of target biomarkers from very small samples. Neural networks identify the presence of specific misfolded protein forms. Lashuel believes that  “since the disease process is tightly associated with changes in protein structure, we believe that structural biomarkers, especially when integrated with other biochemical and neurodegeneration biomarkers, could pave the way for more precise diagnosis and monitoring of disease progression.”</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Join <a href="http://Boston.applysci.com">ApplySci at MIT on September 18, 2023</a>, for a scientist-led day of AI, Deep Tech Health , and Neurotech</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI predicts pancreatic cancer</title>
		<link>https://applysci.com/ai-to-predict-pancreatic-cancer/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Tue, 30 May 2023 15:21:41 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Cancer]]></category>
		<category><![CDATA[Featured]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8818</guid>

					<description><![CDATA[Harvard professor Chris Sander used clinical data from 6 million patients in Denmark’s national health system and 3 million in the U.S. VA system to train an AI model to predict the occurrence of pancreatic cancer within 3, 6, 12, and 36 months. This could allow wider screening for the aggressive disease, which is often [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p>Harvard professor <a href="https://www.dfhcc.harvard.edu/insider/member-detail/member/chris-sander-phd/">Chris Sander</a> used clinical data from 6 million patients in Denmark’s national health system and 3 million in the U.S. VA system to train an <a href="https://www.nature.com/articles/s41591-023-02332-5">AI model to predict the occurrence of pancreatic cancer</a> within 3, 6, 12, and 36 months.  This could allow wider screening for the aggressive disease, which is often discovered at a late stage, including in people with no known genetic risk.</p>



<p>The researchers believe that imaging, genetic, and wearable-device data could further improve this tool, and that unrelated disease histories, including diabetes and substance abuse, have already improved its accuracy.</p>



<hr class="wp-block-separator has-alpha-channel-opacity"/>



<p>Join ApplySci at MIT for the 14th <a href="http://Boston.applysci.com">AI + Deep Tech + Neurotech</a> conference on the future of healthcare on September 18, 2023</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Brain-spine interface allows paraplegic man to walk</title>
		<link>https://applysci.com/brain-spine-interface-allows-paraplegic-man-to-walk/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Thu, 25 May 2023 19:12:11 +0000</pubDate>
				<category><![CDATA[BCI]]></category>
		<category><![CDATA[Brain]]></category>
		<category><![CDATA[Featured]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8809</guid>

					<description><![CDATA[EPFL professor Grégoire Courtine has created a &#8220;digital bridge&#8221; which has allowed a man, whose spinal cord damage left him with paraplegia, to walk. The brain–spine interface builds on previous work, which combined intensive training and a lower spine stimulation implant. Gert-Jan Oskam participated in this trial, but stopped improving after three years. The new [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p>EPFL professor <a href="https://people.epfl.ch/gregoire.courtine/?lang=en">Grégoire Courtine</a> has created a &#8220;digital bridge&#8221; which has allowed a man, whose spinal cord damage left him with paraplegia, to walk.</p>



<p>The  <a href="https://www.nature.com/articles/s41586-023-06094-5">brain–spine interface</a> builds on previous work, which combined intensive training and a lower spine stimulation implant.  Gert-Jan Oskam participated in this trial, but stopped improving after three years.  The new system pairs the existing implant with two disc-shaped skull implants, with two 64-electrode grids resting against the membrane covering the brain.</p>



<p>When Oskam thinks about walking, the skull implants detect electrical activity in the cortex, and wirelessly decode and transmit it to a computer in his backpack, activating the spinal pulse generator.</p>



<p>The previous device used “pre-programmed stimulation” that generated robotic stepping movements. Now, Oskam has full control over the parameters of stimulation, allowing him to stop, walk, and climb stairs.</p>



<p>After 40 rehabilitation sessions with the brain–spine interface, Oskam regained the ability to voluntarily move his legs and feet, which was not possible with the previous implant. This suggests that the new device prompted further recovery in nerve cells that were not completely severed during his injury. Oskam can also walk short distances without the device, using crutches.</p>



<p>Courtine is now researching the ability of a similar device to restore arm movement.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Hydrogel-coated sutures sense inflammation, can deliver drugs and stem cells</title>
		<link>https://applysci.com/hydrogel-coated-sutures-sense-inflammation-can-deliver-drugs-and-stem-cells/</link>
		
		<dc:creator><![CDATA[lisaweiner]]></dc:creator>
		<pubDate>Fri, 19 May 2023 16:32:22 +0000</pubDate>
				<category><![CDATA[Sensors]]></category>
		<category><![CDATA[Stem Cells]]></category>
		<category><![CDATA[Crohn&#039;s]]></category>
		<category><![CDATA[Featured]]></category>
		<category><![CDATA[Giovanni Traverso]]></category>
		<guid isPermaLink="false">https://applysci.com/?p=8805</guid>

					<description><![CDATA[Giovanni Traverso has designed tough, absorbable, hydrogel-coated sutures, which in addition to holding post-surgery or wound-affected tissue in place, can sense inflammation and deliver drugs, including monoclonal antibodies. They could also be used to deliver stem cells. The sutures were created from pig tissue, “decellularized” with detergents, to reduce the chances of inducing inflammation in [&#8230;]]]></description>
										<content:encoded><![CDATA[



<p><a href="https://engineering.mit.edu/faculty/giovanni-traverso/">Giovanni Traverso</a> has designed <a href="https://www.cell.com/matter/fulltext/S2590-2385(23)00201-1">tough, absorbable, hydrogel-coated sutures, which in addition to holding post-surgery or wound-affected tissue in place, can sense inflammation and deliver drugs, including monoclonal antibodies. They could also be used to deliver stem cells.</a></p>



<p>The sutures were created from pig tissue, “decellularized” with detergents, to reduce the chances of inducing inflammation in host tissue. This process leaves behind a cell-free &#8220;De-gut&#8221; material, which contains structural proteins such as collagen, and other biomolecules found in the extracellular matrix that surrounds cells. Their strength is comparable to commercially available catgut sutures, and the De-gut sutures induce much less of an immune response from surrounding tissue.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
