<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Digital Research</title>
	<atom:link href="https://blogs.nottingham.ac.uk/digitalresearch/feed/" rel="self" type="application/rss+xml" />
	<link>https://blogs.nottingham.ac.uk/digitalresearch/</link>
	<description>We help researchers make the best use of digital technology.</description>
	<lastBuildDate>Sun, 20 Jul 2025 16:38:19 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.7.2</generator>
	<item>
		<title>Introducing Ada, the University of Nottingham’s new, most powerful HPC service</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2024/06/07/introducing-ada-the-university-of-nottinghams-new-most-powerful-hpc-service/</link>
		
		<dc:creator><![CDATA[Digital Research]]></dc:creator>
		<pubDate>Fri, 07 Jun 2024 07:40:12 +0000</pubDate>
				<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[Ada]]></category>
		<category><![CDATA[HPC]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=14132</guid>

					<description><![CDATA[<p>As one of the UK’s leading research universities, we are renowned for the strength, quality and breadth of our research and teaching capabilities. To further bolster our commitment to research excellence, we are delighted to share that we have implemented a new, state-of-the-art High-Performance Computing (HPC) system. The name of the new facility, “Ada”, is ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2024/06/07/introducing-ada-the-university-of-nottinghams-new-most-powerful-hpc-service/">Introducing Ada, the University of Nottingham’s new, most powerful HPC service</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="300" height="171" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-300x171.png" class="attachment-medium size-medium wp-post-image" alt="" style="float:right; margin:0 0 10px 10px;" decoding="async" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-300x171.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-1024x585.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-768x439.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-1536x878.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image.png 1792w" sizes="(max-width: 300px) 100vw, 300px" /><p>As one of the UK’s leading research universities, we are renowned for the strength, quality and breadth of our research and teaching capabilities.</p>
<p>To further bolster our commitment to research excellence, we are delighted to share that we have implemented a new, state-of-the-art High-Performance Computing (HPC) system.</p>
<p>The name of the new facility, “Ada”, is a nod to Ada Lovelace, the gifted mathematician recognised for creating the first computer program. The name is a fitting tribute to this female pioneer, who also had ancestral ties to Nottinghamshire.</p>
<p>Ada replaces our former HPC facility, Augusta, which was in place for five years.</p>
<p><img fetchpriority="high" decoding="async" class="alignnone size-large wp-image-14133" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-1024x585.png" alt="" width="675" height="386" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-1024x585.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-300x171.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-768x439.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image-1536x878.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2024/06/Ada-launch-blog-image.png 1792w" sizes="(max-width: 675px) 100vw, 675px" /></p>
<p>Professor Jonathan Hirst, Professor of Computational Chemistry comments: “Technology continues to evolve at an extraordinary pace and it is crucial to ensure our HPC infrastructure can meet growing demand accordingly. Ada will be available for experimentation beyond larger, funded projects, leading to more ambitious research and resulting funding, more high-quality outputs and increased inter-disciplinary collaboration”.</p>
<p>Professor Phil Williams, Professor of Biophysics adds: “The new facility will enhance research capabilities across a wide range of disciplines, fuelling innovation and growth in strategic areas of the University, such as quantum technologies, nanoscience, artificial intelligence, imaging, and bioinformatics. It’s a really exciting step forward for the University of Nottingham”.</p>
<h2>What is an HPC facility?</h2>
<p>The on-site <a href="https://www.nottingham.ac.uk/dts/researcher/compute-services/high-performance-computing.aspx">High-Performance Computer (HPC)</a> is available to any UoN research student or member of academic staff.</p>
<p>The HPC allows users to process, analyse and store increasingly large amounts of data and perform complex calculations at high speed to enable high quality and valuable research outputs.</p>
<p>The new HPC service will also support computationally intensive research performed in areas of the university that are not traditional users of high-performance computing.</p>
<p>Strategic partner OCF was responsible for the design, development, and installation of the cutting-edge HPC facility.</p>
<h2>Access, guides, and more information</h2>
<p>For access, guides, and more information, please visit our <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Ada-High-Performance-Compute.-Ada.aspx">dedicated SharePoint pages</a>.</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2024/06/07/introducing-ada-the-university-of-nottinghams-new-most-powerful-hpc-service/">Introducing Ada, the University of Nottingham’s new, most powerful HPC service</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Accelerating the University’s research ambitions with a more powerful HPC service</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2023/05/15/accelerating-the-universitys-research-ambitions-with-a-more-powerful-hpc-service/</link>
		
		<dc:creator><![CDATA[Digital Research]]></dc:creator>
		<pubDate>Mon, 15 May 2023 15:06:07 +0000</pubDate>
				<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[HPC]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=14116</guid>

					<description><![CDATA[<p>The University’s High Performance Computing (HPC) service is a research facility available to academic staff and research students, from any School or Faculty, who need computing resources substantially greater than those of a standard PC. Enabling research that changes the world As one of the UK’s leading research universities, UoN is renowned for the ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2023/05/15/accelerating-the-universitys-research-ambitions-with-a-more-powerful-hpc-service/">Accelerating the University’s research ambitions with a more powerful HPC service</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="300" height="200" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/23835dtp-300x200.jpg" class="attachment-medium size-medium wp-post-image" alt="Postgraduates looking at a computer screen" style="float:right; margin:0 0 10px 10px;" decoding="async" loading="lazy" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/23835dtp-300x200.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/23835dtp-1024x681.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/23835dtp-768x511.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/23835dtp.jpg 1180w" sizes="auto, (max-width: 300px) 100vw, 300px" /><p><strong>The University’s High Performance Computing (HPC) service is a research facility available to academic staff and research students, from any School or Faculty, who need computing resources substantially greater than those of a standard PC.</strong></p>
<h3>Enabling research that changes the world</h3>
<p>As one of the UK’s leading research universities, UoN is renowned for the strength of its research, and quality of data, through which ground-breaking scientific discoveries are made and innovations are fuelled.</p>
<p>But, as technology continues to evolve at an exponential pace, it is crucial to ensure our HPC infrastructure can meet growing demand accordingly.</p>
<p>Our current HPC service (Augusta) will reach end of life soon. We are therefore in the process of implementing a more powerful service, to ensure we have the facility to process and analyse increasingly large amounts of data, and perform complex calculations at high speed to enable high-quality and valuable research outputs in key research areas.</p>
<p><img decoding="async" class="alignnone size-large wp-image-14119" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/39465dtp-1-1024x682.jpg" alt="" width="675" height="450" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/39465dtp-1-1024x682.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/39465dtp-1-300x200.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/39465dtp-1-768x512.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/39465dtp-1.jpg 1180w" sizes="(max-width: 675px) 100vw, 675px" /></p>
<h3>Implementing a new, more powerful HPC</h3>
<p>The new HPC service is anticipated to be fully functional by <strong>January 2024</strong>.</p>
<p>The new hardware has been procured, and we are currently awaiting delivery, after which the hardware and software will be installed, tested, and all existing services migrated to the new service.</p>
<h3>What are the benefits of the new HPC service?</h3>
<ul>
<li>The new HPC service will be more widely accessible to support increasingly compute-intensive work performed in areas of the University that are not traditional users of HPC, improving access to compute services and skills</li>
<li>In addition, there will be support staff in place to make it even easier for new users to utilise the HPC service</li>
<li>The service will also be available for experimentation beyond larger, funded projects, leading to more ambitious research and resulting funding, more high-quality outputs, increased inter-disciplinary collaboration, and helping to attract researchers and PGRs</li>
<li>A higher-capacity and more versatile on-premises compute service will allow for more ambitious calculations</li>
<li>Better data storage and connectivity</li>
<li>Higher throughput of jobs and support for more efficient use</li>
<li>Increased ambition for research on national-level HPC facilities</li>
</ul>
<h3>Want to know more or provide your feedback?</h3>
<p>If you have any questions regarding the HPC, or would like to know more, please <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/On-Premise-Compute.aspx">visit these pages</a> or contact Colin Bannister in the first instance.</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2023/05/15/accelerating-the-universitys-research-ambitions-with-a-more-powerful-hpc-service/">Accelerating the University’s research ambitions with a more powerful HPC service</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Text Translation using Azure Cognitive Services Translator</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2023/05/09/text-translation-using-azure-cognitive-services-translator/</link>
		
		<dc:creator><![CDATA[Faraz Khan]]></dc:creator>
		<pubDate>Tue, 09 May 2023 14:59:35 +0000</pubDate>
				<category><![CDATA[Advice and Guidance]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Process Automation]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[Translation]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=14101</guid>

					<description><![CDATA[<p>In this blog, we describe how to translate text using some simple Python code and the Azure translator service. Azure Text Translator Azure Cognitive Services Translator is a cloud-based service that enables quick and accurate translation across many languages. The translator service can be used for: Language detection One-to-one or one-to-many translation Script transliteration (text ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2023/05/09/text-translation-using-azure-cognitive-services-translator/">Text Translation using Azure Cognitive Services Translator</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="300" height="171" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/translator-blog-image-300x171.png" class="attachment-medium size-medium wp-post-image" alt="Random non-distinct flags" style="float:right; margin:0 0 10px 10px;" decoding="async" loading="lazy" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/translator-blog-image-300x171.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/translator-blog-image-1024x585.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/translator-blog-image-768x439.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/translator-blog-image-1536x878.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/05/translator-blog-image.png 1792w" sizes="auto, (max-width: 300px) 100vw, 300px" /><p><strong>In this blog, we describe how to translate text using some simple Python code and the Azure translator service.</strong></p>
<h2>Azure Text Translator</h2>
<p>Azure Cognitive Services Translator is a cloud-based service that enables quick and accurate translation across many languages. The translator service can be used for:</p>
<ul>
<li>Language detection</li>
<li>One-to-one or one-to-many translation</li>
<li>Script transliteration (text conversion from its native script to an alternative script)</li>
</ul>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-14102 " src="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/04/image_2023-04-27_004042254-300x240.png" alt="azure-translator" width="368" height="294" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/04/image_2023-04-27_004042254-300x240.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/04/image_2023-04-27_004042254.png 580w" sizes="auto, (max-width: 368px) 100vw, 368px" /></p>
<h2>Supported Languages</h2>
<p>The Azure Translator Service (v3.0) supports 129 languages. If the source language is known, you can specify this. If the source language is unknown, Azure can detect it. For a list of supported languages (along with their language codes), please refer to the <a href="https://learn.microsoft.com/en-us/azure/cognitive-services/translator/language-support">language support page</a>.</p>
<h2>Pricing</h2>
<p>Azure provides a free tier for the translator service. The free tier allows for 2 million characters per month (about 1,500 pages of single-spaced, 12-pt font text). If you intend to translate more than this, see the <a href="https://azure.microsoft.com/en-gb/pricing/details/cognitive-services/translator/">pricing page</a> (UK South region).</p>
<h2>How to use the Translator API</h2>
<h3>Create The Translator Resource</h3>
<p>You will first need an Azure account. A free account can be created <a href="https://azure.microsoft.com/en-gb/free/search/?ef_id=_k_CjwKCAjwuqiiBhBtEiwATgvixPVDOmWFthkExkVVqH1DKMOb30UwAYLJf9qS-oaK8yg8FbIexpo2iBoCczcQAvD_BwE_k_&amp;OCID=AIDcmm3bvqzxp1_SEM__k_CjwKCAjwuqiiBhBtEiwATgvixPVDOmWFthkExkVVqH1DKMOb30UwAYLJf9qS-oaK8yg8FbIexpo2iBoCczcQAvD_BwE_k_&amp;gad=1&amp;gclid=CjwKCAjwuqiiBhBtEiwATgvixPVDOmWFthkExkVVqH1DKMOb30UwAYLJf9qS-oaK8yg8FbIexpo2iBoCczcQAvD_BwE">here</a>. After creating your account, a translator resource must be provisioned:</p>
<ul>
<li>Log in to the <a href="https://portal.azure.com/#home">Azure Portal</a>.</li>
<li>Create your translator service resource by selecting it from the menu at this link: <a href="https://portal.azure.com/#create/Microsoft.CognitiveServicesTextTranslation">Create Translator</a>.</li>
<li>Select the region where you want the resource to be located. The region dictates where your data will be stored and processed. If you want your data to be stored and processed in the UK, then select either &#8220;UK South&#8221; or &#8220;UK West&#8221;.</li>
<li>For pricing, select either the Free (F0) pricing tier or the pay-as-you-go (S0) tier.</li>
<li>Click Review + Create.</li>
</ul>
<h3>Get Your Authentication Keys and Endpoint</h3>
<p>To use the API, you will need the authentication keys and endpoint of the resource you have just generated.</p>
<ul>
<li>After the resource is generated, click &#8220;Go to resource&#8221;.</li>
<li>In the left column, select &#8220;Keys and Endpoint&#8221;.</li>
<li>This will open the following page:</li>
</ul>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-14103 size-large" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/04/image_2023-04-27_010029644-1024x682.png" alt="translator-keys-endpoint" width="675" height="450" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2023/04/image_2023-04-27_010029644-1024x682.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/04/image_2023-04-27_010029644-300x200.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/04/image_2023-04-27_010029644-768x512.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/04/image_2023-04-27_010029644-1536x1024.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2023/04/image_2023-04-27_010029644-2048x1365.png 2048w" sizes="auto, (max-width: 675px) 100vw, 675px" /></p>
<p>Copy Key 1 or Key 2 as well as the endpoint URL. These will be used in the Python code (see below) to make the translation request.</p>
<h3>Python Setup and Code</h3>
<p>Once you have your keys and endpoint, please <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Text-Translation-using-Azure.aspx">visit this SharePoint page</a> for instructions on setting up your Python environment and for the copy-and-paste code. This is the final step to using the service and translating your texts.</p>
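<p>As a rough sketch of what such a request looks like against the Translator v3.0 REST API (this is our illustration, not the code on the SharePoint page; the key, region, and endpoint values are placeholders you must replace with your own, and the helper name is ours):</p>

```python
import json

# Placeholders -- substitute the key and region copied from the Azure portal.
SUBSCRIPTION_KEY = "<your-key-1-or-key-2>"
ENDPOINT = "https://api.cognitive.microsofttranslator.com"
REGION = "uksouth"  # the region selected when creating the resource


def build_translate_request(texts, to_langs, from_lang=None):
    """Assemble the URL, headers and JSON body for a /translate call.

    Returns (url, headers, body) so the request can be inspected before
    sending it with any HTTP client, e.g. requests.post(url, headers=..., data=...).
    If from_lang is None, Azure auto-detects the source language.
    """
    params = "&".join(["api-version=3.0"] + [f"to={lang}" for lang in to_langs])
    if from_lang:
        params += f"&from={from_lang}"
    url = f"{ENDPOINT}/translate?{params}"
    headers = {
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Ocp-Apim-Subscription-Region": REGION,
        "Content-Type": "application/json",
    }
    body = json.dumps([{"text": t} for t in texts])
    return url, headers, body


url, headers, body = build_translate_request(["Hello, world!"], ["fr", "de"])
# With a valid key, sending is one line with the requests library:
#   response = requests.post(url, headers=headers, data=body).json()
```

<p>Building the request separately from sending it also makes it easy to log or unit-test what will be sent before spending any of the free tier&#8217;s character quota.</p>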
<p>If you are interested in talking about this process, please feel free to contact <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Introduction-to-Digital-Research.aspx">one of the team</a>.</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2023/05/09/text-translation-using-azure-cognitive-services-translator/">Text Translation using Azure Cognitive Services Translator</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Using Julia on the HPC</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2022/09/30/using-julia-on-the-hpc/</link>
		
		<dc:creator><![CDATA[Digital Research]]></dc:creator>
		<pubDate>Fri, 30 Sep 2022 11:29:29 +0000</pubDate>
				<category><![CDATA[Advice and Guidance]]></category>
		<category><![CDATA[Data Analytics]]></category>
		<category><![CDATA[High Performance Computing]]></category>
		<category><![CDATA[HPC]]></category>
		<category><![CDATA[Julia]]></category>
		<category><![CDATA[Programming]]></category>
		<category><![CDATA[Python]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=14049</guid>

					<description><![CDATA[<p>In this blog, we explore the speed and efficiency of using the programming language Julia on the University&#8217;s high-performance computer. This blog has been guest-authored by Jamie Mair, a PhD researcher in the School of Physics and Astronomy. The repository for the code given below, which was presented at this year&#8217;s annual UoN HPC conference, ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/09/30/using-julia-on-the-hpc/">Using Julia on the HPC</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="300" height="188" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia-logo-color-300x188.png" class="attachment-medium size-medium wp-post-image" alt="" style="float:right; margin:0 0 10px 10px;" decoding="async" loading="lazy" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia-logo-color-300x188.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia-logo-color.png 320w" sizes="auto, (max-width: 300px) 100vw, 300px" /><p><strong>In this blog, we explore the speed and efficiency of using the programming language Julia on the University&#8217;s high-performance computer.</strong></p>
<p><strong>This blog has been guest-authored by Jamie Mair, a PhD researcher in the School of Physics and Astronomy.</strong></p>
<p>The repository for the code given below, which was presented at this year&#8217;s annual UoN HPC conference, can be found <a href="https://github.com/JamieMair/julia-for-research-with-hpc" target="_blank" rel="noopener">here</a>.</p>
<p>For more information on the University&#8217;s high-performance computer, please see <a href="https://www.nottingham.ac.uk/dts/researcher/compute-services/high-performance-computing.aspx" target="_blank" rel="noopener">here</a>.</p>
<h2>What is Julia?</h2>
<p><a href="https://julialang.org/" target="_blank" rel="noopener">Julia</a> is a modern, general-purpose programming language designed for high-performance scientific programming. According to a <a href="https://julialang.org/blog/2012/02/why-we-created-julia/" target="_blank" rel="noopener">blog post</a> written by the creators, Julia was born out of the desire for a programming language as fast as C, yet as easy to use as Python, MATLAB and R. Personally, I use Julia because it is incredibly easy to prototype new code for my research, while being flexible enough to execute my code across my home machine, my GPU or an entire cluster. And all of this with minimum effort on my part!</p>
<p>In this blog, I discuss what’s possible in Julia for very little investment. You will see how to write reusable code and take advantage of every level of parallelism, from the hardware parallelism of SIMD, to executing native Julia code on the GPU and scaling it across a cluster. The ease of use and flexibility of this parallelism is unmatched and presents a very low barrier to entry.</p>
<h2>Julia is fast</h2>
<p>Julia has many advantages when compared to other languages. However, the main claim that will interest our HPC community is speed. Julia claims to be as fast as C, so let’s put that to the test.</p>
<p>I will use the example of simulating a Monte Carlo process, specifically a <a href="https://en.wikipedia.org/wiki/Random_walk" target="_blank" rel="noopener">random walk</a>. This process is simple, but the basics are used in a wide variety of fields and are common in HPC workloads.</p>
<p>The code for running a random walk is:</p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-14055" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_1-300x166.png" alt="" width="300" height="166" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_1-300x166.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_1-1024x566.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_1-768x424.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_1.png 1102w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p>This function performs n random walks, each of length T. We can write some roughly equivalent code in C++ to see the empirical differences in performance.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-14056" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_2-300x160.png" alt="" width="482" height="257" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_2-300x160.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_2-1024x545.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_2-768x409.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_2-1536x818.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_2.png 1925w" sizes="auto, (max-width: 482px) 100vw, 482px" /></p>
<p>This implementation in C++ is not optimal at all (I am no C++ programmer), but is indicative of what a typical PhD student might write for simulations. We find that the Julia version is between 4 and 7 times faster (depending on the size of n).</p>
<p><img loading="lazy" decoding="async" class="wp-image-14092 alignleft" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_cpp-300x200.png" alt="" width="503" height="335" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_cpp-300x200.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_cpp.png 600w" sizes="auto, (max-width: 503px) 100vw, 503px" /></p>
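<p>For readers who don&#8217;t use Julia, the logic of the benchmark above (a plain-Python sketch of ours, not the code from the talk) looks like this:</p>

```python
import random


def simulate_random_walks(n, T, seed=None):
    """Run n independent 1-D random walks of T +/-1 steps each and
    return the list of final positions (a sketch of the benchmark's
    logic, not the Julia code shown in the images above)."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n):
        position = 0
        for _ in range(T):
            position += 1 if rng.random() < 0.5 else -1
        finals.append(position)
    return finals


finals = simulate_random_walks(n=1000, T=100, seed=42)
print(sum(finals) / len(finals))  # mean displacement; should be close to 0
```

<p>The structure is identical in any language: an outer loop over walks and an inner loop over steps, which is exactly what makes it a good target for the parallelisation techniques discussed next.</p>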
<p>While Julia has the advantage here, I don’t think that Julia is inherently faster than C/C++. A better C/C++ developer could make the C/C++ code perform similarly to the Julia code. The main point I want to make is that you can achieve C/C++ levels of performance for minimal effort, while keeping your code simple and trusting Julia’s “just-in-time” compiler (built on the standard LLVM toolchain) to produce fast machine code. Not all code will run quickly, but there are <a href="https://docs.julialang.org/en/v1/manual/performance-tips/" target="_blank" rel="noopener">performance tips</a> that will let you achieve these C-like speeds.</p>
<h2>Multi-threading in Julia</h2>
<p>Almost every modern CPU has multiple cores, thus providing additional compute power. Julia provides lightweight multi-threading support natively. The language provides many ways of parallelising your code, but here we’ll just use a macro to make this process easy. A macro is code that writes other code. For executing a loop in parallel, we just add an @threads macro from the standard Threads library:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-14059" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_3-300x136.png" alt="" width="369" height="167" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_3-300x136.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_3-1024x465.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_3-768x348.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_3.png 1373w" sizes="auto, (max-width: 369px) 100vw, 369px" /></p>
<p>This is essentially the same source code as above, with just a simple change in front of the for-loop. We can add this to the performance graph:</p>
<p><img loading="lazy" decoding="async" class="alignnone  wp-image-14094" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_threaded-300x200.png" alt="" width="461" height="307" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_threaded-300x200.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_threaded.png 600w" sizes="auto, (max-width: 461px) 100vw, 461px" /></p>
<p>This is pretty good performance out of the box. Unfortunately, threading doesn’t reach the theoretical maximum speedup, though there are techniques to <a href="https://github.com/JamieMair/julia-for-research-with-hpc/blob/00681690fb4dd4a5b83c2563616658be594a916d/main.jmd#L163" target="_blank" rel="noopener">remedy this</a>.</p>
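<p>The batch-per-worker pattern that @threads applies to the loop can be sketched in Python (our illustration, not the talk&#8217;s code). One caveat in the comments: CPython&#8217;s GIL means threads will not actually speed up this CPU-bound loop the way Julia&#8217;s threads do, so for real speedups in Python you would swap in a process pool.</p>

```python
from concurrent.futures import ThreadPoolExecutor
import random


def walk_batch(n, T, seed):
    """Run n random walks of T +/-1 steps; return the final positions."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n):
        pos = 0
        for _ in range(T):
            pos += 1 if rng.random() < 0.5 else -1
        finals.append(pos)
    return finals


def parallel_walks(n, T, workers=4):
    """Split the n walks into one batch per worker and run the batches
    concurrently -- structurally what @threads does to the Julia loop.
    Note: CPython's GIL prevents threads speeding up this CPU-bound work;
    Julia's threads (or a Python process pool) avoid that limitation."""
    per = n // workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(walk_batch, per, T, seed) for seed in range(workers)]
        return [pos for f in futures for pos in f.result()]
```

<p>Julia&#8217;s threads share memory with no GIL, which is why a one-macro change gets a real speedup there while Python needs processes (and the serialisation costs they bring) for the same effect.</p>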
<h2>Moving to the GPU</h2>
<p>The modern GPU excels at array-based operations at scale, such as standard BLAS operations (matrix multiplications, etc.), which make up the backbone of a lot of numerical and scientific computing. Writing code for a GPU is notoriously difficult (custom kernels in C present a high barrier to entry for beginners).</p>
<p>The good news is that Julia has a successful package, called CUDA.jl, which allows you to write native Julia code that will compile directly to GPU code. Let’s take it for a spin!</p>
<p>Up to now, we have focused on using the standard Julia libraries, but there is a rich ecosystem of packages available. Adding a package is as simple as running the following code:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-14063" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_4-300x107.png" alt="" width="132" height="47" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_4-300x107.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_4.png 539w" sizes="auto, (max-width: 132px) 100vw, 132px" /></p>
<p>This uses Julia’s built-in package manager to download the CUDA.jl package. What makes this process painless is that the first time you use CUDA, it will download the CUDA libraries you need to get things working, instead of you having to install CUDA manually (as you must for Python libraries like TensorFlow and PyTorch).</p>
<p>GPUs are very different devices to CPUs and have a slightly different programming model. The main difference is that scalar indexing of an array has really poor performance on a GPU, and so the CUDA.jl package errors when you try to do this. Instead, we can use Julia’s excellent “broadcasting” notation, which allows us to write concise “vectorised” code. This notation also allows us to write <em>non-allocating</em> code, which means that we can reuse memory. This can make a huge performance difference. Here’s the same random walk code, rewritten to use array-based notation and avoiding allocating memory:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-14065" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_5-300x160.png" alt="" width="485" height="259" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_5-300x160.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_5-1024x547.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_5-768x410.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_5-1536x820.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_5.png 1933w" sizes="auto, (max-width: 485px) 100vw, 485px" /></p>
<p><em>Please note that the ! at the end of the function name is a Julia naming convention indicating that the function mutates its input arguments.</em></p>
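<p>As a textual sketch of the array-based version shown in the image above (the function name follows the post, but the body here is an assumption reconstructed from the description):</p>

```julia
using Random

# Broadcast-based, non-allocating random walk: each element of x takes
# `steps` normally-distributed increments. The `!` marks mutation.
function random_walk!(x, steps)
    fill!(x, zero(eltype(x)))
    tmp = similar(x)            # scratch buffer, allocated once and reused
    for _ in 1:steps
        randn!(tmp)             # refill the buffer in place
        x .+= tmp               # broadcasted elementwise update, no allocation
    end
    return x
end

x = zeros(1_000)
random_walk!(x, 100)
```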
<p>This code runs fine on the CPU with arrays, and is very similar to how someone would write this in MATLAB or NumPy (in Python). We can benchmark this function and add it to the graph:</p>
<p><img loading="lazy" decoding="async" class="alignnone  wp-image-14095" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_array-300x200.png" alt="" width="487" height="324" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_array-300x200.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_array.png 600w" sizes="auto, (max-width: 487px) 100vw, 487px" /></p>
<p class="FirstParagraph"><span lang="EN-US">Notice that this has slightly better performance than the serial implementation, as it can make use of hardware SIMD instructions. What is really powerful about this approach is that we can directly reuse this code on the GPU:</span></p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-14068" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_6-300x66.png" alt="" width="532" height="117" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_6-300x66.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_6-1024x224.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_6-768x168.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_6-1536x336.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_6.png 1875w" sizes="auto, (max-width: 532px) 100vw, 532px" /></p>
<p>There are no changes to our code!</p>
<p>The only thing we changed was the type of our array. Since Julia has a Just-in-Time compiler, which specialises on the types of its input arguments, it knows to perform all of the array-based operations on x on the GPU. We can see the performance below:</p>
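<p>Concretely, the switch might look like the following sketch (the function body is an assumption reconstructed from the earlier description; only the array constructor changes):</p>

```julia
using CUDA, Random

# The same broadcast-based walk as on the CPU (sketch).
function random_walk!(x, steps)
    tmp = similar(x)            # on a CuArray, this scratch lives on the GPU
    for _ in 1:steps
        randn!(tmp)             # CUDA.jl implements the Random interface
        x .+= tmp               # broadcasting runs as a fused GPU kernel
    end
    return x
end

x = CUDA.zeros(Float32, 10^6)   # the only change: a CuArray, not an Array
random_walk!(x, 1_000)          # identical call; the JIT specialises for GPU
```

Because dispatch happens on the array type, no GPU-specific code appears in the function itself.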
<p><img loading="lazy" decoding="async" class="alignnone  wp-image-14096" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_gpu-300x200.png" alt="" width="522" height="348" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_gpu-300x200.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_gpu.png 600w" sizes="auto, (max-width: 522px) 100vw, 522px" /></p>
<p>Here we can see that the GPU is only really worth using when the array contains many elements. If we require millions of calculations, the GPU can be around 100x faster than the CPU! We achieved this huge increase in performance simply by changing the input type to our function, reusing our original code.</p>
<p>Here, we have really only scratched the surface of GPU programming, as you can always move on to writing your own kernels from scratch. For a more detailed review, check out the CUDA.jl <a href="https://cuda.juliagpu.org/stable/" target="_blank" rel="noopener">documentation</a>.</p>
<h2>Scaling up to a cluster</h2>
<p>This wouldn’t be an HPC blog without talking about using a cluster. While it is certainly possible to use a single node with multi-threading, Julia also has native support for multiprocessing (via the Distributed.jl standard library that ships with Julia), allowing you to scale across multiple nodes, and the ClusterManagers.jl package adds support for schedulers such as SLURM.</p>
<p>To connect all of the nodes, we only need to run the following code:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-14070" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_7-300x69.png" alt="" width="387" height="89" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_7-300x69.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_7-1024x235.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_7-768x176.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_7.png 1426w" sizes="auto, (max-width: 387px) 100vw, 387px" /></p>
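<p>In text form, the connection step might look something like this sketch (the worker count from the SLURM environment is an assumption; when experimenting on a single machine you can use plain <em>addprocs</em> instead):</p>

```julia
using Distributed, ClusterManagers

# Inside a SLURM allocation: launch one Julia worker per allocated task.
addprocs_slurm(parse(Int, ENV["SLURM_NTASKS"]))

# On a laptop, the equivalent for local experimentation would be:
# addprocs(4)
```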
<p>We could use the parallel primitives described in the <a href="https://docs.julialang.org/en/v1/stdlib/Distributed/">Distributed.jl documentation</a>. But let’s reuse some of our previous code and keep the array-based programming model. The DistributedArrays.jl package gives us the DArray type which, as the name suggests, distributes the memory of an array in blocks across the processes:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-14071" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_graph_5-300x50.png" alt="" width="630" height="105" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_graph_5-300x50.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_graph_5.png 624w" sizes="auto, (max-width: 630px) 100vw, 630px" /></p>
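<p>Constructing a DArray can be sketched as follows (local workers are used here for illustration; the constructor name is from DistributedArrays.jl):</p>

```julia
using Distributed
addprocs(4)                       # local workers; use ClusterManagers on a cluster
@everywhere using DistributedArrays

# A distributed array of zeros: its blocks live on the worker processes,
# not on the master process.
x = dzeros(10^6)
```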
<p>We would hope that we could copy the method of implementing the GPU extension:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-14072" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_8-300x66.png" alt="" width="532" height="117" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_8-300x66.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_8-1024x225.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_8-768x169.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_8-1536x337.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_8.png 1877w" sizes="auto, (max-width: 532px) 100vw, 532px" /></p>
<p>Unfortunately, the above code does not work on its own, because the Random.randn! function is not implemented for the DArray type. But since our function maps easily onto each block of the DArray, we can write a specific implementation of random_walk! for this type, with a little help from the DistributedArrays.jl <a href="https://juliaparallel.org/DistributedArrays.jl/stable/#SPMD-Context" target="_blank" rel="noopener">documentation</a>:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-14073" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_9-300x85.png" alt="" width="455" height="129" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_9-300x85.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_9-1024x290.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_9-768x218.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_9-1536x436.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/julia_code_9.png 1555w" sizes="auto, (max-width: 455px) 100vw, 455px" /></p>
<p>We have to use the @everywhere macro to load the function on each of the workers. Usually, you would put all the code that each worker needs in a separate file and call @everywhere just once.</p>
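<p>A self-contained sketch of this pattern, following the block-mapping approach from the DistributedArrays.jl documentation (the function body is an assumption reconstructed from the earlier description):</p>

```julia
using Distributed
addprocs(4)                       # local workers for illustration

@everywhere using DistributedArrays, Random

# The broadcast-based walk, made available on every worker via @everywhere.
@everywhere function random_walk!(x, steps)
    tmp = similar(x)
    for _ in 1:steps
        randn!(tmp)
        x .+= tmp
    end
    return x
end

# Specialisation for DArray: run the original walk on each worker's
# local block, in parallel across the workers.
function random_walk!(x::DArray, steps)
    @sync for p in procs(x)
        @async remotecall_wait(() -> random_walk!(localpart(x), steps), p)
    end
    return x
end
```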
<p>This code just tells Julia how to call our original random_walk! function on each block of the array, reusing our code from before. The performance (for 8 workers) is added to our graph:</p>
<p><img loading="lazy" decoding="async" class="alignnone  wp-image-14097" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_dist-300x200.png" alt="" width="542" height="361" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_dist-300x200.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/09/monte_carlo_dist.png 600w" sizes="auto, (max-width: 542px) 100vw, 542px" /></p>
<p><span lang="EN-US">Multiprocessing always has a much higher overhead than multi-threading, but we can scale to potentially millions of processes, and use memory and resources from hundreds of different computers.</span></p>
<h2>What&#8217;s next?</h2>
<p>Hopefully, in this blog, I have shown you the potential of using Julia to enable a range of parallelism, and convinced you that there is a low barrier to entry to get started! All of the code in this blog post can be found in the blog folder in <a href="https://github.com/JamieMair/julia-for-research-with-hpc" target="_blank" rel="noopener">this GitHub repository</a>.</p>
<p>If you want to learn more about Julia, you can visit the <a href="https://julialang.org/" target="_blank" rel="noopener">website</a> and if you want to learn the language, start with the <a href="https://docs.julialang.org/en/v1/manual/getting-started/" target="_blank" rel="noopener">Julia manual</a>.</p>
<p>In this blog, I haven’t gone through many of the other reasons that people use Julia, such as <a href="https://www.youtube.com/watch?v=kc9HwsxE1OY" target="_blank" rel="noopener">multiple dispatch</a> or state-of-the-art support for <a href="https://docs.juliahub.com/DifferentialEquations/UQdwS/6.15.0/" target="_blank" rel="noopener">differential equations</a> and <a href="https://sciml.ai/" target="_blank" rel="noopener">scientific machine learning</a>. If there are packages that you need and are only available in Python, there is also a package called <a href="https://www.juliapackages.com/p/pycall" target="_blank" rel="noopener">PyCall.jl</a> which lets you run and write Python code directly from Julia, giving you access to the entire Python ecosystem.</p>
<p>Finally, if you want to ask questions, there is a very active community on the <a href="https://discourse.julialang.org/" target="_blank" rel="noopener">Julia discourse</a>.</p>
<p>For information about getting connected to the University of Nottingham HPC, <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/On-Premise-Compute.aspx" target="_blank" rel="noopener">please see here</a>.</p>
<p>Happy programming!</p>
<p>&nbsp;</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/09/30/using-julia-on-the-hpc/">Using Julia on the HPC</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Introducing Trusted Research Environments</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2022/05/31/introducing-trusted-research-environments/</link>
		
		<dc:creator><![CDATA[Digital Research]]></dc:creator>
		<pubDate>Tue, 31 May 2022 08:03:55 +0000</pubDate>
				<category><![CDATA[Advice and Guidance]]></category>
		<category><![CDATA[Cloud computing]]></category>
		<category><![CDATA[Collaboration]]></category>
		<category><![CDATA[Trusted Research Environment]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=14036</guid>

					<description><![CDATA[<p>The UoN Trusted Research Environment (TRE) Service is now live. The University of Nottingham’s pioneering Trusted Research Environment (TRE) platform is now live, enhancing the University’s research capabilities. Hosted by Digital and Technology Services (DTS) in Microsoft Azure, the secure data platform is available to researchers, partnered institutions and trusted organisations working with highly sensitive ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/05/31/introducing-trusted-research-environments/">Introducing Trusted Research Environments</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="300" height="158" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/TRE-JPEG-300x158.jpg" class="attachment-medium size-medium wp-post-image" alt="" style="float:right; margin:0 0 10px 10px;" decoding="async" loading="lazy" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/TRE-JPEG-300x158.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/TRE-JPEG-1024x538.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/TRE-JPEG-768x403.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/TRE-JPEG.jpg 1200w" sizes="auto, (max-width: 300px) 100vw, 300px" /><h3>The UoN Trusted Research Environment (TRE) Service is now live.</h3>
<h4>The University of Nottingham’s pioneering Trusted Research Environment (TRE) platform is now live, enhancing the University’s research capabilities.</h4>
<p>Hosted by Digital and Technology Services (DTS) in Microsoft Azure, the secure data platform is available to researchers, partnered institutions and trusted organisations working with highly sensitive data.</p>
<h5><strong>A secure digital infrastructure</strong></h5>
<p>Access to the University’s TRE platform extends across all Faculties. The platform provides researchers with the tools and capabilities needed to access sensitive data in a secure environment.</p>
<p>A TRE provides a computing environment (with controlled access to data in that environment) for projects that require a high level of data security and the ability to perform audits.</p>
<p>The centralised platform provides users with the tools needed to access, process, analyse and utilise sensitive data in a secure environment.</p>
<h5><strong>Pioneers in research </strong></h5>
<p>The platform&#8217;s extensive capabilities and digital infrastructure will enhance research opportunities, empowering the University’s academic community to continue to produce world-changing research through effective collaboration and invention.</p>
<p>For more information regarding the TRE platform, including the application process, its suitability for your work, platform security and/or associated costs, please visit the <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Trusted-Research-Environment.aspx?OR=Teams-HL&amp;CT=1652096351595&amp;params=eyJBcHBOYW1lIjoiVGVhbXMtRGVza3RvcCIsIkFwcFZlcnNpb24iOiIyNy8yMjAzMDcwMTYxMCJ9" target="_blank" rel="noopener">Digital Research SharePoint site</a>.</p>
<p>&nbsp;</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/05/31/introducing-trusted-research-environments/">Introducing Trusted Research Environments</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>HoloLens 2 and research</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2022/05/27/hololens-2-and-research/</link>
		
		<dc:creator><![CDATA[Digital Research]]></dc:creator>
		<pubDate>Fri, 27 May 2022 06:59:14 +0000</pubDate>
				<category><![CDATA[Advice and Guidance]]></category>
		<category><![CDATA[Collaboration]]></category>
		<category><![CDATA[Digital Solutions Hub]]></category>
		<category><![CDATA[HoloLens]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=14017</guid>

					<description><![CDATA[<p>This week, we trialled HoloLens 2 technology with colleagues from the School of Life Sciences, the Biodiscovery Institute, the School of Education, the Nanoscale &#38; Microscale Research Centre, and the Hounsfield Facility.  The workshop was led by InterReality Labs, with support from Microsoft. The HoloLens 2 is an untethered, wearable device that can project holographic ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/05/27/hololens-2-and-research/">HoloLens 2 and research</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="300" height="225" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-300x225.jpg" class="attachment-medium size-medium wp-post-image" alt="" style="float:right; margin:0 0 10px 10px;" decoding="async" loading="lazy" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-300x225.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-1024x768.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-768x576.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-1536x1152.jpg 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3.jpg 2016w" sizes="auto, (max-width: 300px) 100vw, 300px" /><h4><strong>This week, we trialled HoloLens 2 technology with colleagues from the School of Life Sciences, the Biodiscovery Institute, the School of Education, the Nanoscale &amp; Microscale Research Centre, and the Hounsfield Facility. </strong></h4>
<p>The workshop was led by <a href="https://interrealitylabs.com/" target="_blank" rel="noopener">InterReality Labs</a>, with support from Microsoft.</p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-14021 size-full" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-4.jpg" alt="" width="2016" height="834" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-4.jpg 2016w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-4-300x124.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-4-1024x424.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-4-768x318.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-4-1536x635.jpg 1536w" sizes="auto, (max-width: 2016px) 100vw, 2016px" /></p>
<p>The <a href="https://www.microsoft.com/en-us/hololens" target="_blank" rel="noopener">HoloLens 2</a> is an untethered, wearable device that can project holographic models into the wearer&#8217;s real-world environment. This is known as augmented or mixed reality. It allows users to see digital 3D models or other artefacts before their eyes, embedded in a real-world physical environment. There are many applications. Our workshop focussed on research, collaboration, and facilities.</p>
<p><strong>Research</strong>: you can use the HoloLens glasses to view 3D models in three dimensions. This is a fundamentally different experience from seeing such models on a two-dimensional screen. Users can walk around the models, view them from all angles, even walk inside them. Models can be rotated, scaled, situated in a real environment, all with a few simple hand gestures. We tried it out on a life-size model of a human skeleton and a train carriage, but any 3D model can be loaded into the device.</p>
<p><strong>Collaboration: </strong>the HoloLens 2 shines when it comes to collaboration. The view through the glasses can be projected on a large screen and thus shared with other room occupants. Beyond that, it is possible to join a Teams meeting, whereby the call participants not only see the view through the HoloLens, but can also annotate this in real time. Sharing 3D models in a three-dimensional environment becomes possible with partners located anywhere in the world.</p>
<p><strong>Facilities: </strong>another major application of the HoloLens is remote training. Users of complex research equipment can be guided from afar and in real time. Alternatively, expert technicians can demonstrate how to use equipment from the perspective of the user (and record this, if required). No more 100-page user documents! You can also use the HoloLens to create tours, and there is a package for making interactive mixed-reality guides.</p>
<p><img loading="lazy" decoding="async" class="wp-image-14018 size-large aligncenter" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-1024x768.jpg" alt="" width="675" height="506" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-1024x768.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-300x225.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-768x576.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3-1536x1152.jpg 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/05/HoloLens-3.jpg 2016w" sizes="auto, (max-width: 675px) 100vw, 675px" /></p>
<p>Colleagues from across the University will now be testing the HoloLens in their respective research groups and facilities.</p>
<p>If you would like to try the technology or borrow a HoloLens 2, please <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Introduction-to-Digital-Research.aspx" target="_blank" rel="noopener">get in touch with a member of the team</a>!</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/05/27/hololens-2-and-research/">HoloLens 2 and research</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Approaches to tracking impact</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2022/04/25/approaches-to-tracking-impact/</link>
		
		<dc:creator><![CDATA[Digital Research]]></dc:creator>
		<pubDate>Mon, 25 Apr 2022 14:28:50 +0000</pubDate>
				<category><![CDATA[Advice and Guidance]]></category>
		<category><![CDATA[Impact]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=13902</guid>

					<description><![CDATA[<p>In this blog, we consider approaches to tracking research impact, with a focus on digital tools. We will outline reasons for gathering evidence of impact, what tools you might need to do this, and how digital approaches can help. In short: how can you gather material to show the positive societal change that your work ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/04/25/approaches-to-tracking-impact/">Approaches to tracking impact</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="300" height="300" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/planet-earth-300x300.jpg" class="attachment-medium size-medium wp-post-image" alt="" style="float:right; margin:0 0 10px 10px;" decoding="async" loading="lazy" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/planet-earth-300x300.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/planet-earth-1024x1024.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/planet-earth-150x150.jpg 150w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/planet-earth-768x768.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/planet-earth.jpg 1280w" sizes="auto, (max-width: 300px) 100vw, 300px" /><p><strong>In this blog, we consider approaches to tracking research impact, with a focus on digital tools.</strong></p>
<p>We will outline reasons for gathering evidence of impact, what tools you might need to do this, and how digital approaches can help.</p>
<p>In short: how can you gather material to show the positive societal change that your work has enabled?</p>
<p>&nbsp;</p>
<h3>Why gather evidence for impact?</h3>
<p><strong>To demonstrate change</strong>: show to others that things are heading in the desired direction, satisfy stakeholders who are invested in your work, show that promises have been kept and increase your accountability, boost your own confidence.</p>
<p><strong>To convince</strong><strong>: </strong>build trust among potential partners, increase support for your work, make the case for scaling up, reduce scepticism.</p>
<p><strong>To motivate: </strong>spark the curiosity of others, provoke questions and critical thought, inspire others to act, astound your audiences.</p>
<p>&nbsp;</p>
<h3>What do you need in your toolbox?</h3>
<p><img loading="lazy" decoding="async" class="alignright wp-image-13911 size-medium" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Tools-300x225.jpg" alt="" width="300" height="225" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Tools-300x225.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Tools.jpg 640w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p><strong>To demonstrate change</strong>: clear and credible baselines, testimonies from people affected by your work, user or access data, financial models and information, explorable data dashboards.</p>
<p><strong>To convince</strong><strong>: </strong>impressive comparisons (before and after your intervention), digestible data visualisations, a trusted messenger, a message tailored to your audience.</p>
<p><strong>To motivate: </strong>inspiring stories, wide-reaching messages, equip your audience to take action.</p>
<p>&nbsp;</p>
<h3>How can digital approaches help (and the CART principles)?</h3>
<p>Digital methods for gathering evidence of impact can help:</p>
<p><strong>Speed</strong>: gather evidence quickly</p>
<p><strong>Reach: </strong>gather evidence from a large group of stakeholders</p>
<p><strong>Automation: </strong>gather evidence with less effort</p>
<p><strong>Variety</strong>: gather evidence of different flavours</p>
<p><strong>Cost: </strong>gather evidence cheaply</p>
<p>Digital tools can help you gather lots of evidence quickly, at low cost, and without much effort. We go into detail about some of these digital tools below, looking at their pros and cons.</p>
<p>However, just because we can do something, that does not mean we should. This is where the &#8216;<a href="https://prevention-collaborative.org/wp-content/uploads/2021/08/IPA_2016_Goldilocks-Toolkit-Impact-Measurement-with-CART-Principles_1.pdf" target="_blank" rel="noopener">CART principles</a>&#8216; can help. CART stands for Credible, Actionable, Responsible, Transportable. In short:</p>
<p><em>Credible</em>: privilege high-quality data over mass data collection; avoid bias in your data collection, for example by using randomised controlled trials; ensure that your counterfactuals (what would have happened if I had not intervened) are believable and supported with evidence.</p>
<p><em>Actionable</em>: commit to collecting data you can actually use; collect data that will be useful also to others; make sure your data collection matches the needs of your stakeholders.</p>
<p><em>Responsible</em>: ensure that the benefit of tracking impact outweighs the cost; ensure your data collection methods respect ethical guidelines.</p>
<p><em>Transportable</em>: collect data that can generate knowledge for others; gather data that demonstrates why (or why not) your intervention could work in different contexts.</p>
<p>Following the CART principles when designing and carrying out your evidence collection will avoid waste and help maximise the value of your efforts.</p>
<p>&nbsp;</p>
<h3>Some specific approaches (and things to watch out for)</h3>
<p>Build impact evidence-gathering into your project from the outset. In that way, you can plan the optimal time(s) for gathering evidence, establish ways to diversify the sources of your evidence, and even build the evidence-gathering process into your output (for example, ensuring from the very start of your project that you are able to see how many people download your toolkit, rather than waiting until after the toolkit is published to think about this).</p>
<p>The table below outlines some common tools for measuring impact, what they are good for, as well as some things to watch out for (you can open the image in a new tab for a larger view):</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-13919 size-full" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Impact-tools-table-scaled.jpg" alt="" width="2560" height="973" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Impact-tools-table-scaled.jpg 2560w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Impact-tools-table-300x114.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Impact-tools-table-1024x389.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Impact-tools-table-768x292.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Impact-tools-table-1536x584.jpg 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Impact-tools-table-2048x778.jpg 2048w" sizes="auto, (max-width: 2560px) 100vw, 2560px" /></p>
<p>Take <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Surveys.aspx" target="_blank" rel="noopener">online surveys</a> as an example. These are great for reaching a wide audience and collecting different types of information about the impact of your intervention. The cost is low (often free), surveys are flexible, and they help reduce biases that can occur in face-to-face settings. That said, surveys need careful design to get the information you need, they often provide merely a snapshot of your impact at one moment in time (rather than regularly tracked impact), and they can be problematic in cases where your audience has low literacy levels.</p>
<p>Or how about tracking the number of people who have downloaded your resources? This is a great way of measuring the reach of your outputs over time. The process can be automated and it is free. However, this sort of tracking will not tell you <em>how </em>your resources are being used. So while it provides good quantitative data, it is less convincing when trying to construct a powerful narrative about the positive change you have enabled.</p>
<p>All the other tools in the table have their advantages, their challenges, and their optimal uses. The upshot is that one of the best approaches to impact evidence-gathering is to employ a combination of tools. This will allow you to gather a mix of quantitative and qualitative evidence and reduce the risk that key (i.e., persuasive) information is missed.</p>
<p>&nbsp;</p>
<h3>Last thoughts: planning your impact evidence-gathering</h3>
<p>Planning your impact evidence-gathering will save you time in the long run and set you up for demonstrable success.</p>
<p>Think early about what evidence you hope to gather (testimonies, cost savings, geographical reach, downloads of documents, etc.). Make sure you have good baselines, because you will want to be able to show that change has truly taken place and that this is linked to your intervention. Then think about the different audiences for your evidence, and what you might need to adapt to suit their needs &#8211; some audiences may prefer to see hard figures, whereas others will be persuaded by emotive stories. Finally, think about how you are going to collect the evidence (and how often), potentially using a mix of the tools outlined in the table above. If you can do all this at the start of your work, you will be in a fantastic place to demonstrate your impact to various audiences, when the time comes.</p>
<p>&nbsp;</p>
<h3>Further reading and inspiration</h3>
<p>There is a wealth of resources on measuring and demonstrating impact effectively. Here is a selection:</p>
<p>A. Ebrahim (2019) <i>Measuring Social Change. Performance and Accountability in a Complex World</i>. Stanford &#8211; how to track performance toward worthy goals, with case studies from the non-profit sector</p>
<p>M.K. Gugerty &amp; D. Karlan (2018) <i>The Goldilocks Challenge. Right-Fit Evidence for the Social Sector</i>. Oxford &#8211; using the CART principles to assess impact evidence-gathering and build better systems for tracking positive change</p>
<p>The <i>Stanford Social Innovation Review </i>(<a href="https://ssir.org/" target="_blank" rel="noopener">ssir.org</a>) &#8211; for the latest thinking and innovations around impact</p>
<p>thinknpc.org (<a href="https://www.thinknpc.org/themes/measure-and-manage-impact/" target="_blank" rel="noopener">with various toolkits</a>) &#8211; New Philanthropy Capital, a think tank for the charity sector that aims to maximise positive social impact</p>
<p>Examples of socially conscious data visualisation and data dashboards (<a href="https://periscopic.com/" target="_blank" rel="noopener">Periscopic</a>)</p>
<p>&nbsp;</p>
<p>If you have questions about any of the material discussed above, please don&#8217;t hesitate to <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Introduction-to-Digital-Research.aspx" target="_blank" rel="noopener">contact a member of the team</a>. We&#8217;d be happy to advise.</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/04/25/approaches-to-tracking-impact/">Approaches to tracking impact</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Archiving a website</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2022/04/22/archiving-a-website/</link>
		
		<dc:creator><![CDATA[Digital Research]]></dc:creator>
		<pubDate>Fri, 22 Apr 2022 10:02:44 +0000</pubDate>
				<category><![CDATA[Advice and Guidance]]></category>
		<category><![CDATA[Research Data Management]]></category>
		<category><![CDATA[Websites]]></category>
		<category><![CDATA[archiving]]></category>
		<category><![CDATA[websites]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=13838</guid>

					<description><![CDATA[<p>In this guide, we show you how to download and package a website for archiving. If you have produced a website for your research project, you may wish to archive it (or be obliged by your funder to do so). Here, we show you how to do two things: (a) convert a website to a ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/04/22/archiving-a-website/">Archiving a website</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="299" height="179" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Computer-and-book_cropped_2-e1650552298880.jpg" class="attachment-medium size-medium wp-post-image" alt="" style="float:right; margin:0 0 10px 10px;" decoding="async" loading="lazy" /><h4><strong>In this guide, we show you how to download and package a website for archiving.</strong></h4>
<p>If you have produced a website for your research project, you may wish to archive it (or be obliged by your funder to do so).</p>
<p>Here, we show you how to do two things:</p>
<p>(a) convert a website to a static version for continued hosting on e.g. GitHub Pages</p>
<p>(b) download and package a website in <a href="https://www.loc.gov/preservation/digital/formats/fdd/fdd000236.shtml#:~:text=The%20WARC%20format%20is%20a,from%20the%20World%20Wide%20Web." target="_blank" rel="noopener">WARC format</a>, which is an international standard for the archiving of digital assets.</p>
<p><img loading="lazy" decoding="async" class="size-full wp-image-13845 aligncenter" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/Computer-and-book_cropped-e1650532278664.jpg" alt="" width="299" height="261" /></p>
<p>We will use a free tool called WGET. We will focus on Windows users, but if you have a Mac, an Internet search will reveal how to install WGET (for Linux users, WGET comes pre-installed). Windows users can download <a href="https://eternallybored.org/misc/wget/" target="_blank" rel="noopener">the latest version of WGET here</a>.</p>
<p>Download the EXE file (it&#8217;s only about 5MB), but don&#8217;t open it.</p>
<h6>(To find out if your computer is 32-bit or 64-bit, type &#8216;System Information&#8217; in the Windows search bar and look at &#8216;System Type&#8217;.)</h6>
<p>&nbsp;</p>
<p>Now, in the Windows search bar, type &#8216;cmd&#8217; and open the Command Prompt.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-13889" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/find_command_prompt-282x300.jpg" alt="" width="282" height="300" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/find_command_prompt-282x300.jpg 282w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/find_command_prompt-964x1024.jpg 964w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/find_command_prompt-768x816.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/find_command_prompt.jpg 1287w" sizes="auto, (max-width: 282px) 100vw, 282px" /></p>
<p>When you have opened the Command Prompt, type &#8216;path&#8217; and hit ENTER. You will see something like this:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-13850 size-full" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/cmd_window.jpg" alt="" width="1832" height="494" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/cmd_window.jpg 1832w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/cmd_window-300x81.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/cmd_window-1024x276.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/cmd_window-768x207.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/cmd_window-1536x414.jpg 1536w" sizes="auto, (max-width: 1832px) 100vw, 1832px" /></p>
<p>The location highlighted above in red is where you need to move the WGET.exe file that you have just downloaded. Move the file there now, then restart the Command Prompt. (Please note that you will need admin rights on your computer to do this &#8211; if you are working on a University computer without admin rights, please request these temporarily via the UoN IT <a href="https://www.nottingham.ac.uk/dts/help/self-service-portal/self-service-portal.aspx" target="_blank" rel="noopener">Self-Service Portal</a>.)</p>
<p>To check that WGET is working, type:</p>
<pre>wget -h</pre>
<p>Hit ENTER and you should see something like this:</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-13851 size-large" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/wget_cmd-1024x570.jpg" alt="" width="675" height="376" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/wget_cmd-1024x570.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/wget_cmd-300x167.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/wget_cmd-768x427.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/wget_cmd-1536x855.jpg 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/wget_cmd.jpg 1835w" sizes="auto, (max-width: 675px) 100vw, 675px" /></p>
<p>It is helpful to create a new directory (a new folder) for your downloaded website(s). For our purposes, we will call this new folder &#8216;archivedSite&#8217;. In the Command Prompt, type the following and hit ENTER to create the folder:</p>
<pre>md archivedSite</pre>
<p>Then type the following and hit ENTER:</p>
<pre>cd archivedSite</pre>
<h4><strong>Now we are ready to download your website(s) into the new folder. You should only do this for websites where you own the content.</strong></h4>
<p>We&#8217;ll use the following website as an example: <a href="https://www.resilient-decarbonised-energy-dtc.ac.uk/" target="_blank" rel="noopener">https://www.resilient-decarbonised-energy-dtc.ac.uk/</a></p>
<p><img loading="lazy" decoding="async" class="aligncenter wp-image-13878 size-large" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/DTP_website_screenshot_cropped-1024x596.jpg" alt="" width="675" height="393" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/DTP_website_screenshot_cropped-1024x596.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/DTP_website_screenshot_cropped-300x175.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/DTP_website_screenshot_cropped-768x447.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2022/04/DTP_website_screenshot_cropped.jpg 1521w" sizes="auto, (max-width: 675px) 100vw, 675px" /></p>
<h3>Static version of the website</h3>
<p>To get a static version of the website, type (or copy/paste) the following into the Command Prompt (all in one line) and hit ENTER:</p>
<pre>wget --mirror --recursive --convert-links --adjust-extension --random-wait --page-requisites --local-encoding=UTF-8 --no-parent -R "*.php,*.xml" https://www.resilient-decarbonised-energy-dtc.ac.uk</pre>
<p>This will download a folder with HTML + CSS files for every page of the website, as well as accompanying media, such as images. All the hyperlinks in the website will work, including those that point externally to other websites. The downloaded files could be archived or displayed as a live static website on e.g. <a href="https://pages.github.com/" target="_blank" rel="noopener">GitHub Pages</a>, for free.</p>
<p>Nb. the download can take several minutes, depending on the size of the website.</p>
<p><strong>WARNING:  static websites won&#8217;t handle interactive elements (such as fancy </strong><strong>drop-down menus, submission forms, and other sophisticated plug-in functionality) nor can they deal with connections to underlying databases and search/filter functionality. If possible, you should remove these elements from your website before downloading it.</strong></p>
<p>&nbsp;</p>
<h3>WARC version of the website</h3>
<p>To get a WARC version of the website for archiving, type (or copy/paste) the following into the Command Prompt (all in one line) and hit ENTER:</p>
<pre>wget --mirror --recursive --convert-links --adjust-extension --random-wait --page-requisites --no-parent -R "*.php,*.xml" --warc-cdx --warc-file=TYPE_YOUR_FILENAME_HERE https://www.resilient-decarbonised-energy-dtc.ac.uk</pre>
<p>This will download a (zipped) WARC file and an accompanying CDX file (which is an index of the downloaded material). Both can be uploaded to a repository, such as <a href="https://rdmc.nottingham.ac.uk/" target="_blank" rel="noopener">the University&#8217;s Research Data Management Repository</a>, for permanent preservation.</p>
<p>Nb. the download can take several minutes, depending on the size of the website.</p>
<p>&nbsp;</p>
<h3>And there you have it!</h3>
<p>If you have any questions about the above, please do not hesitate to get in touch with <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Introduction-to-Digital-Research.aspx" target="_blank" rel="noopener">a member of the team</a>.</p>
<p>&nbsp;</p>
<h3>Further reading and resources</h3>
<p>The full WGET user manual: <a href="https://www.gnu.org/software/wget/manual/wget.html" target="_blank" rel="noopener">https://www.gnu.org/software/wget/manual/wget.html</a></p>
<p>An overview of the most common WGET commands/options: <a href="https://www.computerhope.com/unix/wget.htm" target="_blank" rel="noopener">https://www.computerhope.com/unix/wget.htm</a></p>
<p><a href="https://conifer.rhizome.org/" target="_blank" rel="noopener">Conifer</a>: a US-based web archiving service, which includes a click-through version of the above process</p>
<p>The <a href="https://www.webarchive.org.uk/en/ukwa/index" target="_blank" rel="noopener">UK Web Archive</a>, a partner of the UK Legal Deposit Libraries and responsible for preserving UK web content for future generations. You can ask the UK Web Archive to <a href="https://www.webarchive.org.uk/en/ukwa/info/nominate">Save a UK Website</a> for you.</p>
<p>For the University of Nottingham&#8217;s own digital preservation activity, notably around preserving elements of the main website, <a href="https://blogs.nottingham.ac.uk/manuscripts/2023/04/17/documenting-the-pandemic-and-beyond-website-captures-for-the-university-archives-available-to-view/" target="_blank" rel="noopener">see this blog</a>.</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2022/04/22/archiving-a-website/">Archiving a website</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Automated anonymisation of texts and transcripts</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2021/11/22/automated-anonymisation-of-texts-and-transcripts/</link>
		
		<dc:creator><![CDATA[Digital Research]]></dc:creator>
		<pubDate>Mon, 22 Nov 2021 08:42:55 +0000</pubDate>
				<category><![CDATA[Advice and Guidance]]></category>
		<category><![CDATA[Automated transcription]]></category>
		<category><![CDATA[Collaboration]]></category>
		<category><![CDATA[Data Analytics]]></category>
		<category><![CDATA[Process Automation]]></category>
		<category><![CDATA[Research Data Management]]></category>
		<category><![CDATA[Anonymization]]></category>
		<category><![CDATA[NLP]]></category>
		<category><![CDATA[Text data]]></category>
		<category><![CDATA[Transcription]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=13772</guid>

					<description><![CDATA[<p>In this blog, we discuss an automated process for anonymising interview transcripts, patient notes, or other free-text data containing personal information.  Colleagues wishing to share participant notes or interview transcripts, for example as publication appendices or in a research data repository, will likely need to anonymise the data. Anonymisation also comes with a number of ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2021/11/22/automated-anonymisation-of-texts-and-transcripts/">Automated anonymisation of texts and transcripts</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="200" height="300" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/man-with-back-turned-200x300.jpg" class="attachment-medium size-medium wp-post-image" alt="" style="float:right; margin:0 0 10px 10px;" decoding="async" loading="lazy" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/man-with-back-turned-200x300.jpg 200w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/man-with-back-turned.jpg 300w" sizes="auto, (max-width: 200px) 100vw, 200px" /><p><strong>In this blog, we discuss an automated process for anonymising interview transcripts, patient notes, or other free-text data containing personal information. </strong></p>
<p>Colleagues wishing to share participant notes or interview transcripts, for example as publication appendices or in a <a href="https://rdmc.nottingham.ac.uk/">research data repository</a>, will likely need to anonymise the data. Anonymisation also comes with a number of benefits, such as those listed by the UK <a href="https://ico.org.uk/media/about-the-ico/consultations/2619862/anonymisation-intro-and-first-chapter.pdf">Information Commissioner&#8217;s Office</a>:</p>
<ul>
<li>developing greater public trust and confidence that data is being used for the public good, while privacy is protected</li>
<li>incentivising researchers and others to use anonymous information instead of personal data, where possible</li>
<li>economic and societal benefits deriving from the availability of rich data sources</li>
</ul>
<p>We have been exploring ways to automate the anonymisation process.</p>
<p>Below, we provide you with a link to some <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Automated-an.aspx">copy-and-paste code</a> that will anonymise texts automatically. But first:</p>
<h3>What is de-identification?</h3>
<p>De-identification is the process of removing or obscuring personally identifiable information (PII) from a text or dataset. Such data typically includes names, locations, social security/national insurance numbers, contact details, and the like. The process can be approached in a number of ways, but the output usually falls into one of two camps:</p>
<p>a. the masking of PII with labels (&#8220;my name is Jane&#8221; becomes &#8220;my name is &lt;NAME&gt;&#8221;)<br />
b. the replacement of PII with dummy data (&#8220;my name is Jane&#8221; becomes &#8220;my name is John&#8221;)</p>
<p>We will focus on the first of these. Here is a fuller example of what an output looks like:</p>
<p><span style="text-decoration: underline;"><strong>Original text:</strong></span></p>
<p>00:00:02 Speaker 1: hi john, it&#8217;s nice to see you again. how was your weekend? do anything special?</p>
<p>00:00:06 Speaker 2: yep, all good thanks. i was with my sister in derby. We saw, you know, that james bond film. what&#8217;s it called? then got a couple of drinks at the pitcher and piano, back in nottingham.</p>
<p>00:00:18 Speaker 1: that&#8217;s close to your flat, right?</p>
<p>00:00:25 Speaker 2: yeah, about five minutes away. i live on parliament street, remember?</p>
<p>00:00:39 Speaker 1: of course, i remember. you moved last year after you left your parents&#8217; place.</p>
<p>00:00:39 Speaker 2: yeah, it was my sister&#8217;s birthday on sunday, susie, the older one. i told you last time about that new job she got. sainsbury&#8217;s, the one by victoria centre.</p>
<p><span style="text-decoration: underline;"><strong>De-identified text:</strong></span></p>
<p>00:00:02 Speaker 1: hi <strong>PER</strong>, it&#8217;s nice to see you again. how was your weekend? do anything special?</p>
<p>00:00:06 Speaker 2: yep, all good thanks. i was with my sister in <strong>LOC</strong>. We saw, you know, that <strong>MISC</strong> film. what&#8217;s it called? then got a couple of drinks at the pitcher and piano, back in <strong>LOC</strong>.</p>
<p>00:00:18 Speaker 1: that&#8217;s close to your flat, right?</p>
<p>00:00:25 Speaker 2: yeah, about five minutes away. i live on <strong>LOC</strong>, remember?</p>
<p>00:00:39 Speaker 1: of course, i remember. you moved last year after you left your parents&#8217; place.</p>
<p>00:00:39 Speaker 2: yeah, it was my sister&#8217;s birthday on sunday, <strong>PER</strong>, the older one. i told you last time about that new job she got. <strong>ORG</strong>, the one by <strong>LOC</strong>.</p>
<p>&nbsp;</p>
<p><img loading="lazy" decoding="async" class="wp-image-13447 size-medium alignright" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2020/12/Man-writing-300x200.jpg" alt="" width="300" height="200" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2020/12/Man-writing-300x200.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2020/12/Man-writing-1024x683.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2020/12/Man-writing-768x513.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2020/12/Man-writing-1536x1025.jpg 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2020/12/Man-writing-2048x1367.jpg 2048w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<h3>So how can you achieve this?</h3>
<p>Colleagues may already be familiar with <a href="https://scrubber.nlm.nih.gov/annotation/">NLM Scrubber</a> or similar pre-packaged tools. In our case, we have used an open-source model from the <a href="https://huggingface.co/">Hugging Face</a> community, a repository of pre-trained models for natural language processing. (The output above is based on <a href="https://huggingface.co/dslim/bert-base-NER">this particular model</a>).</p>
<p>With a few lines of Python code, you can produce de-identified text in under a minute.</p>
<p><strong>ANONYMISE YOUR OWN DATA.</strong> If you would like to replicate these results, you can find the code here: <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Automated-an.aspx">https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Automated-an.aspx</a></p>
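<p>To make the masking step concrete, here is a minimal Python sketch. The entity spans are hand-written for illustration, mimicking the output shape of a Hugging Face NER pipeline run with entity aggregation (dicts with "entity_group", "start" and "end" fields); in a real run these spans would come from the model itself.</p>

```python
# Sketch: mask PII spans with entity labels (PER, LOC, ORG, ...).
# The entity spans below are hand-written to mimic what a Hugging Face
# NER pipeline (e.g. dslim/bert-base-NER with aggregation) returns.

def mask_entities(text, entities):
    """Replace each detected span in `text` with its entity label."""
    out, cursor = [], 0
    for ent in sorted(entities, key=lambda e: e["start"]):
        out.append(text[cursor:ent["start"]])  # keep text before the span
        out.append(ent["entity_group"])        # swap the span for its label
        cursor = ent["end"]
    out.append(text[cursor:])                  # keep the remainder
    return "".join(out)

text = "my name is Jane and I live in Nottingham"
entities = [
    {"entity_group": "PER", "start": 11, "end": 15},  # "Jane"
    {"entity_group": "LOC", "start": 30, "end": 40},  # "Nottingham"
]
print(mask_entities(text, entities))  # my name is PER and I live in LOC
```

<p>The masking function works unchanged on longer transcripts; the only model-dependent part is producing the entity spans.</p>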
<h3>Some caveats</h3>
<p>Machine-learning algorithms contain bias stemming from the data on which the model has been trained. These biases can lead to errors like false positives (a non-PII word is identified as PII) or false negatives (PII is not identified, and therefore not anonymised). In the case of the model described above, the creators specifically note that:</p>
<p><em>This model is limited by its training dataset of entity-annotated news articles from a specific span of time. This may not generalize well for all use cases in different domains. Furthermore, the model occasionally tags subword tokens as entities and post-processing of results may be necessary to handle those cases.</em></p>
<p>Outputs reliant on pre-trained models should always be checked for errors.</p>
<h3>Next steps</h3>
<p>The process described above requires basic knowledge of running Python scripts. Our next step is to investigate ways of packaging the process so that it requires no coding knowledge.</p>
<p>If you are interested in talking to us about this work and getting involved, please feel free to contact <a href="https://uniofnottm.sharepoint.com/sites/DigitalResearch/SitePages/Introduction-to-Digital-Research.aspx">one of the team</a>.</p>
<h3>Further reading</h3>
<p>Berg, H., Henriksson, A., Fors, U., Dalianis, H. (2021) &#8216;<a href="https://www.scitepress.org/Papers/2021/103187/103187.pdf">De-identification of clinical text for secondary use: research issues</a>&#8216;. <em>HEALTHINF</em>. pp.592-99.</p>
<p>Johnson, A., Bulgarelli, L., Pollard, T. (2020) &#8216;<a href="https://dl.acm.org/doi/pdf/10.1145/3368555.3384455">Deidentification of free-text medical records using pre-trained bidirectional transformers</a>&#8216;. <em>CHIL &#8217;20 Proceedings. </em>pp. 214-21.</p>
<p><strong>Infographic</strong>: The Future of Privacy Forum, <a href="https://fpf.org/wp-content/uploads/2017/06/FPF_Visual-Guide-to-Practical-Data-DeID.pdf"><em>A Visual Guide to Practical Data De-identification</em></a>.</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2021/11/22/automated-anonymisation-of-texts-and-transcripts/">Automated anonymisation of texts and transcripts</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Creating UoN Research Link</title>
		<link>https://blogs.nottingham.ac.uk/digitalresearch/2021/11/09/creating-uon-research-link/</link>
		
		<dc:creator><![CDATA[Digital Research]]></dc:creator>
		<pubDate>Tue, 09 Nov 2021 09:57:51 +0000</pubDate>
				<category><![CDATA[Collaboration]]></category>
		<category><![CDATA[Data Analytics]]></category>
		<category><![CDATA[UoN Research Link]]></category>
		<category><![CDATA[Research Link]]></category>
		<guid isPermaLink="false">https://blogs.nottingham.ac.uk/digitalresearch/?p=13701</guid>

					<description><![CDATA[<p>In this blog, we talk about how we created UoN Research Link and some of the technical and design decisions taken. What is UoN Research Link? UoN Research Link lets you search for University of Nottingham colleagues according to research interests and expertise. It can help researchers identify collaborators from across the University&#8217;s campuses. You ...</p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2021/11/09/creating-uon-research-link/">Creating UoN Research Link</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></description>
										<content:encoded><![CDATA[<img width="300" height="198" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/Suggested_photo_for_landing_page-300x198.jpg" class="attachment-medium size-medium wp-post-image" alt="" style="float:right; margin:0 0 10px 10px;" decoding="async" loading="lazy" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/Suggested_photo_for_landing_page-300x198.jpg 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/Suggested_photo_for_landing_page-1024x675.jpg 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/Suggested_photo_for_landing_page-768x506.jpg 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/Suggested_photo_for_landing_page-1536x1012.jpg 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/Suggested_photo_for_landing_page-2048x1350.jpg 2048w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/Suggested_photo_for_landing_page-scaled-e1636374597320.jpg 1750w" sizes="auto, (max-width: 300px) 100vw, 300px" /><h3>In this blog, we talk about how we created UoN Research Link and some of the technical and design decisions taken.</h3>
<h4><strong>What is UoN Research Link?</strong></h4>
<p><em><img loading="lazy" decoding="async" class="size-medium wp-image-13702 alignright" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/UoN-Research-Link-Logo_with-text-300x165.png" alt="" width="300" height="165" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/UoN-Research-Link-Logo_with-text-300x165.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/UoN-Research-Link-Logo_with-text-1024x564.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/UoN-Research-Link-Logo_with-text-768x423.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/UoN-Research-Link-Logo_with-text-1536x847.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/UoN-Research-Link-Logo_with-text-2048x1129.png 2048w" sizes="auto, (max-width: 300px) 100vw, 300px" /> <a tabindex="-1" title="https://https//researchlink.nottingham.ac.uk" href="https://researchlink.nottingham.ac.uk/" target="_blank" rel="noopener noreferrer" aria-label="Link UoN Research Link">UoN Research Link</a></em> lets you search for University of Nottingham colleagues according to research interests and expertise. It can help researchers identify collaborators from across the University&#8217;s campuses. You can search for any topic of interest and it will return a list of colleagues working in related areas, together with a summary of each researcher&#8217;s interests.</p>
<h4><strong>The underlying data</strong></h4>
<p>Our first challenge was to create a database of researcher interests. Not wanting to burden colleagues with supplying the data manually, we decided to use <a href="https://www.nottingham.ac.uk/dts/researcher/applications-and-tools/staff-profile.aspx">eStaff profile</a> pages, currently the fullest set of information about research expertise at the University.</p>
<p>Not all information on the profile pages is relevant to UoN Research Link. For example, we ignore contact details, teaching expertise, and publications. The <em>Research Summary </em>and <em>Expertise Summary </em>sections are the main ones we use. Colleagues without an eStaff profile page or with no information in the aforementioned two sections will not appear in UoN Research Link.</p>
<p>The data is cleaned and formatted, for example by removing duplicate profiles (some colleagues are affiliated with more than one School or Research Group) and deleting hyperlinks, special characters (like bullet points), professional titles and degrees, as well as common words like &#8216;university&#8217; or &#8216;research&#8217;. This is done using the programming language R, which has good packages for text manipulation.</p>
<p>The connection between UoN Research Link and eStaff profiles is not live (i.e. changes to eStaff profiles won&#8217;t immediately carry over to UoN Research Link), but the data collection process will be re-run every three months over the lifetime of UoN Research Link. This means that changes to research expertise or the arrival of new research colleagues will be captured, albeit only every quarter.</p>
<h4><strong>Generating keywords</strong></h4>
<p>The next challenge was to present researchers&#8217; interests succinctly. Keywords are good for this, but few colleagues have separately listed keywords on their profile pages. Again, we did not want to burden colleagues by asking them to supply keywords, so we generate these automatically.</p>
<p>We tested different methods for generating keywords. We tried extracting the most frequently occurring words in a given profile, but that returned mostly banal terms. The same approach but after removing the 1000 most common English words was also disappointing. Models based on machine learning held more promise. Both <a href="https://en.wikipedia.org/wiki/BERT_(language_model)">BERT transformer-based models</a> and topic modelling NLP models such as <a href="https://radimrehurek.com/gensim/">Gensim</a> worked well and generated sensible keywords distanced from each other (it can be unhelpful when all of a researcher&#8217;s keywords are very similar). However, we found that both these approaches were inflexible in the number of words used as keywords. We wanted an approach that would produce both single words and collocations (and parse these accurately). The keyword &#8216;synthetic organic chemistry&#8217; is more helpful than just &#8216;chemistry&#8217; or, worse, the unintelligible &#8216;synthetic organic&#8217;. In our case, the model that produced the best results was taken from the <a href="https://github.com/boudinfl/pke">Python Keyphrase Extraction toolkit</a>.</p>
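<p>For readers curious what the frequency baseline looks like in practice, here is a simplified Python sketch (illustrative only &#8211; our production pipeline uses the pke toolkit, and the tiny stop-word list below is a stand-in for the fuller cleaning described above):</p>

```python
import re
from collections import Counter

# A toy version of the frequency baseline we tried (and found too crude).
# STOPWORDS is a tiny stand-in for the fuller list of common words
# ('university', 'research', etc.) removed during data cleaning.
STOPWORDS = {"the", "of", "and", "in", "a", "on", "for", "with",
             "my", "university", "research"}

def frequent_terms(profile_text, n=6):
    """Return the n most frequent non-stop-word terms in a profile."""
    words = re.findall(r"[a-z]+", profile_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(n)]

profile = ("My research focuses on synthetic organic chemistry, "
           "particularly catalysis. Recent chemistry projects explore "
           "catalysis for sustainable synthesis.")
print(frequent_terms(profile, 3))
```

<p>Even with stop words removed, generic terms can still surface near the top, and single words cannot capture a phrase like &#8216;synthetic organic chemistry&#8217; &#8211; which is why we moved to keyphrase models.</p>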
<p><img loading="lazy" decoding="async" class=" wp-image-13746 aligncenter" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/keywords-tests-300x124.png" alt="" width="501" height="207" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/keywords-tests-300x124.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/keywords-tests-1024x422.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/keywords-tests-768x317.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/keywords-tests-1536x634.png 1536w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/keywords-tests-2048x845.png 2048w" sizes="auto, (max-width: 501px) 100vw, 501px" /></p>
<p>Each profile in the database is thus supplemented with up to six keywords, generated automatically. These are displayed under colleagues&#8217; names when they come up in a search, providing users with a quick overview of a researcher&#8217;s interests and expertise.</p>
<p>Despite its accuracy, the keyphrase generator still occasionally produces sub-optimal keywords (it struggles on hyphenated terms, for example), and we realise that colleagues may also want to provide their own keywords. The site administrators can therefore manually update any researcher&#8217;s keywords. Colleagues can visit the <a href="https://researchlink.nottingham.ac.uk/contact/">Contact page</a> to arrange this.</p>
<h4><strong>Suggesting related researchers</strong></h4>
<p>We also wanted to identify researchers with similar profiles. Colleagues may be working in related areas and not be aware of this. Researchers new to the University may wish to seek out connections. Prospective post-graduate students might be looking for complementary supervisors.</p>
<p>To achieve the &#8216;Related Researchers&#8217; function, we use a proxy measure: the similarity of one profile text to all others. We calculate this with the R package <a href="https://quanteda.io/">quanteda</a>, which we used to tokenise the corpus and generate a document-feature matrix; it also has handy functions for removing stop words, word stemming and other NLP processes. We use the <a href="https://en.wikipedia.org/wiki/Cosine_similarity">cosine similarity</a> measure to compare researchers&#8217; expertise texts, although other measures are available (<a href="https://towardsdatascience.com/9-distance-measures-in-data-science-918109d069fa">Jaccard, Sørensen-Dice, etc.</a>). Here is an example output:</p>
<pre><strong>Name              School      similarity </strong> 
Gosling, Simon    Geography      1     
Grundmann, Reiner Sociology      0.378
Jones, Matthew    Geography      0.338 
Dugdale, Steve    Geography      0.295 
Panizzo, Virginia Geography      0.268</pre>
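<p>Our pipeline uses quanteda in R, but the underlying calculation is easy to sketch in Python: tokenise each profile, count the tokens (effectively one row of a document-feature matrix), and take the cosine of the angle between the count vectors. The profile texts below are invented for illustration; as in the table above, a researcher&#8217;s similarity to themselves is 1:</p>

```python
import math
import re
from collections import Counter

STOP_WORDS = frozenset({"the", "of", "and", "in", "on"})

def tokenise(text):
    """Lowercase, split into word tokens and drop stop words."""
    return [t for t in re.findall(r"[a-z]+", text.lower())
            if t not in STOP_WORDS]

def cosine_similarity(tokens_a, tokens_b):
    """Cosine similarity between two token-count vectors."""
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm if norm else 0.0

# Invented profile texts; the names echo the example table above.
profiles = {
    "Gosling, Simon": "climate change impacts on global water resources",
    "Jones, Matthew": "climate variability and water availability in river basins",
    "Smith, Jane":    "medieval manuscripts and book history",
}

target = tokenise(profiles["Gosling, Simon"])
scores = {name: cosine_similarity(target, tokenise(text))
          for name, text in profiles.items()}
print(scores)  # self-similarity is 1.0; unrelated profiles score 0.0
```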
<p>It&#8217;s worth emphasising that only the profile texts are being compared here &#8211; other contextual information is not considered. We also set a threshold on the similarity measure so only colleagues with reasonably similar profile texts appear.</p>
<p>The entire process for formatting the data, generating the keywords, and calculating the similarity measures represents about 1200 lines of code (90% in R, 10% in Python). This is a work in progress. By separating the database generation from the user portal, we can continue to refine the code over the lifetime of UoN Research Link.</p>
<h4><strong>Creating the tool and user testing</strong></h4>
<p>Before engaging a development partner to make the website, we worked with the University&#8217;s User Experience (UX) team. They helped us think about how to present the data and how users navigate databases successfully. This led to the introduction of various filters as well as the option to search not only by research topic, but also by researcher name. You can still see one of the clickable <a href="https://projects.invisionapp.com/share/3U107XO12MXR#/screens">mock-ups we created here</a>. Our development <a href="https://mackman.co.uk/">partner</a> then refined the look and feel, particularly around the aesthetics of the site. Below, for example, are some ideas around toggle buttons that didn&#8217;t make the final cut:</p>
<p><img loading="lazy" decoding="async" class=" wp-image-13733 aligncenter" src="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/toggles-png-300x212.png" alt="" width="331" height="234" srcset="https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/toggles-png-300x212.png 300w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/toggles-png-1024x724.png 1024w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/toggles-png-768x543.png 768w, https://blogs.nottingham.ac.uk/digitalresearch/files/2021/11/toggles-png.png 1436w" sizes="auto, (max-width: 331px) 100vw, 331px" /></p>
<p>Our partner also introduced the &#8216;try searching related topics&#8217; option (which proposes only topics that actually occur in the database).</p>
<p>Two aspects of this work phase were especially positive and impactful:</p>
<p>1) Adopting an agile approach allowed us to make iterative changes as the site development progressed. This meant that issues could be addressed quickly and workloads could be managed. It allowed us to finish ahead of schedule and with no major setbacks.</p>
<p>2) User testing of an early prototype threw many of our assumptions into relief. The user-testing exercise was absolutely invaluable and made the tool much more intuitive and accessible than we had originally designed.</p>
<p>Working with the development partner and with colleagues from different parts of the University during the creation of UoN Research Link was a pleasure.</p>
<h4><strong>So how does it all work?</strong></h4>
<p>In the background, it&#8217;s quite basic. A user can enter any topic as a search term. The site will then look through the expertise text and keywords of every researcher in the database. If it finds a match for the searched-for term or terms, then that researcher will be displayed. Each relevant researcher is shown on a tile that contains their keywords. Clicking on &#8216;Related researchers&#8217; at the bottom right of the tile will reveal colleagues with similar expertise profile texts.</p>
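<p>That matching step can be sketched as follows (the records and field names here are invented for illustration; the live site does this within the web application, not in Python):</p>

```python
# Hypothetical records: each researcher has an expertise text and keywords.
researchers = [
    {"name": "Gosling, Simon",
     "expertise": "Climate change impacts on global water resources.",
     "keywords": ["climate", "water", "hydrology"]},
    {"name": "Smith, Jane",
     "expertise": "Medieval manuscripts and book history.",
     "keywords": ["manuscripts", "palaeography"]},
]

def search(term, records):
    """Return every researcher whose expertise text or keywords match the term."""
    term = term.lower()
    return [r["name"] for r in records
            if term in r["expertise"].lower()
            or any(term in keyword for keyword in r["keywords"])]

print(search("water", researchers))        # matches via the expertise text
print(search("palaeography", researchers)) # matches via a keyword
```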
<h4><strong>What got left out</strong></h4>
<p>Technical, financial, and time constraints meant we had to pare down or sacrifice some hoped-for aspects of the tool. For example, we had considered options for visually representing connections between researchers in a network graph. Ultimately this proved too ambitious (although the idea survives in the Research Link logo).</p>
<p>Presenting search results in a logical order also proved more challenging than anticipated. We explored options such as prioritising those researchers for whom the searched-for term appears more frequently in their profile text. But since these calculations would need to happen within the website or would rely on a huge database of all possible permutations of searched terms, it would have greatly slowed down the tool. In the end, we opted for a random-order presentation of search hits.</p>
<p>We had also wanted to include colleagues based at the Malaysia and China campuses, but were unable to obtain the data in time.</p>
<p><strong>Update 07.08.2023: UoN Research Link now includes colleagues based in China and Malaysia</strong></p>
<h4><strong>A word of thanks</strong></h4>
<p>The development of UoN Research Link was supported by the University’s ESRC Impact Acceleration Account.</p>
<p><strong>Click here to visit UoN Research Link</strong>: <a href="https://researchlink.nottingham.ac.uk/">https://researchlink.nottingham.ac.uk/</a></p>
<p>The post <a href="https://blogs.nottingham.ac.uk/digitalresearch/2021/11/09/creating-uon-research-link/">Creating UoN Research Link</a> appeared first on <a href="https://blogs.nottingham.ac.uk/digitalresearch">Digital Research</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
