﻿<!--<!DOCTYPE nitf SYSTEM "nitf-3-4.dtd">-->
<nitf>
  <head>
    <title id="Title">Keep looking at the stars, that is, keep your focus on your goal. Do not give up, because work gives you a sense of purpose and drives away the emptiness of life. However many difficulties life may bring, maintain your curiosity and enthusiasm. Remember, your goals are always within reach, and you can begin the effort to achieve them at any time.</title>
    <docdata management-doc-idref="">
      <date.issue id="CreationDate" norm="" />
      <du-key id="rev-ver" generation="1" version="Default" />
      <du-key id="Parent-Version" version="" />
      <identified-content>
        <classifier id="newspro-nitf" value="r2" />
        <classifier id="Newspro-App" value="Epaper" />
        <classifier id="Content-Type" value="Story" />
        <classifier id="storyID" value="" />
        <classifier id="CmsConID" value="" />
        <classifier id="Desk" value="" />
        <classifier id="Source" value="" />
        <classifier id="Edition" value="" />
        <classifier id="Category" value="-1" />
        <classifier id="UserName" value="" />
        <classifier id="PublicationDate" value="20220103" />
        <classifier id="PublicationName" value="Hindustan" />
        <classifier id="IsPublished" value="Y" />
        <classifier id="IsPlaced" value="Y" />
        <classifier id="IsCompleated" value="N" />
        <classifier id="IsProofed" value="N" />
        <classifier id="User" value="" />
        <classifier id="Headline-Count" value="" />
        <classifier id="Slug-Count" value="0" />
        <classifier id="Photo-Count" value="0" />
        <classifier id="Caption-Count" value="0" />
        <classifier id="Word-Count" value="0" />
        <classifier id="Character-Count" value="0" />
        <classifier id="Location" value="" />
        <classifier id="TemplateType" value="1" />
        <classifier id="StoryType" value="Story" />
        <classifier id="Author" value="" />
        <classifier id="UOM" value="mm" />
        <classifier id="IndexPage" value="" />
        <classifier id="box-geometry" value="-7,40,950,284" />
        <classifier id="Epaper-Build" value="Build-No: 2.1.0.9, Dated: 04/12/2021" />
        <classifier id="Application" value="QuarkXpress 8" />
        <classifier id="MachineName" value="TV0254" />
        <classifier id="ProcessingDateTime" value="Mon 03 Jan 2022 07:00:24" />
      </identified-content>
      <urgency id="home-page" ed-urg="0" />
      <urgency id="priority" ed-urg="0" />
      <doc-scope id="scope" value="0" />
    </docdata>
    <pubdata type="print" name="Hindustan" date.publication="20220103T000000+5.30" edition.name="RPAjmCity" edition.area="RPAjmCity" position.section="03012022-RPAjmCity-01-PAGE-03012022_RPAjmCity_01~WS4~" position.sequence="01" ex-ref="03012022-RPAjmCity-01-PAGE-03012022_RPAjmCity_01~WS4~" SectionName="" />
  </head>
  <body>
    <body.head>
      <hedline>
        <hl1 id="kicker" class="1" style="Shoulder" MainHead="false">
          <lang class="3" style="kicker" font="Patrika18" size="12">
</lang>
        </hl1>
        <hl1 id="Headline" class="1" style="Headline" MainHead="true">
          <lang class="3" style="Headline" font="Patrika18" fontStyle="Bold" size="15">Thoughts of dying patients decoded
</lang>
        </hl1>
        <hl1 id="Subhead" class="1" style="Subhead" MainHead="true">
          <lang class="3" style="Subhead" font="Patrika18" fontStyle="Bold" size="15">
</lang>
        </hl1>
        <hl1 id="Byline" class="1" style="Byline" MainHead="true">
          <lang class="3" style="Byline" font="Patrika18" fontStyle="Bold" size="15">BBC Online
</lang>
        </hl1>
      </hedline>
      <summary></summary>
      <quotes>
        <quote></quote>
      </quotes>
    </body.head>
    <body.content id="Bodytext">
      <block>
        <media id="1" media-type="image">
          <media-reference id="tn" source-credit="" data-location="1" ImgOrderNum="" source="03012022-RPAjmCity-01-PAGE-03012022_RPAjmCity_01~WS4~_SubGroupImage_720446704_tn.JPG" Units="pixels" width="50" height="50"></media-reference>
          <media-caption id="Caption1" font="">
            <hl2></hl2>
          </media-caption>
          <media-reference id="tn" source-credit="" data-location="2" ImgOrderNum="" source="03012022-RPAjmCity-01-PAGE-03012022_RPAjmCity_01~WS4~_SubGroupImage_720325568_tn.JPG" Units="pixels" width="50" height="50"></media-reference>
          <media-caption id="Caption1" font="">
            <hl2></hl2>
          </media-caption>
          <media-reference id="tn" source-credit="" data-location="3" ImgOrderNum="" source="03012022-RPAjmCity-01-PAGE-03012022_RPAjmCity_01~WS4~_SubGroupImage_720436736_tn.JPG" Units="pixels" width="50" height="50"></media-reference>
          <media-caption id="Caption1" font="">
            <hl2></hl2>
          </media-caption>
          <media-reference id="tn" source-credit="" data-location="4" ImgOrderNum="" source="03012022-RPAjmCity-01-PAGE-03012022_RPAjmCity_01~WS4~_SubGroupImage_715957792_tn.JPG" Units="pixels" width="50" height="50"></media-reference>
          <media-caption id="Caption1" font="">
            <hl2></hl2>
          </media-caption>
          <media-reference id="tn" source-credit="" data-location="5" ImgOrderNum="" source="03P1 StephenHawkings_tn.JPG" Units="pixels" width="50" height="50"></media-reference>
          <media-caption id="Caption1" font="">
            <hl2></hl2>
          </media-caption>
        </media>
      </block>
      <p style=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">Researchers have demonstrated a striking method for reconstructing words from the brain waves of patients thinking of those words. The technique, reported in PLoS Biology, relies on gathering electrical signals directly from patients' brains.
</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">Based on signals from listening patients, a computer model was used to reconstruct the sounds of words that patients were thinking of.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">The method may in future help comatose and locked-in patients communicate.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">A technique called functional magnetic resonance imaging, which tracks blood flow in the brain, has shown promise for identifying which words or ideas someone may be thinking about.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">By studying patterns of blood flow related to particular images, Jack Gallant's group at the University of California, Berkeley showed in September that those patterns can be used to guess images being thought of, recreating "movies in the mind".</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">Now, Brian Pasley of the University of California, Berkeley and a team of colleagues have taken that "stimulus reconstruction" work one step further.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">The team focused on an area of the brain called the superior temporal gyrus, or STG. This broad region is not just part of the hearing apparatus but one of the "higher-order" brain regions that help us make linguistic sense of the sounds we hear.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">The team monitored the STG brain waves of patients who were undergoing surgery for epilepsy or tumours, while playing audio of a number of different speakers reciting words and sentences.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">The trick is disentangling the chaos of electrical signals that the audio brought about in the patients' STG regions.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">To do that, the team employed a computer model that helped map out which parts of the brain were firing at what rate when different frequencies of sound were played.</lang>
      </p>
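The approach described above amounts to stimulus reconstruction: learn how neural activity encodes sound frequencies, then invert that mapping to recover the sound from the activity alone. The sketch below illustrates the idea on purely synthetic data with a simple least-squares decoder; it is not the study's actual pipeline, and every size, noise level, and variable name is an assumption made for illustration.

```python
# Illustrative sketch of linear stimulus reconstruction on synthetic data.
# A hidden linear "encoding" turns a sound spectrogram into electrode
# responses; we then fit a decoder that maps the responses back to the
# spectrogram, analogous in spirit to the approach the article describes.
import numpy as np

rng = np.random.default_rng(0)
n_time, n_freq, n_electrodes = 500, 16, 32  # arbitrary assumed sizes

# Synthetic spectrogram: time steps x frequency-band energies of a "word".
spectrogram = rng.random((n_time, n_freq))

# Hidden encoding: each electrode responds to a mix of frequency bands,
# plus a small amount of measurement noise.
encoding = rng.normal(size=(n_freq, n_electrodes))
neural = spectrogram @ encoding + 0.1 * rng.normal(size=(n_time, n_electrodes))

# Decoder: least-squares map from neural activity back to the spectrogram.
decoder, *_ = np.linalg.lstsq(neural, spectrogram, rcond=None)
reconstructed = neural @ decoder

# How well does the reconstruction match the original stimulus?
r = np.corrcoef(spectrogram.ravel(), reconstructed.ravel())[0, 1]
print(f"reconstruction correlation: {r:.2f}")
```

Because the synthetic encoding is linear and the noise is small, the decoder recovers the stimulus almost perfectly here; real neural data is far noisier and nonlinear, which is why the study needed a detailed model of frequency tuning rather than a single matrix inversion.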
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">With the help of that model, when patients were presented with words to think about, the team was able to guess which word the participants had chosen.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">They were even able to reconstruct some of the words, turning the brain waves they saw back into sound on the basis of what the computer model suggested those waves meant.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">"There's a two-pronged nature of this work — one is the basic science of how the brain does things," said Robert Knight of UC Berkeley, senior author of the study.</lang>
      </p>
      <p class=".Bodylaser">
        <lang class="3" style=".Bodylaser" font="Patrika15 Ultra" fontStyle="Bold" size="130">The authors caution that the thought-translation approach must still be vastly improved before such prosthetic devices become a reality.</lang>
      </p>
    </body.content>
  </body>
</nitf>