Nothing like working with your kids at 9:00 in the
morning.
Can you hear me up the back there?
Is it being amplified?
Okay.
Thank you.
So in the last lecture, I tried to help you understand, to convey, that the input to the brain from the senses is this array of signals carried by nerve cells that represent something about the outside world.
But that array of signals coming in through the sensory
nerves is carried, is constructed over a series of parallel
pathways, each of which carries slightly different types of information
about the outside world.
And when you get to the cortex, these pathways then converge and build maps, topographic maps of the sensory periphery. That might be a topographic map of the retina, a topographic map of the skin, of the cochlea, etc.
And what I hope I got across to you as well is that those maps are distorted, distorted by the density of receptors in different parts of the skin or the eye, for example, and that those maps can be plastic, although we don't know exactly how much and when. Now, those maps are useful: they're very good at representing what's happening on the surface of our body.
But if we want to make movements through the world,
then we have to do something else.
We have to transform those maps into something that is in a coordinate frame, a frame of reference, that is behaviourally useful.
And the structure of this lecture is to try and take you through some of the places in the brain, some of the mechanisms, that seem to be at least the starting point for constructing those frames of reference.
What I want to reinforce in this lecture as well is that, just as the signals that come from the sensory periphery to the cortex are carried in parallel, these reference frames are also constructed in parallel.
We build many different reference frames at the same time
in the brain.
And what we seem to do is to try to
use a particular reference frame when we need to accomplish
a task like picking up the phone.
In that case, I might use a reference frame which
is centred on my hand because that's where the most
important part is.
So I'm going to take you through those reference frames in the first part of this lecture, and then I'm going to use them to provide a framework for trying to understand, potentially, what the mechanisms are for an interesting type of nerve circuit in the brain: mirror neurones, which many of you may have heard about.
Now, I said to you a couple of lectures ago that these lectures were designed when we were in the pandemic and I was pre-recording them, and I'm struggling to work out the exact pacing. I discovered that to my cost this week.
What I've done is this: there's a section of the notes in your slides called Controlling Movements, which I'm going to omit from today's lecture so that we can discuss these more interesting aspects of the brain. I have uploaded a pre-recorded version of that section; it's about 10 minutes long and it's accessible on Moodle, and I'd like you to have a look at it when you have a chance.
It won't be necessary for understanding the rest of the
content of this lecture, although there are a couple of
subtleties that may be apparent.
[Inaudible.] Oops.
I just want to start this section with a recording that I obtained some years ago, from a source I can't remember.
There's one particular reason why I want to show you this recording, and I'll tell you what it is at the end.
But it also introduces you to the idea of spatial neglect.
Has anyone here heard of spatial neglect before?
Anyone?
Put your hand up.
Oh, in that case, prepare for your mind to be blown, because spatial neglect is one of the most interesting unsolved mysteries of brain function. So this first video should help introduce you to it, and then we'll discuss a little bit about it and the mechanism by which it occurs.
[A video is played. The audio is largely unintelligible in this transcript, but it is a clinical demonstration of spatial neglect: a patient is tested, for example by being asked to read or cross out letters on a page and to describe what she sees, and she uses only the right side of the page, describing the scene fluently and with no apparent awareness of the neglected left side.]
Okay, so I wanted to share that video particularly for the verbal description from the patient who has neglected those letters on the page. You can sense from that description how automatic her description of the scene is. It's not like she's struggling through it; she simply describes what she perceives from the letters that she is aware of.
So this neglect is a really strange and powerful thing.
It's very rare.
It seems to entail a distortion of perceived space, and, although it's not described particularly here, it can actually occur in more than one different reference frame.
Two of those reference frames were illustrated here: one was the left side of visual space, and the other was the left side of an object. I'll get back to the second one in a minute.
And in fact, it's a very complex phenomenon. We don't understand it very well. A lot of this work, and some of these references in the slides, actually comes from colleagues at the Institute of Cognitive Neuroscience, which is about 120 metres that way in Queen Square — particularly Jon Driver, who unfortunately is now dead. A lot of the descriptions came from studying patients with particular types of lesions that we'll get to in a second.
This is a very automatic absence of awareness of a
part of your visual field or part of an object
or part of something.
It really suggests that when we construct our model, our interpretation, of the outside world, we're using different types of representations to do that.
And depending on whether or not they are all intact,
we have the capacity to access an appropriate one for
the task at hand.
So as was alluded to in that description, neglect is
not simply an absence of awareness, at least not in
some simple way of being blind.
So, for example, these are two particular examples of how you might test neglect in patients. In this case a patient with a left hemisphere stroke — I think that should actually be a right hemisphere stroke — was asked to cross out the components of a Navon figure. Navon figures are figures which have a global structure but are actually made up of little elements, in this case letters like A's. The patient in this case crosses out only those on the right-hand side of the figure. However, when asked to describe the figure, they can actually say it's a square or a circle.
So there's some strange things going on in this neglect.
And similarly, down here on the bottom is a really lovely experiment from Jon Driver, where they did something akin to what you saw in the video. Here's what the patient is asked to draw. You can see that when asked to draw the right-hand side cat, they can reproduce it fairly well. However, when asked to draw the left-hand side one, there's a very poor reproduction of that cat. This is a very interesting way of asking the question: is it the left part of the visual field, the left part of my vision, that is neglected, or is it the left part of an object? A very simple test, and ingenious.
To answer that, you ask the patient simply to draw each of these two identical objects, and you can see that in both cases the left-hand side of the object is omitted, not the left-hand part of the visual field. So when the object is tilted to the right, it's still the left-hand side of the object that is omitted.
So neglect is a really strange form of brain damage — worrying for the patient, but intriguing for the experimenter.
And a lot of that research still goes on at
the Institute of Neurology and the Institute of Cognitive Neuroscience
just down the road.
So what does neglect tell us about representation of the
world in our brain?
As I said, neglect entails an inability to perceive the relationships of things within a particular frame of reference — a part of the visual field, a part of an object, and other ones that I haven't gone through here. Depending on the spread and the focus of the brain damage, the affected frame of reference can be egocentric, allocentric, extrapersonal or peripersonal.
And I hope that you understand each of those terms
by the end of this lecture.
It suggests that there isn't one single spatial reference frame through which we see the world.
Instead, there are several, each with their own neural representation,
each of which can be deployed depending on the task
at hand.
And we expect that it should be possible to identify
the different frames of reference in the brain.
And I should note here, by the way, that most
people that show neglect also show other forms of deficits.
And the reason for that, generally speaking, is that the
brain damage that leads to neglect often spreads to other
areas and other functions.
So to understand what we're going to be talking about, I need to describe to you the different types of frames of reference, and this picture summarises them. The two classes are really egocentric and allocentric.
Egocentric is very easy to think about: it's just with respect to the ego, to the body. So, for example, with respect to my body, this is my right and this is my left. If I turn around, that's now to my right and that's to my left. The same holds for my eyes: if I'm looking at you, then that's the part of the visual world on my right and that's the part on my left. If I look over there, there are still parts of the visual world to my right and to my left, but whoever was in the left field is now on the right-hand side of my gaze.
That's an egocentric form of representation, something that's with respect
to my body.
It could be my head, my retina.
It could also be other forms that we'll discover in a second.
The other major class of frames of reference is allocentric, world-centred, space. So, for example, a GPS gives you coordinates in north, south, east and west. That's a world-centred space.
It doesn't depend on the direction I'm facing. Again, if I stand here, this is my right and that's my left. If I turn, this is still my right and that's my left, but now I'm facing east. So although east and south have not changed in the allocentric coordinates, my egocentric coordinates have changed.
So those allocentric coordinates — these world-based, or sometimes even object-based, reference frames — are things that are independent of our own body position and depend only on the structure of the world outside.
It turns out that that's the most stable representation to
use because that doesn't depend on where I'm moving.
But when we describe something in the world to other people, we often choose to use egocentric reference frames.
So, for example, if I was to describe how to get out of the building, I might say: turn left, turn left again, turn right and then go straight ahead. That's an egocentric description. Or I could say: head south, then east, then slightly north, and then east again. That would be an allocentric description of the same route.
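To make that distinction concrete, here is a minimal sketch — purely illustrative, not part of the lecture, with made-up function names and an assumed starting heading — of how the same egocentric route can be re-expressed as allocentric compass bearings once the heading is known.

```python
# Illustrative sketch only: converting egocentric turns into
# world-centred (allocentric) compass bearings, given a starting heading.
COMPASS = ["north", "east", "south", "west"]

def follow_route(start_heading, turns):
    """Return the compass bearing faced after each egocentric turn.

    start_heading: one of "north", "east", "south", "west"
    turns: list of "left", "right", or "straight"
    """
    idx = COMPASS.index(start_heading)
    bearings = []
    for turn in turns:
        if turn == "left":
            idx = (idx - 1) % 4   # a left turn rotates the heading anticlockwise
        elif turn == "right":
            idx = (idx + 1) % 4   # a right turn rotates it clockwise
        # "straight" leaves the heading unchanged
        bearings.append(COMPASS[idx])
    return bearings

# The same egocentric route ("left, left, right, straight ahead") maps onto
# different allocentric descriptions depending on the initial heading.
print(follow_route("east", ["left", "left", "right", "straight"]))
# -> ['north', 'west', 'north', 'north']
```

The point of the sketch is simply that the egocentric description is identical for every starting heading, while the allocentric one is not.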
I won't describe much more about allocentric frames, but I will just introduce you to the idea that these two things can be distinguished in development and in perception. These really elegant experiments show that kids originally form egocentric representations of the world around them and only gradually form these world-centred representations.
So in one experiment, shown at the top, infants less than one year old are placed in a room. There are two doors to the room, and they sit at a table with the experimenter. Their mother appears at one door, and then the child is rotated in the room. The room doesn't change; the child is rotated around the table. And the question is: where does the child expect the mother to appear from — from the same door that she was at before, or from the door that is on the right-hand side of the child, which is the egocentric reference point?
And the answer is that, at least in young infants, they will prefer to look at, and expect the mother to come from, the door on their right-hand side — that is, the wrong door, but the one that matches their egocentric reference frame.
Now, this allocentric capacity — this world-based reference frame — develops very slowly.
And indeed, some people remain pretty poor even when they
get into adulthood, including myself.
So this task is illustrative of that.
In this task, one would ask a child or a
toddler to indicate on the right here what the view
from this horse is of this pattern of events here.
Is it A, B, C, or D, just have a
loophole here.
Who thinks it's A, The view of the horse is
represented.
Okay.
What about B? A couple of people. C? A few people. D? A few people. The answer is B: the red, blue and yellow proceed from left to right in the view from the horse, so the horse would see them from left to right.
So that's a reimagining, an interpretation of the world from another point of view. And interpreting the world from another point of view requires that kind of allocentric understanding, because you have to build that representation in a world-based frame rather than basing it on your own viewpoint.
So these two abilities, egocentric and allocentric reference frames, develop at different rates, and the egocentric comes first.
As I said, I'm not going to talk much about allocentric maps here, because Hugo is actually going to take you through that quite a bit later in the course, when we talk about spatial memory and other aspects of hippocampal function. But the fundamental basis of these allocentric maps, these cognitive maps, was actually discovered at UCL by John O'Keefe, whose lab is in the Anatomy Building, I think somewhere on the top floor. He's still there, still researching. He must be 80 now. He won the Nobel Prize a few years ago, but it hasn't stopped him.
And he discovered, as Hugo will describe to you, that if you record from the hippocampus of a mouse or a rat wandering around a small wooden box, the cells in that hippocampus will often fire in a particular location — in a particular part of the box. In further experiments — where the black trace shows the animal's trajectory through that space and the red dots show where the neurone fires — you can show, via various manipulations, that the representation of space embodied in that neurone's place cell activity is actually allocentric.
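As a rough sketch of the kind of analysis behind plots like that — an illustration only, with invented function and variable names, not the original method — a place field can be estimated by binning the animal's position, counting spikes per bin, and dividing by the time spent in each bin.

```python
import numpy as np

def place_field_map(positions, spike_positions, box_size=1.0, n_bins=20, dt=0.02):
    """Estimate a firing-rate map (spikes per second) over a square box.

    positions:       (N, 2) array of the animal's sampled x,y positions (the black trace)
    spike_positions: (M, 2) array of positions at which the neurone fired (the red dots)
    dt:              sampling interval of the position data, in seconds (assumed value)
    """
    bins = np.linspace(0.0, box_size, n_bins + 1)
    # Time spent in each spatial bin = number of position samples in that bin * dt
    occupancy, _, _ = np.histogram2d(positions[:, 0], positions[:, 1], bins=[bins, bins])
    occupancy *= dt
    # Number of spikes emitted in each spatial bin
    spikes, _, _ = np.histogram2d(spike_positions[:, 0], spike_positions[:, 1], bins=[bins, bins])
    # Rate = spikes / time, leaving unvisited bins undefined (NaN)
    with np.errstate(invalid="ignore", divide="ignore"):
        rate = np.where(occupancy > 0, spikes / occupancy, np.nan)
    return rate
```

A place cell would show a single hot spot in this map that stays in the same part of the box regardless of which way the animal happens to be facing — which is what makes the representation allocentric.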
I want to spend the next few slides taking you through some of the egocentric reference frames that emerge in cortex.
One, which I've discussed already, is an eye-centred frame of reference. That is, if you look at this red dot here, Wally is on the right-hand side of your visual field. If you then transfer your gaze to the dot on the right, Wally is now on the left-hand side of your visual field. That's an eye-centred reference frame: your eye moves, and consequently the position of objects in the world moves with respect to the centre of gaze and with respect to your retina, even though they haven't changed position in the world.
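A minimal way to see why an eye-centred frame changes with every eye movement — an illustrative sketch, not from the lecture, and the numbers and function name are made up — is that the retinal coordinate of a stationary object is simply its world position minus the current gaze position.

```python
# Illustrative sketch: a stationary object's eye-centred (retinal) coordinate
# changes every time the gaze moves, even though the object itself does not.

def retinal_position(object_world_deg, gaze_world_deg):
    """Horizontal position of an object relative to the centre of gaze (degrees)."""
    return object_world_deg - gaze_world_deg

wally = 10.0                           # assume Wally sits 10 degrees to the right of the left dot
print(retinal_position(wally, 0.0))    # fixating the left dot:  +10 deg (right visual field)
print(retinal_position(wally, 20.0))   # fixating the right dot: -10 deg (left visual field)
```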
Similarly, there are head-centred reference frames — things encoded with respect to my head. One example of that is audition: sounds arrive at the head and are encoded with respect to the direction of the head, and the ears, unlike the eyes, don't move. So that reference frame is head-based.
There are other frames of reference too — for example, joint-based frames of reference. For reaching out and grasping this thing, I might like to encode the world in terms of the joint angles that are required to pick it up and manipulate it.
A good deal of work in monkeys, and also in humans, has shown that these spatial frames of reference almost certainly start to be built in the parietal cortex. The parietal cortex sits just in front of the visual cortex at the back of the brain; the frontal lobes are here, the temporal lobes are down here, and the occipital lobe is here. And it's the parietal cortex — or at least this bit of the parietal cortex, around the intraparietal sulcus — that is important in generating and representing these spatial reference frames.
What this image shows is a summary of many studies of patients with left-sided neglect. The colour of the blobs on the side of the brain effectively represents the probability that damage in that part of the brain would have been associated with neglect. You can see there's a very high concentration around this region of the parietal cortex: if you have damage there, then you get this form of neglect, or some form of neglect.
In humans, it's quite difficult to look at the exact neural machinery in this part of the brain, but it is possible in monkeys. And it turns out that the neural machinery there in monkeys looks pretty similar to what we might expect from brain imaging in humans or from these lesions. So we can use these recordings from awake, behaving monkeys that have been trained to do a task.
Monkeys can usually perform a much more complicated task than other animals such as flies or rodents, and we can use those recordings to try and work out what's actually going on in these little brain areas, particularly those around the intraparietal sulcus. The area that's received most attention so far is the lateral intraparietal area, LIP, which we might come back to in the next lecture for even more involved questions about how we decide to make movements.
There is some evidence that this little area called LIP is at least partially involved in starting to move away from an eye-based reference frame, which is the kind of topographic map you find in primary visual cortex or in early parts of visual cortex. If we want to change that reference frame into something that does not depend on where we are looking, we need to start making some changes to the representations that we encode.
We actually know a little bit about how and why we think LIP is involved in starting to transform this retinotopic representation into something that does not depend on the direction of gaze.
Let me explain what these graphs all show first, because you're going to see a few of them in the next few slides. On the x axis here is time, and you can see that this little black bar is about 200 milliseconds, or one fifth of a second. There are several different things on the y axis. At the top is the eye position of the animal. He's actually head-fixed in this case — his head is restrained — but he's able to move his eyes around, and these traces are the vertical movements and the horizontal movements of the eye.
The next block indicates the times when the stimulus is on. These little rows here are what we call a raster plot. Each row is one trial — one time the animal performed this task — and each of those dots is the time of occurrence of a single action potential from a neurone in LIP. Now, you may repeat that trial many times, in this case say 15 or 20 times, and you get pretty similar activity across each trial.
And so when you average that activity, you get something like this black trace below, which we call a peri-stimulus time histogram. The height of that trace basically reflects the magnitude of the response of the neurone at that point in time — the number of spikes the neurone is discharging on each trial at that point in time.
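To make the raster-to-histogram step concrete, here is a small sketch — illustrative only, with an assumed bin width and a made-up function name — of how such a peri-stimulus time histogram could be computed: spike times from each trial are binned relative to stimulus onset, summed across trials, and divided by the number of trials and the bin width to give a rate.

```python
import numpy as np

def psth(spike_times_per_trial, t_start=-0.1, t_stop=0.4, bin_width=0.01):
    """Average firing rate (spikes/s) around stimulus onset (time 0).

    spike_times_per_trial: list of arrays, one per trial (one row of the raster),
                           containing spike times in seconds relative to stimulus onset.
    """
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for spikes in spike_times_per_trial:
        counts += np.histogram(spikes, bins=edges)[0]   # accumulate spike counts over trials
    # Convert summed counts to an average rate in spikes per second
    rate = counts / (len(spike_times_per_trial) * bin_width)
    return edges[:-1], rate
```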
And so in each of these cases, you can see that just after the stimulus comes on, the neurones respond at very short latency — about 50 to 80 milliseconds, less than a tenth of a second after the appearance of the stimulus. That's how long it takes visual information to get from the retina to this part of the brain.
Now, there are three different conditions shown here. These are all recordings from the same nerve cell, in LIP, in the monkey doing this task. In the first condition, the monkey's task is simply to look at the dot — a bit like when you were looking at Wally, except without searching: just look at the dot. That's all the monkey is required to do. In the other two conditions, the monkey is required to move its eyes from that dot to another dot that appears.
You can see here — that's what's indicated by the arrow — that the monkey is successfully doing that, because its horizontal eye position trace changes just after the second dot comes on, and that comes on at this line here. A stimulus is then displayed to the animal at a particular location on the screen that the dots are also placed on.
So in this particular case, on the left here, this is what we would expect from a neurone that is simply representing visual stimuli — responding to a visual stimulus. The animal is fixating, a fleck of light appears, the neurone responds; the fleck of light disappears, the neurone stops responding. Pretty simple.
In the second case here, it's a bit more of a funky task. Just before the animal makes an eye movement, the stimulus appears on the right-hand side. And here you can see what happens: the neurone still responds to the stimulus that appears when the eye moves to that location. So that is also consistent, perhaps, with the neurone encoding the appearance of an object at a particular position with respect to its eye.
However, if you display the stimulus just briefly at its new location here — the same retinal location with respect to the gaze after the eye movement — before the animal actually makes the saccade, you see that the neurone responds, even though that stimulus was placed in a part of the visual field that is not normally effective for the neurone. In other words, this neurone seems to anticipate the fact that the animal is about to make a saccade to this new location, and it seems to be reaching out to that new location to see if there's anything there already.
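One way to think about what such an anticipatory response might compute — a hedged sketch under my own assumptions, not the recorded neurone's actual mechanism, with made-up numbers — is that the future retinal position of a stimulus can be predicted by combining its current position relative to gaze with the planned saccade vector.

```python
def predicted_retinal_position(stimulus_world_deg, current_gaze_deg, saccade_vector_deg):
    """Where a stimulus will fall on the retina once a planned saccade is completed."""
    future_gaze = current_gaze_deg + saccade_vector_deg
    return stimulus_world_deg - future_gaze

# A stimulus flashed at +25 deg while fixating 0 deg, just before a +20 deg saccade:
# it currently sits at +25 deg on the retina (outside this neurone's receptive field),
# but will land at +5 deg (inside the field) once the eye moves, so a neurone with
# access to the saccade plan could respond in anticipation.
print(predicted_retinal_position(25.0, 0.0, 20.0))  # -> 5.0
```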
So these neurones are already starting to dissociate the retinotopic framework that is encoded in early visual cortex from something that doesn't depend on the location of the eye. It's not a complete dissociation, mind you. So that's LIP. We're now going to see a pattern in the ventral intraparietal area, VIP, which lies a little further along, deeper in the sulcus.
And these neurones are really interesting.
Again, this is a monkey performing a task.
And again, the task is primarily to maintain fixation on
a particular point on the screen.
In this case we see the monkey and we can
see from the lateral side he's looking at the screen.
And his task here is simply to look at this
central location.
And while he's looking there, a stimulus is presented that moves either towards the mouth — from the top to the bottom, from different locations on the screen — or towards the top of the head, again from different locations on the screen.
And what you should see here is that this neurone is very active for the two situations on the left-hand side here — there's lots of activity, lots of spikes — and that's when the object is moving towards the mouth, not when the object is moving towards the forehead.
If we then change the task slightly, so that the animal has to look up here, you can see that the neurone is still responsive when the object is coming towards the mouth, not towards the forehead. So the activity of this neurone seems to depend on whether an object is moving towards the mouth, not on where the animal is looking.
It doesn't matter whereabouts in the visual field the stimulus started, as long as it was coming towards the mouth rather than the forehead. So this ventral intraparietal area seems to be starting to construct a representation of objects in the world that depends on their location with respect to the mouth — a part of the body that's important for feeding. And it's not just the mouth.
If you look at other neurones, other parts of the head are represented too.
It's also interesting that if you record in the medial intraparietal area rather than the lateral — the medial being closer to the middle of the brain — you find what we might call reach-centred frames of reference. That is, there are neurones that are responsive when the animal makes a movement towards an object with its arm, and there are also neurones in there that are responsive when the animal makes saccades.
This is a fairly complicated slide and I don't want you to take too much away from it, but the point here is that some of these neurones are active when the animal makes a reaching movement, but not when it makes an eye movement. This, and other forms of evidence, seems to suggest that this area is using something about the coordinate space of the joints to represent the outside world.
There's another area called the anterior intraparietal area, and we're going to discuss that in greater depth in a moment.
So to summarise what I've said to you there, there
are multiple frames of reference that can be used to
represent the world.
There's good evidence for the existence of multiple frames of
reference in separate circuits in the brain.
Spatial neglect seems to mean losing the representation of a specific frame of reference — or at least of one or two of them, and not all of them. For that reason, we can fail to be aware of objects in a particular coordinate frame. And the other point here is that these multiple frames of reference are represented in parallel, in different areas of the brain, each constructing a different reference frame at the same time.
And the consequence of that is that potential actions can be generated in parallel. When we go to perform a task, we can select immediately, or at least quickly, which frame of reference we want to use to complete that task. We don't have to wait and redo the entire computation each time — take the visual image and only then decide what I want to do with it: I want to reach there, or I want to do this instead. That reference frame is already being built. But we may not use that reference frame; we may select another one, in which case the question becomes: what happens to the neural representation of the reference frame that we did not use? And that, I think, is going to be something that should become clear in the next part of the lecture.
Are there any questions about that particular component?
Okay.
So I'm skipping over a section, which is basically how the motor cortex controls the muscles. As I said, there's now a video on your Moodle page which goes through that, and it's quite short.
What I want to spend the next part of the lecture discussing is how we control and even understand actions, and I want to take you through some of the really interesting work that has happened in this field in the last ten or fifteen years.
We've talked about the parietal lobe. We're skipping over what I call the primary motor cortex, which is the actual guts of controlling the muscles — that's in the video on Moodle. We're not going to talk too much about the supplementary motor areas either, but these are areas which help generate the initial plans for the muscle movements that go to the primary motor cortex. In between these areas, the premotor cortex and the prefrontal cortex seem to take information from the parietal lobe and then distribute it to the motor areas.
A lot of what we understand about this has been done in the context of a particular task: reaching out to grasp something — in our case perhaps a cup, in a monkey's case a little bit of food. It turns out that the circuits for this are quite similar in monkeys and humans.
The top shows a schematic of a monkey brain, the bottom a human brain. The areas of interest in the monkey brain are those around the intraparietal sulcus — in particular the anterior intraparietal area — and a little area called F5, and also F1. In humans the same areas exist; we know that from the anatomy and from the tracing of connections between pathways. But we don't really know exactly how signals get from one area to another, so we're going to use the monkey to try and understand a bit of that, and also look at some of the human pathways.
So this here is, in the same kind of format as the previous slides, a description of several neurones in the anterior intraparietal area during grasping, which I find absolutely fascinating, because there's one particular type of neurone here, in the bottom right, which seems to be important in taking the sensory information that's coming up from the sensory periphery through the visual cortex and starting to transform it into something that's useful for motor movements.
This slide shows three separate neurones, one in each row, and for each neurone three different tasks in the different columns. The first is to perform a manipulation in the light — that is, to grasp something while being able to see it. The second is to reach out and grasp the thing in the dark, so the animal can't see it and there is no visual information. And the third is just to look at the object and not reach or grasp at all.
The neurone on the top here shows firing rates like we might expect from a visual neurone: it's active when the animal manipulates an object in the light, it's also active when it just sees the object without manipulating it, but it's not at all active when it makes the muscle movements and can't see the object.
On the other hand, the middle row here is the kind of neurone that we might expect to be important in controlling the movements that we're about to make. That neurone is active when we reach out and grasp in the light, and it's active when we reach out and grasp something in the dark when we can't see it, but it's not at all active when we just look at the object. So it's active during the task but not when you just look at it — a kind of classic distinction between sensory neurones and motor neurones.
The third type of neurone there, which we'll call a visuomotor neurone, combines these two features. These neurones are again active during the grasping task in the light; they're also active when the animal cannot see the object; and they're also active when the animal sees the object but doesn't perform the movement. So they have both sensory input, seen in the visual-only condition, and motor input, seen in the motor-only condition, and they seem to combine these two forms of information in one neurone.
These kinds of neurones, these visuomotor neurones, can be found in different parts of the brain, most prominently in the parietal cortex and the frontal cortex. They are, we think, a very important interface between sensory information and motor plans — they combine these two things. As we'll discover in the next lecture, you want to combine these two things to make decisions about what you want to do.
So these are the kinds of neurones you find in this parietal area. You actually find them further up the chain as well, in area F5, and we'll get to that in a moment.
These neurones can be quite selective for the finer features of the task. This is a recording from one neurone, a visuomotor neurone — that is, a neurone that has both a visual component and a motor component to its response. Here, during the task, when the animal can see the object and make the movement, you can see that these neurones are active for particular configurations of grasp and not others. If in addition you look at the visual component alone — that is, in the absence of the task — you see that these neurones are also selective for particular objects.
How many of you have heard of the concept of affordance? Affordances are a really interesting thing. When we design objects, the objects that work — the ones that we want to use — afford particular actions. A chair affords the act of sitting on it; a phone affords the act of picking it up and scrolling through it. The very particular structure of those objects seems to promote the execution of particular motor plans. In other words, they afford those actions.
Maybe these neurones are part of that kind of affordance, because these neurones, which have both sensory input and motor output, are actually representing particular types of objects. So perhaps they help us to generate the plans when we see that object — one of the plans that I might execute if I want to, but that I might never execute; a plan for the action that the object would afford.
Indeed, if you look at human cortex now rather than monkey cortex — using fMRI signals rather than single-unit recordings — you also see a good deal of evidence for the presence of areas whose activity is effectively related to affordances.
to observed axons in the top row.
Here is when one observes actions without associated objects, and
the bottom is when objects are present with chewing, grasping
and kicking.
Again, this is a person sitting in a scanner and
not actually performing his actions that is viewing the action.
We think maybe when they're viewing these actions, maybe they're
kind of replaying them in their head as well.
Or maybe they're viewing them also for to start to
build their idea of a particular course of action.
What I want you to take away from these slides mainly is that there's a substantial amount of activity not only in motor cortex but also in parietal cortex, consistent with what we see in the monkeys: when they're looking at particular objects, neurones are active, such that those objects may afford particular actions.
Similarly, take the particular example of tools. If you present a hammer to someone sitting in a scanner, as opposed to a house or other objects that do not afford actions, you find that parietal cortex, as well as premotor cortex — the area equivalent to the monkey's area F5 — shows a substantial amount of activity. Again, these parietal and premotor cortical regions seem to be responsive when you're viewing objects that afford potential actions.
This experiment I find really beautiful.
You've heard about transcranial magnetic stimulation — that's the technique where you apply a magnetic pulse from outside the head to stimulate a little bit of the cortex. When you do that, you activate the neurones electrically in the cortex. You can choose a pulse intensity which doesn't overtly cause an action, but you can still measure the activity of muscles in the relevant part of the body — for example, in the hand.
And that's what's going on here. These bars show the amplitude of the muscle response recorded during stimulation of the motor cortex, and the measurements are made while people are viewing different objects. These are right-handed subjects. In one case they're viewing a cup with the handle facing to the left, in another a cup with the handle facing to the right, and in the other cases cups with broken left and right handles. And what you should see here is that the bar is much higher when the cup's handle faces the right hand. When the visual object affords a particular action — grasping with the right hand — activity in motor cortex seems to be potentiated by viewing that object, consistent again with the fMRI results and consistent again with the recordings in monkeys.
So this brings me on, then, to mirror neurones. How many people have heard of mirror neurones? Quite a few of you. And if you read the review by Sylvia that I've included in your reading, you'll understand a lot more about them. These neurones have gained particular notoriety because they are hypothesised to be a source of understanding the actions of others, and perhaps even of higher cognitive social capacities like empathy. These are circuits that seem to be active when you're viewing someone doing something, as opposed to actually doing it yourself, as well as when you are doing it yourself.
The reason I've talked to you about these circuits for grasping is that these neurones were discovered by Rizzolatti and colleagues when they were trying to understand those grasping circuits by recording from monkeys. They had trained the monkeys to reach out and collect food objects, and they wanted to know what was happening during the different phases of movement and planning of movement. They were recording from the area of premotor cortex called F5, and they noticed that a certain fraction of neurones consistently seemed to fire not just when the animal was grasping something, but also when the experimenter was moving, reaching out or grabbing it.
So in the writing of the classroom, one of the
first neurones that they discovered the classical mirror neurones.
So these neurones are active when an animal is reaching
out and grasping an object that's shown here.
They're also active when the experimenter is reaching out and
grasping the object that's shown here.
Some of these neurones, it turns out on finer investigation, were active in particular cases — when the object was within the animal's reach, or when it was not. When it's within the animal's reach, that's peripersonal space, the space around you; when it's not within reach, that's extrapersonal space, the space beyond that. You can see here that some neurones seem to respond when actions are performed in extrapersonal space, and other neurones respond when they're performed in peripersonal space.
Yeah.
So the idea here is that these neurones are responsive both when an animal is making an action and when the animal is viewing that action. That's where the idea of mirror neurones comes from: they mirror. They respond when you're viewing something as if you were generating it, so they may embody in your brain an interpretation of the action that took place — the action that the other person is performing. That might be important, for example, for imitation learning, the kind of imitation that children do: if you perform an action, I can imitate it and learn how to perform that action.
However, monkeys don't really learn by imitation, so that is certainly a problem with this interpretation in monkeys. Alternatively, these neurones could be important for trying to predict what someone else is going to do. If I see someone reaching out for an object, and I predict that they're going to take that object, I'll be able to understand the potential intentions of the other person. That's the idea behind the findings in F5.
The reason I wanted to introduce you to them in the context of these pathways is that it should be clear to you by now that they're not just found in F5. They're also found in the anterior intraparietal area, and they're also found in other parts of the brain. So there seems to be a whole network of neurones that represent not just the actions that we are making, but that also respond when we simply view those actions.
These circuits are collectively known as mirror neurone circuits, and they are thought in some contexts to be important for things like embodiment and empathy. I do want to make clear, though — and Sylvia's review makes this clear as well — that there's a lot of controversy about the role of these neurones in these functions.
I don't think there's any controversy about the presence of
these neurones.
What is of interest, and what is of concern, is whether these neurones are actually representing what another person is doing, or whether these neurones are basically generating internal plans of action that are never executed.
As I said to you before, we build different spatial reference frames in parallel in our brains, and we choose which spatial reference frame we're going to use when we execute a particular action. The implication of having multiple parallel reference frames, not all of which are used, is that many action plans are made and never executed — and we are never even aware of them. So is a mirror neurone a neurone that is actually representing the actions of others? Or is it a neurone that's representing unexecuted plans that we are preparing — plans which are afforded to us? That's the central question that still exists in the mirror neurone field.
Are these neurones effectively just part of our own motor planning, or are they there to help us interpret the actions of others? And these are of course not mutually exclusive: it could well be that they evolved, or arose, simply to provide cues and unexecuted plans, and were then co-opted to provide an interpretation, an understanding, of others' actions.
So I really suggest that you do read Sylvia's article; she's a leader in the field. And at this stage I will leave you with one dichotomy — the dichotomy that exists in the field. On one side is Rizzolatti, who discovered these mirror neurones originally, and his group. His interpretation is that if this mirror mechanism is fundamental to understanding actions and intentions, then the classical view that the motor system has only a role in movement generation — and, by implication, that only the sensory system has a role in sensation — has to be rejected and replaced by the view that the motor system is also one of the major players in cognitive functions. The motor system helps us embody the actions that we view around us.
The contrary view is that of Hickok and colleagues. He's a language specialist, very prominent in the field, and he would state the null hypothesis in this area: that F5 is fundamentally a motor area that is capable of supporting sensorimotor associations that are relevant to action selection. As I said before, it is one of those areas involved in generating frames of reference and potential motor plans, from which we select the ones we execute.
So I encourage you to read those things, and we'll leave open that question of whether or not these neurones contribute fundamentally to understanding the actions of others. Thank you.
[Question and answer session. The audio is largely unintelligible in this transcript. The recoverable gist: one exchange appears to concern whether there is activity corresponding to plans that are not actually executed, with the lecturer suggesting that the basic hypothesis is that such activity exists even when the action does not take place; another question refers to the neurones that respond to objects moving towards the mouth; a final question asks whether any work has been done on whether specific object structures afford specific actions, to which the lecturer replies that it's a good question but he doesn't know of such work.]