Monday 10 Apr 2017, Prof Anneke Lucassen: Cancer Research UK and the 100,000 Genomes Project
20 people, 1.5hr
The incidence of cancer (C) in this country is going up.
2014 is the last year with good statistics: just over 350,000 new cases
of all types of C were diagnosed. The risk is still higher in men than in women.
The incidence has gone up by 12% since the early 1990s; we don't
quite know why. Probably a combination of some environmental factors
and our being better at detecting Cs that might have gone away by themselves.
We're also not dying of other things first. Go back 100 years and lots of
us would have died from other diseases before we got old enough
to develop C.
C survival is improving. Overall, averaged across all Cs,
50% of people in the UK will survive 10 or more years; that has
doubled over the last 40 years, due to treatments and earlier catching
of Cs. There is huge variation in survival between different C types.
Certain skin Cs have a very good survival rate, while brain tumours have
a very poor survival rate. They are completely different diseases
and talking about them as one doesn't make sense.
C is a disease of cells. Any cell that grows uncontrollably
can become cancerous. Skin cancers; leukaemias, where blood cells
overgrow and become cancerous. Gut cells can become cancerous and develop
into a bowel tumour. Nerve cells can become cancerous and develop into a
brain tumour, a glioma for example. Cell division is very
important in C. We need cells to divide to grow from baby
to adult. We need cells to divide to heal when we are cut,
and to replace general wear and tear in our bodies.
Every time a cell divides it has to copy itself, copy its genetic material,
and there is a chance that something goes wrong in that copying process.
Whilst I've been talking we've all made about half a million new red
blood cells, to give an idea of the scale, and 12 million new gut cells,
all happening routinely in our bodies. It is routine and controlled: a system
of traffic lights around our cell division, saying go or stop,
finely balanced. When that balance is interrupted and the stop
signal is interfered with, for a variety of different reasons,
that's what goes wrong in C. Then comes the uncontrolled growth
of cells, which compete with the cells around them, squash surrounding
tissues or spread to other parts of the body.
It's not all genetics that causes our cells to divide out of control,
but genetics plays a part.
Cells divide out of control because of faults that have accumulated
in their DNA, and several influences feed into that. Environmental
influences are important; hormones can play an important part,
e.g. oestrogens and breast Cs have a clear link. Taking the contraceptive
pill or HRT has an influence on the accumulation of faults
in our DNA. There is also a lot of natural self-regulation. If you copy your
cells by dividing, then things can go wrong just by chance.
Our immune system is more important than we originally thought in the
development of C, in particular with certain virus infections.
From damage to the DNA, C can arise. I will focus on inheritance,
picking up on the bits that are important and those that are not.
Look into the centre of nearly all our body cells with a microscope
and you find the nucleus; inside that are the chromosomes, which are bundles of
genes together with bits between the genes. The chromosomes are made up
of tightly wound DNA. The two strands of DNA are joined together by the
DNA letters. That is what we talk about as a sequence of
DNA: about 3 billion of those letters per cell, composed of 4 different
letters. Those sequences of code determine the messages sent to
our body. If the messages go wrong, that's when problems can arise.
The exome is the 20,000 different genes, which are sections of that DNA.
The genome is all our genetic material in one cell, all together, the genes
and the bits between.
The word genome derives from the words gene and chromosome.
Just 1 letter change in all that sequence can be enough to cause
really dramatic changes to our bodies, but it all depends on where
that letter change occurs. All of us have several different
mutations within our genetic code. If those occur at points of the code
that don't do much, then there are no consequences. Some of those changes
can occur right now as I'm speaking: a mutation in one cell,
then copied to the daughter cell. Some of those mutations are inherited from
our parents. In our cells we have 2 copies, one from each parent.
Often if you have a mutation in one copy, that might disadvantage
you, but alone it is not enough to cause a problem because the other copy
needs to be knocked out too. The other copy can sort of rescue
the bad copy, or the bad copy can override the normal one.
For different diseases there are differences there. For C it is often
the case that you might inherit one copy that puts you at a disadvantage,
but it's only when the other copy is knocked out, by chance
or radiation exposure or something like that, that the C arises.
All C is genetic but not all C is inherited. Any C arises as the
result of genetic faults in the DNA, but most of those faults
are not inherited. The difference between inherited forms of C
and chance or sporadic forms of C is that if you have inherited
a C-predisposing gene, you start off life at a disadvantage.
In order for a C to arise you need more than 1 mutation
or bit of damage to the DNA; a sequence of lots of
different steps has to occur before the C starts. If you have inherited one
of them, you start off disadvantaged. That's why in the inherited forms of
C we tend to see C at a much younger age than in the sporadic forms of
C: they started with a disadvantage and needed fewer
steps to accumulate before the C arises.
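The "fewer steps, so younger onset" idea can be sketched with a toy Monte Carlo simulation. All rates and hit counts here are invented for illustration, not real biology: assume a cell lineage picks up one critical hit per year with some small probability, C arises once a fixed number of hits has accumulated, and an inherited carrier starts life with one hit already in place.

```python
import random

random.seed(1)

HITS_NEEDED = 5        # total critical mutations required (illustrative)
P_HIT_PER_YEAR = 0.03  # chance of a new hit each year (invented rate)

def age_of_onset(inherited_hits=0, max_age=120):
    """Simulate the age at which the final required hit lands."""
    hits = inherited_hits
    for age in range(max_age):
        if random.random() < P_HIT_PER_YEAR:
            hits += 1
        if hits >= HITS_NEEDED:
            return age
    return None  # threshold never reached in a lifetime

def mean_onset(inherited_hits, trials=20000):
    """Average onset age among simulated lives where C actually arose."""
    ages = [age_of_onset(inherited_hits) for _ in range(trials)]
    ages = [a for a in ages if a is not None]
    return sum(ages) / len(ages)

print("sporadic mean onset age:", round(mean_onset(0)))
print("carrier  mean onset age:", round(mean_onset(1)))
```

Running this, the carriers' average onset age comes out lower than the sporadic cases', which is the pattern the clinic sees in inherited forms of C.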
When we talk of inherited Cs, that's not new. Aldred Warthin described
a family between 1895 and 1915 who had very-young-onset Cs.
This involved bowel and womb Cs; he described it as an unusual
combination of Cs. We know it today as Lynch syndrome, or
hereditary non-polyposis colorectal C, and we know the genes
you can inherit that cause it. So really we've known about this
for over 100 years. There are other examples of familial Cs that we've known
about for a long time from family histories: there must
be an inherited component, but only in the last few decades
have we found out what that component is.
For breast C there is an old headline, "Her mother died of it, her aunt has it,
she has it, and her 3 daughters", accompanied by the fact that once the gene
was discovered, the test spared that woman from the risks of the
surgery she was going to have because of her
terrible family history: she had not inherited the gene that was
in her family. Then there is the Angelina Jolie effect. She had a BRCA1 gene
mutation inherited from her mother. Her mother had ovarian C at a young
age, with a wider family history of breast C, and had a genetic test
which showed what the cause was in the family. Angelina went
on to have a predictive test for BRCA1, which showed she had inherited
the same mutation, and she went on to have a risk-reducing mastectomy
and risk-reducing removal of her ovaries.
The demand for BRCA1 testing, and for the similar BRCA2 gene,
went up dramatically after her story.
We receive lots of referrals to our genetics service: please test this
person for these 2 genes. A good thing in the sense that she raised the
profile for people who previously were not getting appropriate
testing. But what many people don't realise is that these 2 genes only
explain 5% of all breast and ovarian Cs. The majority are
explained by other causes. It's not even straightforward to do
the test to find out if you are in the 5% category, because the 2 genes
are both very big and the inherited fault can be different in each
family. So the lab has to trawl through more than 10,000 letters
of genetic code in each gene and look to see if there are any inherited
changes that might explain a family history. We all have those 2 genes
and we all have some variation in them, and the lab has to try to decipher
what is just normal variation and what is causing the high incidence of
breast and ovarian Cs. The more we test, the more we realise that we find
variation that does not mean much. So we have to be very careful
about saying someone is BRCA1- or BRCA2-positive, because it may be a spurious
red-herring finding. I spend a lot of my time telling women
intending to be tested that it is not as simple as they think.
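The lab's task of separating normal variation from suspicious change can be sketched as a crude filter. Everything below, the tiny reference sequence, the patient sequence and the "known benign" list, is invented for illustration; real pipelines compare against population databases and weigh much more evidence than a lookup table.

```python
# Compare a patient's gene sequence to a reference, flag the differences,
# then sort them into "known benign polymorphism" vs "needs interpretation".
REFERENCE = "ATGGCACGTTAGC"      # tiny stand-in for >10,000 letters of code
KNOWN_BENIGN = {(4, "C", "T")}   # (position, reference letter, observed letter)

def classify_variants(patient_seq):
    findings = []
    for pos, (ref, obs) in enumerate(zip(REFERENCE, patient_seq)):
        if ref != obs:
            status = "benign" if (pos, ref, obs) in KNOWN_BENIGN else "uncertain"
            findings.append((pos, ref, obs, status))
    return findings

# One difference is a catalogued harmless polymorphism, one is not:
print(classify_variants("ATGGTACGTTGGC"))
```

The "uncertain" bucket is exactly the red-herring problem in the text: a change has been found, but finding it does not say what it means.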
1 in 3 of us will develop a C at some point in our lives, and across the
board, for all Cs, 95% of those will not be due to a single
inherited factor. So in 95% of cases there may well be an inherited
component, but that component is very complex, consisting of
lots of different factors interacting in ways we don't yet fully
understand. Part of that interaction will also be protection:
one gene protects a bit here, that one increases your
chances here or protects in another environment. We just don't know enough
yet to put all that together into 1 algorithm that says: with your particular
genetic combination and your particular environmental exposures over
your lifetime, this is your risk of C type x, y or z.
But the headlines make it sound like we are at that point.
The press are more responsible these days, but they often make it sound
as if: we found a new gene, go to your doctor, get tested for that gene,
and you will know whether or not you will get C.
It's not unusual for someone to come to a clinic waving a paper
with a headline like that: can I have a test for these newly found genes please.
From a research point of view, finding a new breast C gene is
helpful, as it gives insights into the mechanisms of the disease, but it
often fails to translate into a useful test unless it is a very high risk
gene. If the newly found gene increases your risk over the next 40
years by 1%, that's not a clinically useful test to have.
Similarly for bowel C, for example. It's that bit that is not always
conveyed by the media reports.
Then there is the Kylie Minogue effect. She had breast C 10 years before the
AJ effect. She also had a gene test, but her test was looking at the
expression of a particular gene in her C, so she could
receive a targeted treatment specific to that gene change.
Her gene change was not inherited; it was the result of the uncontrolled
growth of her breast C. That was HER2 expression, which meant
she could be treated with Herceptin, as that blocks the growth factor
receptor on the cells and shrinks the C cells more than normal cells.
That's what we are aiming for: targeted treatments.
The testing is easy but the interpretation can
still be difficult. The tech is there to sequence our code, just like that,
but the problem lies in the interpretation of the results.
There is a realistic promise there, but the practice tends not to
deliver what the headlines would imply.
James Watson, DNA discoverer: "we used to think our destiny
was in the stars, now we know it's in our genes". Now we can sequence our
DNA we will know what our future holds. It is more complicated
than that; we do not get a crystal ball as part of this process.
We might do better to remember a quote from John F Kennedy, 30 years
earlier: "the greater our knowledge increases, the more our
ignorance unfolds". In the genomic age that is very true.
We know more and more, we test more and more, massively more data,
but what that often does is expose what we don't know better
than we could before.
In the last 10 years alone there has been a 10,000-fold increase in the speed,
and decrease in the cost, of genomic sequencing. In 2001 it cost 3 billion
dollars and took several years to sequence 1 entire genome. In 2017 you can
do it for 1,000 dollars, still going down, and do several
in a day. A phenomenal scale of change. People assume that if you can do it
faster, you get answers quicker. But you gather a whole load of data
and lack the interpretation. To interpret it, you need to do lots
and lots of clinical investigations, including of other
family members etc., and the overall costs can really rack up.
An analogy is comparing fishing and trawling.
We are no longer fishing for genes that we suspect are causing
something, based on a family history or an appearance.
Say we have someone who has something like the appearance of
Down's Syndrome: we know what bit of the genetic code to home in on.
If you start off not knowing where the gene may be, trawling the
entire genetic code can look like a more cost-effective process than
fishing for your single fish. But you get all sorts of fish that you don't
know how to cook, maybe poisonous ones, old boots, unexploded bombs,
all sorts of stuff; that is what trawling brings up.
In the USA they are a bit more free and easy with their testing
compared to the NHS here. People pay extra money for a broader
gene test, but they don't get any answers. They find risks at most,
when they were expecting answers. There have been many headlines in the US
expressing the surprise of people who pile into expensive
testing and get no answers.
The iceberg is also quite a good representation. The bit that sticks up
above the water is the people with a strong family history of C,
or a specific set of signs or symptoms. They are more likely to have
the strong genes that give strong predictions. That family in 1895
was sticking out of the water. The vast majority is below the surface,
much less tangible, you don't know where it is: the weak genes
and environmental factors that interact in a very complex way
and give poor predictions in the clinic.
We are tackling some of this through the 100,000 Genomes Project (GP).
It is looking at the lower part of the iceberg, or looking inside the
trawl net. We are focusing on a certain group of NHS patients
who are coming through the doors anyway and aren't getting
answers from current NHS genetic tests. For those people we will
look through their entire genome, 3 billion letters of it,
and see if we can find anything there that explains their particular
condition. It is divided into 2 groups: rare diseases, and Cs. The 2 are very
different. For the C patients, we sequence the genome they've inherited,
present in every cell of their body, and compare that to the genome of their
particular C. The comparison will hopefully give us clues
where to target, as well as how it may have arisen.
In the rare diseases, there are a lot of individually rare diseases,
but put them all together and they are relatively common: 1 in 17
people have a rare disease. If we've exhausted the normal
testing, then comparing the patient's (often a child's) DNA
with the parents' genomes might give us important clues.
The whole project was announced in 2012 and took a while to get
going. A lot of investment; the plan was 100,000 genomes from
70,000 patients (in the C studies, 2 genomes come from 1 patient).
There are 13 different genome centres around the UK and several
industry partners, deliberately brought on
board to try to encourage the development of a genomics industry.
The Chief Medical Officer established 3 advisory groups to the
GP: an ethics group, a science group and a data group; importantly,
they interact. I'm on the ethics group, so I have some interesting insights
into the ethical discussions about this venture and its testing.
There are 4-fold aims. To create an ethical and transparent programme
based on consent: this was an offer to patients, and they could only
take part if they were fully informed about the implications.
To bring benefits to patients and bring a genomic service to the NHS,
and be the first in the world to do so; there are a lot of genomic ventures
around the world as part of research, but within the NHS we'll be
developing this as a diagnostic tool. To stimulate
scientific discovery and medical insights by doing that.
And to stimulate UK industry and investment.
Scotland is now on board as well, and Wales.
Amy has a rare disease. She will give a blood sample, which is
representative of her inherited DNA (it could also be a cheek swab).
Then, if possible, we get the genomes of both her parents to compare with,
to rule out normal variation. If we found something in Amy that looked
suspicious, like a missing bit or an extra bit, then we check both
parents; if one of them also has it, the significance goes
down. Whereas if it's new in Amy, that is much more important.
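The "check both parents" logic can be sketched as a set comparison: a change present in the child but in neither parent is new (de novo) and more interesting, while one shared with a healthy parent loses significance. The variant labels below are invented placeholders; a real trio analysis works on VCF files with genotypes and quality filtering.

```python
# Keep only the variants that are new in the child, i.e. in neither parent.
def de_novo_variants(child, mother, father):
    return set(child) - set(mother) - set(father)

amy = {"chr2:1200A>G", "chr7:5501del", "chr9:880C>T"}
mum = {"chr2:1200A>G"}   # shared with mum: significance goes down
dad = {"chr9:880C>T"}    # shared with dad: significance goes down

print(de_novo_variants(amy, mum, dad))  # only the variant new in Amy survives
```

Of Amy's three changes, only the one absent from both parents is left over for closer interpretation.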
The more we analyse our genomes, the more we realise that
variation is much more widespread than we initially thought.
There is a study in the USA looking at healthy octogenarians,
analysing their code, and they're finding all sorts of mutations,
bits that would predict nasty diseases, and yet the people are healthy.
Our ability to predict from changes in the code is not nearly as good
as we originally thought it was.
For the C patients, we take DNA from their normal cells (unless it is
a blood C) and compare it to their tumour.
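The C-arm comparison, inherited genome versus tumour genome, can be sketched the same way: somatic changes are those present in the tumour but absent from the normal cells. The variant names below are hypothetical placeholders for illustration only.

```python
# Somatic variants: changes the tumour acquired, not in the inherited DNA.
def somatic_variants(tumour, germline):
    return set(tumour) - set(germline)

germline = {"chr13:benign inherited SNP"}
tumour = germline | {"chr17:TP53 R175H", "chr7:EGFR L858R"}

print(sorted(somatic_variants(tumour, germline)))
```

The leftover somatic changes are the clues about how the C arose and where a targeted drug might aim.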
There are 2 routes into the GP for C. The familial Cs go into the rare-disease
branch, like the 1895 family. If you have a C yourself, you go into the
C arm, with a different type of investigation.
With more knowledge about the Cs, the blunderbuss treatments of the
past can be refined a bit and made more targeted.
With a blunderbuss treatment you kill off the C cells but kill off a lot
of other cells as well; that is why your hair falls out and you feel
miserable. If we can target the C cells only, that is far preferable.
The GP project will collect medical details of the individuals
along with the genetic data. That means we cannot
anonymise this genetic information, it is identifiable.
So the data control is really important.
The sum total for the UK is now into the 20,000s; it's going
well. Locally, roughly 2,000. We have relatively few
results at the moment, which is to be expected. There are 3 different types of
results that may come out of this. The main findings are about why you've
gone into the project in the first place. Then there are additional
findings that have nothing to do with why you went into the project:
a sort of let's-offer-you-an-MOT while looking
at your genetic code, to see if there is anything else wrong.
That was controversial: should such findings be disclosed
automatically, should people be given the choice, and is there
really a choice about unknown unknowns?
Then there are some additional findings along the lines of: if "Amy"'s parents
were intending to have more children, both would be checked to
see if they were carriers of a particular condition, e.g. cystic fibrosis,
to see if the risks to future children were increased. The controversial
bit about that was that the results would only be given if both members
of a couple are carriers. If just 1 is a carrier then the future risk
is not increased, and that result would not be disclosed.
This project is not a pure research or pure clinical
venture; it is a mixture. The rules and regulations of the two
are very different, causing no end of confusion when you hybridise them.
The aim to get direct clinical benefits to patients is clearly a
clinical aim; it's fundamental to the NHS. But the aim to make
new discoveries and understandings about diseases is purely a
research aim, not what the NHS is set up to do.
To develop a genomic medicine service for the NHS is a clinical
capacity-building aim, and to support companies and researchers to
develop new medicines, therapies and diagnostics is very much an
industry & research aim. So there are a lot of questions about how
someone can consent to all of these in 1 go, in a meaningful way,
when they've simply come in for a diagnosis. Is it really ethical to
offer someone a complete genome test that might help diagnosis
when they can only take part if they agree to all of these?
An all or nothing project, sign up for all of it or none of it.
So it is a novel hybrid of research, clinical, service development
and industry capacity-building. Exciting, but I think it also
has its problems. We are trying to target drugs to
particular Cs. Say patient "A" has an ovarian C with a
particular DNA variation, and drug A is developed to deal with that
situation, not for anything else: not a blunderbuss treatment, but focused
on that mutation. Patient "B" has a different mutation that leads to
the development of drug B. Patient "C" might have a totally
different type of C, or one in a different location, but be the result of the
same mutation. So looking at the mutation rather than the clinical
picture can help us know which drugs to target with.
Or the same brain tumour in 3 children might have a different mutation
profile in each child, while 3 different tumours, in different places, in 3
different children might have the same mutation profile.
We are looking at particular markers that may say something about that
particular C, markers for particular drug resistance and markers
of particular side effects. Patients can then be stratified into different
types, and each gets the corresponding tablets for their particular C.
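The stratification idea, treating by mutation profile rather than by tumour site, can be sketched as a lookup table. The mutation names and drug names below are hypothetical placeholders, not real therapies.

```python
# Hypothetical mutation -> targeted therapy table (illustrative only).
TARGETED = {
    "mutation_A": "drug_A",
    "mutation_B": "drug_B",
}

def choose_treatment(tumour_site, mutations):
    """Pick a targeted drug by mutation profile, regardless of where the C is."""
    for m in mutations:
        if m in TARGETED:
            return TARGETED[m]
    return "standard (blunderbuss) therapy"

# Two different C types with the same mutation get the same targeted drug:
print(choose_treatment("ovary", ["mutation_A"]))
print(choose_treatment("bowel", ["mutation_A"]))
```

Note that the tumour site plays no role in the choice once a targetable mutation is found; that is the shift from the clinical picture to the mutation that the text describes.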
The use of genetic data and medical records is a topic of great
debate at the moment. The scandal around the care.data issue,
where the government had to backtrack pretty swiftly over sharing
medical info, is relevant to this new venture, which wants to
gather data from the population and link that to medical records.
It is a rock-and-a-hard-place situation, because without that massive
sharing process we will never know the answers.
But with that massive sharing there are risks of privacy breaches;
how do we allow people a meaningful choice but at the same time
get everyone buying in? When people started opting out in the care.data
situation, the data resource was not going to be there
to be useful to future generations.
Big data is crucial to understanding the bit of the iceberg
below the water. It's fine with a very strong genetic change that
causes a very clear clinical picture, or a strong family history,
but for the more subtle interweaving of different factors we've got to
collect data on a large scale. It may be that national data from
the NHS alone is not enough to get statistically significant
results, and we have to go international; then crossing those boundaries
exposes a load more problems. So how can data sharing be developed
while retaining the trust and confidence of the public and participants?
That is a moral, regulatory and technological challenge with
no easy answer.
In my group in Soton, we're looking at the people recruited to the
GP and asking them some of those questions, through questionnaires and more
detailed interviews, to see what people think. One early finding
is that what the health professionals and the researchers expect patients
to say isn't necessarily what they say.
Picture of a man walking his dog alongside some water, and the dog is
in the water. So, should I tell him?
A nice analogy for the genetic code situation. He might know;
he might be absolutely comfortable with the fact his dog is having a swim
and knows the dog is there. Or the dog might be struggling for its life.
The issue about analysing someone's genetic code, finding
something out about them, maybe about relationships to other
people, raises the same sorts of questions. When you are a holder
of such info, do you tell people, or is it something they
don't need to know, something they don't want to know, or do they
want to know everything?
All sorts of ethical and privacy questions arise: moral issues,
insurance issues and potential minefields. I run a group called
the Clinical Ethics and Law Unit at Soton. We do research focused on the
ethical issues raised by genetic and genomic testing, and all sorts of
interesting issues about how info is shared within families.
Perhaps you might like to say something about epigenetics, and the way
scientists have been humbled after saying that a lot of junk DNA
does nothing, and now finding that it does do something?
And on the ethics of telling people: I had a 23&me test and they have a part
where you can choose whether to look at the serious findings or not. I wanted
to look, as you can always adjust your lifestyle with the foreknowledge?
It can be better to know and it can be worse to know.
If there is something you can do about it, the argument for knowing is much
stronger: a treatment, an intervention, a lifestyle adjustment
that may change the outcome. There are bits of your genetic code that might
tell you that you are at risk of something you can do absolutely nothing
about, and it may never eventuate anyway. 23&me does Alzheimer's
gene testing, and at the moment there is no treatment for that.
It might give you the opportunity to say yes or no to finding
out. But when a number of members of a family do that test,
then you have to think about other people finding out.
Were you only testing people who came to the hospital, or people from the
general public? I put my name down for it
and never heard anything about the GP.
It's not the general public; it's people with particular conditions.
Does it give a bias, that way?
The aim is not to look at the whole population; we look at the
low-hanging fruit, if you like. If we looked at the whole population,
we would find a lot of genetic variation, interesting, but here we're
trying to find new diagnoses.
Epigenetics and junk DNA?
Epigenetics is about things that affect the expression of your
genes without changing your code. So something
binding to your code alters the regulation of a gene that
is farther down. It might be something sticking to your code
that silences a gene or makes it over-active.
Epigenetics is often propagated across the generations,
such that if you inherit a particular sequence from your mum,
it behaves differently than if you inherited it from your
father. The exact sequence might be the same, but because of different
things binding to it, which we still cannot test in a whole-genome test,
it will behave differently. There is a rich and emerging study of that.
Originally the GP was to collect what were called other-omic samples,
but in practice it has been too difficult to do; it's still an aim,
but not happening routinely at the moment.
Junk DNA was a term used 20 or 30 years ago: genes send the
messages, and when genes go wrong, the message goes wrong - nice
and clear cut. The bits in the middle supposedly didn't do anything, but
actually we now know that the bits in the middle are often important, again
in regulating things when something is bound to them. You might get a
promoter of a gene, or a silencer of a gene, thousands of letters away
from the gene itself. Only now are we finding out what it does and how.
There must be bits of DNA in me that are silent and never do anything,
but in someone else will do something. Junk DNA does exist,
just much less clearly delineated than we originally thought.
That's where the JFK quote comes in nicely.
The very basics: I'm assuming that C starts from 1 errant cell,
but can I also assume that happens quite often and never develops to
the 2-cell or 4-cell stage, so epigenetics can come into play at that
early stage?
By definition it's not a C then; it is not growing uncontrollably.
The pre-Cs may go away by themselves. For example, a very common
ductal carcinoma in situ in a woman's breast will, we think,
often regress by itself. But now we are better at screening
for things, it's a rare surgeon who would leave that untreated,
because it might go on to be a full-blown C and spread to other parts
of the body. You've got protective factors, control mechanisms
that may allow things to go wrong for a little bit and then
kick in and regain control. The immune system is very
important there. The more we learn about it, the more we
realise that some of those stop checks and signals are your own body
recognising that the cells have changed so much
that they look infected, and so need attacking. A good control
mechanism that the C has to get past.
So young kids or teenagers could all potentially have a C
any day, a number of times a year, but it never develops?
Yes. If you've inherited a mutation, that just starts you off at a
disadvantage. Even with a really strong BRCA1 mutation
you may have enough protective factors around to never develop C.
Typically how many point mutations does it take to get to a C?
Presumably on some occasions just 1 critical mutation might be enough?
In the classical types of C, such as described by Knudson for
retinoblastoma, a childhood tumour, you're born with 1
mutation and are just waiting for the other one to hit the second copy
of the gene, so both of your copies are knocked out and you develop
a tumour at the back of the eye. So you inherited 1, which alone isn't enough,
and the second 1 is a chance one. In the case of, say, bowel C,
people don't know what is typical, but 4 or 5 is usual.
It depends where they happen: just drinking our pint of
beer we might be knocking off a few, starting a few
mutations off, but if they happen in the critical bits of the DNA
then you need far fewer hits than in non-critical bits.
So you're saying a single point mutation can't give you a C,
unless you already have one?
I just don't know; I don't think that study has been done.
I think it's pretty unlikely that 1 point mutation
would be sufficient, because you still have another copy of that
particular gene that would need to be knocked out.
Does that depend on the body's physiology, being able to say we won't
use that one, we'll start using that one?
That's part of the deal of your body's physiology; it does do
that, yes. Again it depends on the gene, but for most of them
the point of having the other one is that it can compensate.
Sometimes, you're right, one gene is so bad it overrides
the good copy, but that is not the usual mechanism for C.
When I looked at mine, some things can balance out. I had
haemochromatosis, where the body takes in too much iron,
and the other was a type of mild leukaemia, another one was
thrombosis; a lot to do with the blood?
23&me originally started by looking at your ancestry:
how much of a Neanderthal you were, that sort of background.
Then it started looking at common variations, and those genes are
subtle risk factors; they are not high-risk predisposing genes.
There is nothing in 23&me, apart from some testing of
mutations for breast C found in Jewish populations, beyond subtle risk
factors that don't do very much. The problem is that an
over-the-counter test claiming to tell you a lot of medical info
can only go so far. That's where it ran into problems in the USA:
the FDA first said, we don't want you doing any health-related
testing, because we think that should be handled by the healthcare
system. Just recently it's been approved again, but I would urge
caution. Their selling point is "knowledge is power",
an easy slogan to buy into, but power to do what?
It's all very well, it doesn't cost much, it's probably not going to harm
you, but will it benefit you much?
They came up with Alzheimer's, Parkinson's, BRCA1 & 2, so I'm
watching my diet, and it does make me research things on the internet that
are happening in those areas, like anti-malarial drugs against...
Of the people consulted for the project, what proportion
decided not to sign up?
Very few people said: I'm not signing up for that. The question is
whether we are in some way coercing people to take part.
They are people who have come through the health service,
not members of the general public interested in finding out. They are ill,
or there is an ill person in their family, and they want a diagnosis,
and this is a way to a diagnosis. But at the same time they have to
sign up to all the other bits. Have we twisted their arms into taking part
when, under totally neutral circumstances, they would not have joined?
A few have said the whole data stuff is too much for me. There are quite
a lot of people who don't turn up for their appointments, so they
might be voting with their feet, but most of those do rearrange
another appointment. Of the people suspected of C,
across the board, pretty much everyone says yes. A lot say they go
ahead because it will help advance knowledge. You're not going to say to a
C patient: this test is going to revolutionise your particular treatment
or diagnosis; it's more for the future. The rare-disease arm, including
familial Cs, is much more sold to people as a potential
diagnosis that they won't get through the health service.
If someone has developed C, brought about by smoking,
would he be invited to the study?
Probably not. There are very specific recruitment criteria,
which have broadened a bit after realising how difficult it was
to get people to take part. But it depends on us as health professionals
finding the right people, in the right circumstances.
We are looking for people to offer it to, rather than at
those we are forgetting to recruit. It's probably true that the people
focused on this project recruit people to it, whereas
a jobbing GP or non-genetics medic might not think
about that. E.g. psychiatrists: certain psychiatric conditions are
eligible to be recruited, but I'm not sure there is much flow
from psychiatry into this project at the moment.
Do the medico/genetic R&D companies get
access to biopsy samples, to try their medicines on, or do they
get potentially nice compliant guinea-pigs to try their ...
At the moment the GP is organised like a reference library. You can go in
and read the book, but you can't take the book out. That is to reassure
people: they don't have access to the patient to inject them with all
sorts of drugs, just to look at their genetic code and perhaps just the
results from biopsy samples, so they never get biological samples.
The gut genome seems to be becoming influential/fashionable?
We don't have the evidence yet to see how influential; certainly
lots of headlines. It is promising but not influential right yet,
as we don't yet know what it could influence.
That is looking at your microbiome, your gut flora.
Again it seems a bit like junk DNA, as we used to think of it:
what you shit out is out of your body and now irrelevant,
but it turns out it's important what the balance of the bacteria
in there is. There is nothing in our body that is straightforward
and working in isolation. It's a subtle set of checks and balances,
and very rare that you can say this factor will definitely cause that.
That factor in conjunction with other unknown factors might increase ...
Perhaps a quantum computer will be able to sort it all out?
That's what people think: the more data they get, the more likely
we will get an answer. But I suspect a lot of these things will not be
amenable to computer power; so many variable factors, like the
environment in your mother's womb, where you lived,
your particular mix of racial ancestry, diet as a child, the food
you had yesterday.
Genetics and the environment, you touched on, but is there much research there?
How do you document people's exact environment, unless they are
strong influences? Look at smoking and how long it took us to
make the connection between smoking and lung C.
That's a strong risk factor; imagine a risk factor much more subtle, identifying
when it's a risk factor and when it's not one but a protective factor.
We're now in a big-data world, with large cohort studies over the
years; I wondered if that could be tied into the genetics?
I don't know about better, but in combination, cohort studies are
very important. There are lots of moves to do genomic analysis
of cohort studies, definitely. A lot has been written about the
difficulties of even just a proper food intake diary, and making that
reliable. Cohort data is probably the best stab at the moment,
but it's still easier to find the big strong factors than the subtle ones.
We tend to forget that genetic factors can do 2 things: they can,
say, increase the risks of C, but on the other hand they can
decrease your chances of something else entirely. How do we
balance all those things out? We often see it in families where
they've inherited something that sounds really awful: why
has evolution not got rid of this? Probably because it's also
protecting from something else.
Could you explain what cohort studies are?
A posh word for following up people or families over a long
period of time, rather than saying we'll take 100,000 people
and analyse their genome. Like the Southampton Women's Survey,
following women as they have children, then 5 years later,
10 years later.
The POSH study is a breast C study, the age of diagnosis
and their genetic code; that is not a cohort study.
It can be a very specific cohort, say just people with osteoporosis.
We're watching that specific population to see what
happens in their futures. So it's a statistical analysis,
once you've collected the cohort. It could be people
born on a particular day in 1958, then followed onwards.
Some of those are still going strong. I think setting them up
now is much more difficult; people worry more about
privacy, data protection etc.
The dietary studies are funny because they ask things like, what were
you eating 5 years ago?
You just don't know.
And you don't even know what they were putting in food 5
years ago; they may have put it under a different name even?
Also what we cooked our food in, say aluminium pans:
the chances are we absorbed Al, which may be very
bad for our health. But it's more likely you will
take in Al if you cook acidic food. Factors like that will
interact with your genes. So we could think of a particular toxin,
get 10 people to eat that toxin, and some people it won't
affect at all, because something in their genetic code
protects them from it, or it does not do the same genetic ...
Like sickle-cell anaemia and malaria: it protects in the carrier
state against malaria.
There was a major public health incident in Camelford, Cornwall,
where a lot of people took in a seriously abnormal amount
of aluminium sulphate in the drinking water. Autopsies later
on illustrated that. Would a follow-up study of that
population have been a cohort study?
Cohort is just a longitudinal look, rather than a cross-sectional
look. If you followed them over time, that would be a
cohort study. It is a fairly loosely used term.
You can go in and study 100 people; that's just doing a test.
If you follow 100 people, however you might have selected them,
then that constitutes a cohort.
Then there are people who come in from another area and
mix up the gene pool again. If that is not taken into consideration,
how do your results fare?
For example the Asian population coming to the UK: their incidence
of certain diseases changes quite dramatically. So we thought that must
be due to environment, it can't be genetics. But it's still
a combination of the 2. Nothing is ever just genetic or just
environmental; maybe the majority to one side or the other, but
always a mix.
But you can get a prevalence of a disease gene mutation in Ireland,
Sweden and Japan, though they are all geographically separate?
If you look at smoking, we will all have heard "my grandad smoked 60
a day for 60 years and he only got C when he stopped smoking",
or he never got C. There will always be people who can do some bad thing like
smoking and get away with it, probably because there is something in
their genetic code that protects them from the damaging effects that
get to other people.
There are about 300 different things in cigarettes?
You talked of the study of healthy octogenarians, who had gene
faults that never materialised into anything. Are you concerned that,
with the advent of genomic medicine becoming cheaper and quicker,
there is a risk or danger of pre-emptive or preventative
surgeries or treatments happening to people that would never
need such interventions?
Yes. That's where the fishing v trawling comes in. Start off with a very
strong family history, then you find a mutation, then it's a pretty
good bit of advice: think about risk-reducing surgery, for example.
But if you start by analysing the genome and finding an alteration,
then the data coming in now shows that they're in a different
boat, but they feel themselves to be in the same boat.
A woman with a BRCA1 mutation from a sequencing, but without
a family history, might think she has the Angelina Jolie gene,
but the evidence seems to suggest that that woman's chance of
developing breast C is much, much lower than someone who
comes with a strong family history, because she has other
factors that protect against it. So screening the whole
population, then having your breasts or ovaries removed, is
going to be the wrong advice. It's such an important point, and we've
not got there yet. Then the business of additional findings from the GP:
people coming in with a child with learning difficulties, offered a
BRCA test as a by-the-way freebie. Those women may not go on to
develop breast C, but if one gets that result and then feels she is
in the Angelina Jolie boat, she is likely to seek that sort of
intervention. I'm worried about that.
Do you think there are enough safeguards in place?
No I don't, not at all. Hopefully we will get there as more studies
come out. It's now been calculated that we each have 5
serious mutations in our individual genetic codes that won't
cause many of us any problems at all. As that becomes more widely
known, I think we will be more cautious. People tend to think
of the genetic code as a blueprint: that once we have the readout,
we'll know what to do with it. But you need the readout, with the
family history and signs and symptoms, to interpret it.
Those 2 really need to go together; that is the thing that is not
widely understood. That message is one of the key messages
we need to try and get out there more.
As far as your ethics panel: thinking of the Angelina Jolie
case of breasts and ovaries removed, and now her marriage
has broken down. Not necessarily related, but there
could be psychological factors coming in?
We shouldn't go there.
For anyone, at an everyday level, it's a big responsibility, as their
lives are being altered?
And also on the level of people tending to think that having a breast removed
is just a boob job, 10 a penny. But having your breasts removed
as risk-reducing surgery has about a 30% complication rate,
and people don't quite hear that bit; they just want rid of them.
You don't know what other hormone effects there may be for the
rest of the body. You can't have a total
clear-out; there will always be a bit of your body that is at risk
and you can't remove it all.
Is there any effort to educate the public, as a lot of this
is about expectations, and not really from knowledge?
Talks like this are something we should be doing more of.
We must not sit in our labs or ivory towers just saying this;
we need to go out to engage the public. I think the group
behind the GP, Genomics England, have tried hard to engage
more. There are criticisms of them: a great big juggernaut,
moving in a clumsy way. It's a bit conflicted, as on one side
it wants to recruit lots of people and sell its wares, and at the same time
urge caution; an uncomfortable mix. The story we get about
genetics from the headlines is not realistic; we need to find other
ways to get that story more realistic. The hype about genetics has
not died down. Usually with a new medical development there is a lot of
hype and then it dies down. The reason behind the genetics hype is that the
technology has kept on getting faster, and we will get the answer with the
next bit of kit. There is something about genetics that is different I ...
In 23&me they ask a lot of preliminary questions and then feed back to you
those questions and replies as answers.
There have been a number of studies into those direct-to-consumer companies.
If you send the same DNA sample to different companies, you
get different results back.
Worse than that, if you send the same samples to different companies, along
with a description to one as being a young fit woman, and to the
other as an overweight elderly woman, you get very different
results back, as they use that to make their predictions.
Monday 08 May , Dr Thomas Kluyver, Soton Uni : The Southampton Sailing Robot Project
26 people , 1.5hr
I was asked to get involved with a robotic sailing project as I'd done a bit of
sailing and I liked fiddling with computers. 9 months after that I was on the
way to Stansted airport, to fly to Portugal for the World Robotic
Sailing Championship 2016, a whole lot of fun.
With me tonight are Tony and Sim, who were also part of the team,
and a number of other people, 9 in total, and 7 of us
went to Portugal. Of the 9 in our team, all were of different ...
The first thing you need is a boat. We initially thought we'd build a boat,
but that is difficult and time-consuming. There is a community who
do remote control sailing, including a class called the 1m class,
1m long. Plenty of these already made, and we bought a second-hand one
for something like 200 GBP. There are 3 sets of sails for different
wind conditions: the smallest for the strongest winds, the largest for weak winds.
The nice thing about an r/c sailing boat is it already has the servo motors
to move sails and rudder, a chunk of the work already
done for us. So the bits are: a radio receiver with aerial, about an inch long;
2 servos, the one with the large round bit being the sail servo, pulling the sail
in to the boat centre, then a more standard servo that turns the rudder.
Then you need a computer to control the robot. So a Raspberry Pi,
a tiny computer, 2x3 inches, with processor and memory, plus a removable
memory card for the programs. You can connect it up to a
network; no screen or keyboard, but once it's connected
to a network we can talk to it from standard computers, move the
programs onto it, get data off it, tell it what to do next.
There is a lot of exposed electronics, which doesn't mix with water,
especially the salt water where the competition is held. So Tupperware
boxes keep most of the water out of it. Wires go through holes made in the
box to the servos and sensors, the holes sealed with gummy stuff.
We roughly cut a large hole in the hull side, so we could
slot the computer inside. When sailing, the joins to that panel are
covered in tape as waterproofing, and more tape over
other places where water could get in. The brain of the operation got
called Brian. The boat is called the Black Python, like the
Pirates of the Caribbean's Black Pearl, but after the computer language we use, Python.
For the sensors, we made one from an off-the-shelf wind vane: we glued
2 ring-shaped magnets to it, coupled to a board that senses magnetic fields,
so we can detect what orientation the magnets are in, and so which
direction the wind is blowing across the boat. It is mounted on the
top of the mast, about 2m above the deck, to avoid the sensed wind being distorted by the
sails. Under the hull is a weighty, blade-like keel to counterbalance the
lean from wind pressure on the sails, across the hull.
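The angle read-off from that magnet-and-board arrangement comes down to a two-argument arctangent. A minimal sketch, assuming the board gives the two in-plane field components and ignoring the calibration offsets a real sensor needs:

```python
import math

def vane_angle(mx, my):
    """Wind vane angle in degrees (0-360) from the two in-plane
    magnetic field components measured under the ring magnets.
    A real board also needs per-axis offset/scale calibration."""
    return math.degrees(math.atan2(my, mx)) % 360
```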
We need a GPS. All the competition challenges require negotiating
around marker buoys for which we are given
GPS co-ordinates, lat and long. The boat needs to know where it is,
to go to where it needs to go. This GPS is also on the mast,
about 2 inches long; this one is otherwise used for high-altitude
ballooning, and apparently that type works well for this sort of application.
Cost is only something like 7 or 8 GBP.
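Turning a pair of lat/long fixes into a distance and a heading is standard navigation maths. The boat's actual code is on GitHub; a plain haversine sketch looks like this:

```python
import math

def distance_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres (haversine) and initial
    bearing in degrees from fix 1 to fix 2."""
    R = 6371000.0  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    bearing = math.degrees(math.atan2(y, x)) % 360
    return dist, bearing
```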
A compass shows which way the boat is pointing, and an accelerometer
tells if the boat is leaning; you need that to adjust the compass.
It's a board about an inch square: MEMS, micro-electro-mechanical sensors,
a way to get physical data into a form that can be electronically
processed. We have to calibrate the compass for each use: 2 people
holding the boat and turning it in a circle, the calibration dance.
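The calibration dance can feed a crude hard-iron correction: rotate the boat through a full circle, record the raw magnetometer readings, and centre them. This is a simplification of what compass-calibration code typically does, not necessarily the team's exact method:

```python
def hard_iron_offsets(readings):
    """Centre of the bounding box of raw (x, y) magnetometer readings
    taken while the boat is turned through a full circle; subtracting
    this offset removes the boat's own constant (hard-iron) field."""
    xs = [x for x, _ in readings]
    ys = [y for _, y in readings]
    return ((max(xs) + min(xs)) / 2, (max(ys) + min(ys)) / 2)
```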
Now, how do we put the bits together and make it sail?
Why is this an interesting challenge, why difficult for a robot?
There are other challenges where the boat has motors; there is a Soton
team called hydro-team, with motor boats. There the control to go
from A to B is pretty straightforward: point it in the right
direction and tell it to go.
For sailing, it is dependent on the wind direction. A boat can't go
straight into the wind; it can go 90 degrees to the wind or have it behind,
and modern boats can go 45 deg to the wind. A better boat will
let you go closer to the wind. You have to zig-zag to go into the
wind: tacking. Sail 45 deg to the wind one way, turn about 90 deg
across the wind, and sail on the opposite tack.
Eventually you get where you need to go. This is where control
of the sail position comes in. Running in front of the wind,
you can let the sail out as far as it will go, near enough. It acts
as a big bag, like Viking longships with a big square sail.
These boats go fastest at about 90 deg to the wind by
putting the sails at about 45 deg; then the sail is acting like an
aerofoil, like the wing of a plane. The wind is perturbed around the
curves and pushes the boat forwards efficiently.
If you want to sail close to the wind, you pull the sail closer in,
and it will keep you going forwards. This is partly the function of the keel:
if going across the wind, you don't want to drift in the direction of the
wind, sideways. The keel blade helps to keep you straight.
We did most of our tests at Eastleigh Lakes, near the airport.
We also borrowed another boat, to test out our control
systems before our competition boat was ready. That boat was
simpler, with 1 sail; our main boat has 2, mainsail and jib.
On some boats the sails are controlled separately, but on our boat
the 2 sails are controlled from 1 servo; they move together.
There is some ability to adjust them separately
when we set up the boat: we can adjust where the sheets are
connected. Once it's on the water they go together.
We ended up with 2 borrowed r/c Lasers, so we could test the
r/c on one and the control systems on the other.
A big pole has a wifi antenna on the top, which gets us
better range and lets us stay in contact with the boat during
testing. This requires keeping rain and sun off the control
laptop. The antenna is very directional, requiring it
to be pointed at the boat, which can get a bit tedious.
During the competition, this kind of contact was intermittent:
we were trying to keep a wifi connection from the bank,
but the challenge area was several 100m away from us,
and a dodgy connection. But the boat does not actually
need the wifi connection; it was just for us to know what
was going on onboard. Once we set it going, it's then totally
autonomous. We have the original r/c system still in place as an
override. Also, in the competition, you are allowed someone in a chase
boat who can intervene if things go wrong, like crashing into something;
other than that you have to let it do its own thing.
We used open source stuff for the Pi: the Python language, with our source
code on GitHub, a web repository that contains all our code,
where you can see what we're doing wrong. The key component
that makes it work is ROS, the Robot Operating System; the version
we used was Indigo Turtle with ? shell. The ROS principle is that there
are nodes, which are separate programs running things, and they
talk to each other. One program just controls the servo motor,
one just gets data from the wind sensor; they send out ROS
messages which the other programs can listen to. It makes it easier
to separate out the bits needed for the robot, so if the compass bit crashes, then
ROS knows how to restart that, and it does not mean everything has crashed,
just that one part has crashed. A lot of people program robots and end
up re-writing everything from scratch, so ROS means you have
pre-written bits which can be shared between multiple robots.
So if we've written something that works particularly well, say
determining how to tack upwind, then someone else could use that and
plug it in to their own sensors and things, which might use data in a different
format. It's a standardised interface that lets you take and combine
different bits of code.
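The decoupling ROS gives you can be sketched with a toy message bus in plain Python. This is not the real ROS API (the team used ROS proper); it only shows why a wind-sensor node and a sail-control node never need to know about each other:

```python
class Bus:
    """Toy stand-in for ROS topics: publishers send a message by
    topic name, and every callback subscribed to that name gets it."""
    def __init__(self):
        self.subs = {}

    def subscribe(self, topic, callback):
        self.subs.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.subs.get(topic, []):
            cb(msg)

bus = Bus()
log = []
# "sail controller" node: reacts to wind readings on a named topic
bus.subscribe("wind_direction", lambda deg: log.append(("trim_sail", deg)))
# "wind sensor" node: publishes a reading, knows nothing about listeners
bus.publish("wind_direction", 45)
```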
Q: With the sail servo, do you have a position sensor for sensing the ...
At present we only know how much sheet there is in or out.
So if the servo drives to one extreme, you don't know about that until
other things start happening?
Yes; with the wind sensor we can see which way the wind is coming from,
but in the edge cases, where you jibe, you don't know exactly
how the sail is set.
ROS lets you define a launch file, which tells all the nodes that
we want to start. There are different launch files for testing and
calibration of sensors, and then for actual sailing in earnest.
There are parameter files containing settings for different sets
of sails and for different courses. The co-ordinates we want to go to
are programmed in via the parameter files. ROS makes
the monitoring and the analysis easy. It's all very well putting the
boat in the water and trying to make it work, but often when
you are sailing there is no time to work out what is going
wrong. Without that, when you get the boat back, you would
not have the details of the performance during the error;
you would only have the memory of the boat
turning round in weird circles, and would then have
to try and piece together what the system was doing that
made that happen. ROS has a bunch of useful stuff to
give you more info about what the boat was doing, centred around the
tech called rosbag, a way of recording all the messages the
different parts of the boat are saying to each other: wind is 20deg, compass is
170deg at a particular moment, all recorded on the Pi.
Then when we get the boat back, we can pull that data off and plug it
into various things to analyse it, tools like ARCU2? that can show us plots
of angles over time, a map of where the boat is relative to markers
for the course. We also wrote our own stuff to help with this:
an HTML dashboard, a live view of what was going on in the boat, on a
computer or a phone. This was helpful a couple of times in the
challenges; people in the chase boat could pull out their smartphone,
connect to the boat's wifi, and view some of the key parameters from the
boat. We also wrote a set of ROS nodes for simulating what the boat was
doing, so we could test the boat code without having to place it
on water every time. The nodes for the boat itself take the inputs
of where the boat is and what the wind is doing, and publish output of what the
boat should do now. The simulation nodes can complete the circle:
take the input of what the boat wants to do now, and then update the
position and heading of the boat. In the simulation we make the wind
non-constant, as happens a lot in reality, which makes sailing so much more
confusing than the simple diagrams of wind just from one direction.
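The "complete the circle" idea is just a state update: the simulator takes the commands the sailing code publishes and moves a model boat. A much-simplified kinematic sketch (the turn rate and speed are made-up constants; the real simulation nodes also model the sails and a shifting wind):

```python
import math

def simulate_step(state, rudder_deg, dt=0.1, speed=0.5):
    """Advance a toy boat model one tick: turn in proportion to the
    rudder command, then move `speed` m/s along the new heading."""
    heading = (state["heading"] + rudder_deg * dt) % 360
    rad = math.radians(heading)
    return {
        "heading": heading,
        "x": state["x"] + speed * dt * math.sin(rad),  # east
        "y": state["y"] + speed * dt * math.cos(rad),  # north
    }
```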
A map-view video of the waypoints the boat was going through, from the
recording of rosbag messages along with other data, lets us work out what the
boat was doing at that point, and why it was not doing what we expected
it to do. Sept 2016, in Viana do Castelo in north Portugal,
on the River Lima, with a bridge designed by the designer of the
Eiffel Tower. We launched from the bankside. There were 12 teams in
2 classes; ours was in the micro-sailboat class, which is up to
1.5m long, 7 teams in our class, and 5 teams with bigger boats up to 5m,
tending to be from Spain and Portugal.
The competition has been going for a number of years, moving
each year. In 2015 it was the Åland islands between Finland and Sweden,
and 2017 will be in Norway, I think. They asked us if we'd like to
host it in 2017, but we felt we could not arrange that in time.
There are no physical buoys to collide with, and we have no
collision detection on board. On the first day they all
sail together. You are allowed momentarily to take control
via r/c to avoid a crash. There were 5 days of sailing; the first day was for testing,
getting used to local conditions. We discovered that waterproofing is
difficult; electronics and saltwater don't mix well.
The part that switches between automatic control
and the r/c: luckily we brought 2 of those, as the first got
destroyed by saltwater. We acquired some sanitary towels in
Portugal to soak up excess water in the hull, which worked well, incidentally.
A box duct-taped to the outside of the hull contains the
competition's own GPS tracker, for a separate log of the boat's
track, so they can score the competition.
The first race was to go round 4 markers; the quickest to go round
all 4 would get the highest score. Not for us though.
The second day was station keeping: just 1 marker, and stay as close
to the marker as possible for 5 minutes. This sounds easy on land,
but there is no stop for a sailing boat; it is always blown by
the wind and pushed by the current, and there was a very strong current
in this river, we found on the test day. You have to keep moving to stay in
one place, like Alice in Wonderland.
Q: Any detection for current?
No, the boat judges where it should be going and steers by that.
Q: You don't get it via the GPS system?
You can try and work out "I'm not going where I think I'm
going". We did not do that, as there were a lot of other things to do, but you could
in theory pick up some measure of the tide via the GPS.
The third day was a grid search: an L-shaped grid of 27 boxes,
and we had to get into as many of the boxes as possible.
The fourth day was a collision avoidance day: going back and forth
along a narrow course, and at some point they towed a line of
big orange buoys across the middle of the course.
The boat had to detect these, swerve round them, then return
to course and continue.
By the 5th race everyone had discovered the current is
really strong. At the start the wind was going one way,
with the current as well, so very challenging.
Of the 12 boats in that competition, 2 boats managed to start
the race and one boat managed to finish.
The other starter made it past 2 markers but failed to
turn at the third. Our boat was not at all successful,
partly due to the current. There are 2 servo motors, one controlling the
sail, one the rudder, and on this day we managed to plug the
rudder servo into the sail servo system and vice-versa.
We had a boat that went round and round in beautiful circles,
with a lot of exasperated humans on the bank.
That was something you have no hope of diagnosing from the
computer logs, because the boat is doing everything as it should;
it's entirely a hardware problem. As a result of that, we added
some code so that when we start the boat, it wiggles the rudder in a distinctive
pattern, then puts the sail out for a few seconds and then all
the way in for a few seconds. We never made that mistake again, but we
did make other mistakes.
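That wiggle-at-startup check is easy to sketch. The `set_rudder`/`set_sail` functions here stand in for whatever actually commands the servos (a hypothetical interface; the real commands go through ROS):

```python
import time

def startup_selftest(set_rudder, set_sail, pause=0.0):
    """Drive each actuator through a distinctive pattern at boot, so
    a human watching the boat can confirm each servo is plugged into
    the right channel before the boat goes autonomous."""
    for angle in (-30, 30, -30, 0):   # rudder wiggle, then centre
        set_rudder(angle)
        time.sleep(pause)
    for sheet in (1.0, 0.0):          # sail all the way out, then in
        set_sail(sheet)
        time.sleep(pause)

# record what a run would command, using stub actuators
moves = []
startup_selftest(lambda a: moves.append(("rudder", a)),
                 lambda s: moves.append(("sail", s)))
```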
There was a whiteboard in the clubhouse with co-ordinates written
on it, lat and longitude. Most of the other boats did much as we did,
luckily for us in the overall scoring. There is an amazing GPS log of the French boat
that managed to finish, because it did very tight zigzags all
the way up a few hundred metres on one side, for about 1/2 hour,
then zoomed around the rest of the course.
Day 2, staying on station: that part of Portugal gets sudden
fogs that come up from the river. A new meaning for getting data out of the
cloud. We were sitting on the bank, numbers coming in, but we could not
see the boat at all. For the first minute you stay in one small circle around the
point, and then for 5 minutes the score is the radius of a circle that contains 95%
of the track. Fiddly to work out, but done by computer.
We managed a 25.3m radius, good enough for second place in that
challenge. The Welsh team won that challenge, staying within a 5m radius.
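The 95% figure is simple to compute once you have the track: sort the distances from the station point and take the 95th percentile. A sketch, assuming the GPS track has already been projected into metres:

```python
import math

def station_radius(track, station, fraction=0.95):
    """Radius of the smallest circle centred on `station` that
    contains `fraction` of the track points; `track` is a list of
    (x, y) positions in metres relative to some local origin."""
    dists = sorted(math.hypot(x - station[0], y - station[1])
                   for x, y in track)
    idx = max(0, math.ceil(fraction * len(dists)) - 1)
    return dists[idx]
```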
Day 3, getting into the most squares: we managed a fair few of the
squares, again good enough for second place; the Spanish
team did the best there. We discovered it's not a good idea to use the
launch file from yesterday, as the boat started off in a loop
to where it had been doing the position keeping the
day before. The large grid was 60x60m, divided into 10m
squares. Luckily they allowed us a second go after our boat
sailed off in the wrong direction, otherwise it would have
been nul points for that.
Day 4, obstacle avoidance. We got a USB webcam, the same as
a simple Skyping one, and fitted it to the bow, cable running back
to the boat computer. We could look at the buoys beforehand,
so we knew what they would look like. We wrote a simple bit
of computer vision code that basically just counted how
many pixels were orange. So we had to define the range of
colours for that orange, then decide the minimum number of
orange pixels before it decided to avoid. The camera we had brought
was not suited to the bright outdoor Portuguese sun, and we were getting very
washed-out pics from it. The solution was a trip to a supermarket
for some cheap sunglasses; we popped out a lens and sellotaped it
over the camera lens. It worked. The camera was put in a plastic bag to
keep the water off.
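The buoy detector really was that simple: count orange-ish pixels and trip a threshold. A sketch with made-up RGB bounds and threshold (the real values were tuned by pointing the camera at the buoys on site):

```python
def count_orange(pixels, lo=(180, 60, 0), hi=(255, 160, 90)):
    """Count pixels whose (R, G, B) values all fall inside a crude
    'orange' box; `pixels` is any iterable of RGB triples."""
    return sum(all(l <= c <= h for c, l, h in zip(p, lo, hi))
               for p in pixels)

def should_avoid(pixels, min_orange=50):
    """Trigger avoidance once enough of the frame looks orange."""
    return count_orange(pixels) >= min_orange
```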
It sailed back and forth along a 150m course, and they towed the buoys into the
path of our boat. We were sitting on the bank, watching the
dashboard, a figure saying "not detected", repeating; then, just about
as we were to hit the buoy, "detected", but unfortunately too late
to swerve out of the way. The proportion of the image required to be
orange may have been set too high. Also, they had told us the
buoys would be in the middle 50m section of the 150m course;
according to their GPS they were, but according to our
GPS the buoys were at one end of the course. Our boat had
just left the area we had set for it to decide whether or not to
swerve. We collided with a buoy. This may not seem good, but it
was good enough for us to get first place in that challenge,
as the course was very long and narrow and none of the
other boats managed to stay in the course. It may have been pure luck
that our slot was just as the tide was at low tide and the current was not
pushing the boat.
Q: Perhaps you should have collision avoidance on at all times?
We have to go round buoys at other times, and we did not want them
detected by that system then. Perhaps we should have made the
observation area more generous. GPS is very consistent with the
same unit, but there was a questionable offset between our GPS
reading and their GPS reading. After all that, we managed to get a win
in our class. If you did not manage a valid run on a given day you got
8 points: 7 boats in the class, plus 1. First place gets 1 point,
second 2, and so on, and the lowest score wins. So by getting 3 valid
entries we managed to come in first overall, which we were all
surprised by, as none of us had done this sort of competition
before. We learned a lot doing this, and had a lot of fun.
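That low-point scoring works out like this (the boat names and day-by-day rankings below are invented for illustration):

```python
def overall_scores(day_rankings, boats):
    """Low-point scoring: each day, finishing position = points
    (1st = 1 point), and any boat without a valid run scores
    len(boats) + 1. Lowest total wins. `day_rankings` is a list,
    one entry per day, of the boats that made a valid run, in order."""
    dnf = len(boats) + 1
    totals = dict.fromkeys(boats, 0)
    for ranking in day_rankings:
        points = {boat: place + 1 for place, boat in enumerate(ranking)}
        for boat in boats:
            totals[boat] += points.get(boat, dnf)
    return totals

# two days, three boats: C never completes a valid run
totals = overall_scores([["A", "B"], ["B"]], ["A", "B", "C"])
```

With 7 boats in the class, a day with no valid run costs 8 points, exactly as in the talk.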
My take-away from this is: reliability beats performance.
That your boat works is better to focus on than making it work well.
It did not do any of the challenges brilliantly, but it did
all of the challenges, which was a lot of the scoring.
Lots of really simple things can go wrong: plug the wrong thing into the
wrong thing, use the wrong file and misdirect it, water getting in
because you've not sealed it well enough. We had one day when the boat
was doing something funny; the chase boat went after it to pick it
out of the water, and it was noticeably heavier because it was full of water.
This was a spare-time project for all of us; we all work on other things
at the uni. My work is software, programming stuff, and this project
brought home to me how challenging it is doing hardware stuff.
There is a whole new array of things that can go wrong when dealing with
hardware, that otherwise a computer deals with.
We will be doing it again; we returned the boat to the water for the
first time since the competition only a few weeks ago,
with lots of ideas on how to make it go better.
Are you using the heel sensor data for anything?
It does get published, but it's not of much direct use. It does get used
in the same nodes that publish it, because the compass data ...
Do you have any plans for optimising for the wave conditions?
It's not something we've done anything with yet, something we wish to
investigate more. Particularly in choppy waves, as a small boat
doesn't have much momentum, it finds it difficult to tack.
As soon as it turns into the wind it loses momentum, loses
steerage. So we have some ideas for optimising by tacking on
the down slope of the waves.
I was thinking of sailing freer, sails farther out and lower, to pick up speed before tacking?
We've not thought about that. Making the boat longer would help with this.
A physical solution to this sort of problem is often better than
some smart algorithm.
If you truly roboticised it, you'd do as a human would do and
set the sails farther out to pick up speed, so the VMG
would possibly be the same as going further off the wind. This
happens in small human-crewed boats?
We don't have very accurate velocities, just based on the GPS.
So we need a way to integrate the GPS sensor with something
like accelerometers, to work out accurate velocity feedback.
Then that would be possible, a good idea. How to tell if the
sea is choppy or not: maybe a camera system, certainly another
sensor required. Then experimenting between human observations
and trial runs to find correlations.
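The GPS-plus-accelerometer integration mentioned here is often done with some form of complementary filter; a sketch of that idea, assuming along-track acceleration is available (the class and its gain are hypothetical, not the team's code):

```python
class SpeedFilter:
    """Complementary filter: dead-reckon speed from along-track
    acceleration between GPS fixes, then blend each fix back in.
    alpha sets how much each GPS reading is trusted."""

    def __init__(self, alpha=0.02):
        self.v = 0.0          # current speed estimate, m/s
        self.alpha = alpha

    def update(self, accel_along_track, dt, gps_speed=None):
        self.v += accel_along_track * dt        # integrate the accelerometer
        if gps_speed is not None:               # correct drift on each GPS fix
            self.v = (1 - self.alpha) * self.v + self.alpha * gps_speed
        return self.v
```

The accelerometer supplies the fast response between fixes; the GPS keeps the integrated estimate from drifting.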
Do you have different sets of polar diagrams for different
sets of conditions?
When you're tacking upwind, how do you decide on lots of short tacks
or longer tacks? Based on how far you are away from your straight-line
course, or set distances maybe?
The initial thinking was to detect the laylines. This is where we
found the changing wind direction makes it tricky. The wind changes
and the boat thinks the laylines have swung out 90 degrees, so we have
some code that tries to average the wind direction. As we get closer to the
waypoint we are trying to reach, a thing we call tack voting
cuts in: rather than considering "am I past the layline at this moment",
it keeps a 10-second rolling count, sampled every 1/10 second, of "did I
think I was over the layline and ready to turn". Once that number
hits 75 then it will turn, which has the nice side effect
that, once it's doing that, it won't turn more often than every
7.5 seconds, because you want some gap between your tacks,
to let it build up a bit of speed.
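The tack-voting scheme (a 10 s window at 10 Hz, turning at 75 votes) could be sketched like this; a reconstruction from the description above, not the team's actual code:

```python
from collections import deque

class TackVoter:
    """Vote each 0.1 s sample on 'am I past the layline?'; tack once
    75 of the last 100 samples say yes. Clearing the window after a
    tack means at least 7.5 s must pass before the next one."""

    def __init__(self, window=100, threshold=75):
        self.votes = deque(maxlen=window)
        self.threshold = threshold

    def update(self, past_layline):
        """Call at 10 Hz; returns True when the boat should tack."""
        self.votes.append(bool(past_layline))
        if sum(self.votes) >= self.threshold:
            self.votes.clear()   # start counting afresh after the tack
            return True
        return False
```

The rolling vote filters out momentary wind shifts that would otherwise trigger a tack on a single noisy sample.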
Do you know the French boat did this, lots of little tacks?
They were using a vector-field approach, a vector-flow approach. They set
up some kind of virtual obstacle and a point of attraction,
and between those you can work out optimum fields and read off the
direction you want to sail. Hence that team doing lots of small
tacks all the way round; artificial potential theory (well accepted in general
robotics control) gives that.
The French boat was in a different class, about 1.6m long,
super light, which means it can easily tack in difficult
situations. The net effect was their boat stuck much closer to
the line between waypoints.
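The artificial-potential-field idea can be sketched in a few lines: attraction toward the goal, repulsion from obstacles, and the resultant vector gives the desired course. The gains and repulsion radius below are illustrative values, not anything from the French team:

```python
import math

def potential_heading(boat, goal, obstacles,
                      k_att=1.0, k_rep=50.0, influence=20.0):
    """Desired course (degrees, anticlockwise from the x-axis) from an
    artificial potential field. Points are (x, y) in metres."""
    fx = k_att * (goal[0] - boat[0])            # attraction to the goal
    fy = k_att * (goal[1] - boat[1])
    for ox, oy in obstacles:                    # repulsion inside 'influence'
        dx, dy = boat[0] - ox, boat[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:
            push = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += push * dx
            fy += push * dy
    return math.degrees(math.atan2(fy, fx))
```

Following the resultant at every control step is what produces the many small course corrections seen as short tacks.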
Do you have a sensor for how much the boat heels over?
Yes, from the accelerometers, sensing gravity. At the moment it's
only used to correct the compass.
Are you allowed a second remote sensing system, in the water, to
detect tidal current and transmit that to the boat?
I don't think the rules outlaw remote sensors on the
shore or whatever. I don't think any team is doing that.
We did not have a reliable radio link either.
Could you use a parabolic dish rather than the usual wifi thing?
I don't know the internal geometry inside the white box; it
is long-range and highly directional, but the range was not enough.
For your servo systems do you have something more subtle
than the normal proportional control: a suck-it-and-see marginal
shift, to test out and then back off, something more sophisticated
as you have a computer on board?
No, the servo control is the standard pulse-width modulation.
One thing we've been thinking about is that a human sailor will
look at the sail and, if he sees it fluttering, you need to pull
in a bit more. Could we have a vibration sensor mounted
on the sail itself, also measuring the tension in the sheet?
So you are sailing at a specific angle to the relative wind rather
than looking for the point of flapping/luffing. So sailing at
a conservative 45 degrees, say, instead?
At the moment it is just a hard-coded angle: a hard-coded table
of, if the relative wind is 90 degrees then the sail angle is x,
and it adjusts within the angles it knows about.
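A hard-coded table like the one described, with linear interpolation between the angles it knows about, might look like this (the numbers are invented, not the team's calibration):

```python
# (relative wind angle, sail/sheet angle), both in degrees; made-up values
SAIL_TABLE = [(30, 5), (60, 20), (90, 40), (135, 60), (180, 80)]

def sail_angle(rel_wind):
    """Interpolate the sail set-point for a given relative wind angle,
    clamping outside the table's range."""
    rel_wind = abs(rel_wind)
    if rel_wind <= SAIL_TABLE[0][0]:
        return SAIL_TABLE[0][1]
    if rel_wind >= SAIL_TABLE[-1][0]:
        return SAIL_TABLE[-1][1]
    for (a0, s0), (a1, s1) in zip(SAIL_TABLE, SAIL_TABLE[1:]):
        if a0 <= rel_wind <= a1:
            return s0 + (s1 - s0) * (rel_wind - a0) / (a1 - a0)
```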
Did you ever get involved with strategies of stealing other boats'
wind or that sort of thing?
No, there was 1 day of competitive racing scheduled; all
the other challenges were individual boats, one at a time.
Even at the fleet race, no one was at the point of being
capable of stealing another boat's wind; just going in the
right direction was quite enough.
Is it the same challenges each year?
Similar each year but not the same. The organisers at the site get to
organise what the challenges are. The computer-vision challenge
with buoys was new last year, replacing a challenge from
the previous year that involved collecting data from added sensors
on the boat.
You know about the challenges before the event?
Yes, we could practise them beforehand.
So you would know in advance they would be orange buoys?
We could do calibration with the GoPro with real
objects on the water at the site, to check our coding
recognises the object.
How many algorithms have you got running?
Each line with a node is one bit running, so 15
to 20 things running.
That is the tasks, but the algorithms to interact with data from
multiple sensors: is there an algorithm to manage that?
Each sensor has its own thing pulling the data from it; there is
really only one core algorithm that is deciding where to go next.
So one algorithm taking all the data in and deciding how to
set the sails at any one point in time?
Yes: it tries to go in a given heading, and the sail control
goes separately, keeping track of the relative wind.
It's smart enough to know it can't go directly into the wind,
and there are different tasks that can switch in, deciding what the boat
should be doing now. A different bit of code for the obstacle
avoidance, for example.
They always choose tidal rivers and not nice quiet reservoirs?
The previous year it was in the Baltic Sea; next year will
be in Norway, presumably a fjord.
If you don't use the heeling sensor, could you not turn it
90 degrees and use it as a pitching sensor?
The accelerometer is 3-axis, so we have pitch as well. So pitch
and roll off that, not yaw. It gives 3 acceleration readings and
we convert that to pitch and roll.
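Converting three static acceleration readings to pitch and roll uses gravity as the reference; a standard sketch (axis conventions assumed: z reads about +g when the boat is level):

```python
import math

def pitch_roll(ax, ay, az):
    """Pitch and roll in degrees from a static 3-axis accelerometer
    (z reads about +g when level). Yaw cannot be recovered this way,
    which is why a compass is still needed."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return math.degrees(pitch), math.degrees(roll)
```

This only holds when the boat is not accelerating, since any manoeuvre adds to the gravity reading.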
How long did you spend on the project?
We started in Jan 2016 and the competition was in Sept.
We had meetings 1 evening a week and occasional weekend days
of working on it or testing it.
Apart from being a bit of fun, is there anything to be learnt
for sailing in general?
There is a challenge called the Microtransat: a boat smaller than 2.4m,
which must be wind-powered, sailing from the UK to the USA.
Loads of people try it every year, but no successes yet.
If we got a boat like 2.4m, what can we do with it? We can
collect environmental data, monitor sea levels, check
water quality and waves across the Atlantic. Wind energy
is virtually unlimited, no fossil fuel consumed.
We're not advancing humanity at the present stage, but in
the longer term we open-source all the projects, to anybody
interested. We could monitor fish populations, water quality, that
sort of thing. It would be interesting and cost-effective.
All our kit costs: a second-hand boat at about 200 quid, and all the
electronics add up to no more than 100; the Raspberry Pi
at 30 quid is the most expensive bit, all hobbyist sort of stuff.
Are you satisfied with the data from entry-level kit?
We've stared with envy at a much higher quality
accelerometer on display at Ocean Business at the NOC
recently. All sorts of hi-tech gizmos. A lot of the stuff we are happy
with. We are currently trying to integrate the GPS
and accelerometer so we have a speed reading.
Do all the teams share their data and ideas? Perhaps
binocular vision or sonar, for instance?
One team was doing a sonar thing, not underwater but
ultrasonic in air. With sonar under water there are so many
reflections. Most teams were like us, with a camera on the boat.
Do the rules permit wing-masts and hydrofoils?
I think the rule was anything is allowed as long as it is powered by the wind.
A wing-mast might be simpler to control than a pair of sails,
a double-sided sail wrapped around the mast?
A couple of teams did wing-sails, so allowed.
A Flettner-rotor type thing, which required mechanical power to
rotate the rotor to then grab wind energy, would not be allowed?
You're allowed a linkage from a wind-capturing something like a
propeller, as long as the only source of motive power is the wind.
Have you any contacts with the big-boy autonomous, huge trading/cargo
sailing ships that are just coming off the drawing boards, multi-mast and
huge sail arrays but just 1 human on board, wherever there are reliable
trade winds around the world?
We must share something in common in the way of the control
systems, but we have no direct contact.
You've not found any use in conformal coatings over the electronic
gizmo boards, just waterproof boxes?
We did use Plasti Dip on some of the electronic boards, which gives a
kind of waterproof coating. More recently we were told of stuff called
Magic Gel. You put your electronics inside a box and fill it with the
gel; it goes solid and is not conductive. It's a bit like Argo floats,
which immerse all the electronics in oil, so there is nowhere for the water to get to.
Wondered if you had a problem with condensation as much as seawater?
We've not had condensation problems.
Do the organisers allow you to see their GPS system beforehand,
as you said yours and theirs were different? A fixed offset all the
time or varying?
There was nothing secret about their GPS. We didn't get to
look into it as their boxes were taped shut. As there was something like
30m difference between the two in the collision-avoidance
challenge, it may be sensible to place their system and ours in
one position prior to the race next year, to check for any offset.
Were the grids the same for the different classes?
I think the bigger classes had bigger grids to search, 20x20m grids;
we had 10x10m boxes.
Were the bigger boats better at the tasks?
In some tasks yes, not necessarily due to the size of the boats.
Teams bringing bigger boats were possibly better resourced and
more experienced, like the French team.
Generally the bigger boats were better at picking up speed
before tacking, and the speed is relative to the boat size.
The Froude number is much larger for the larger boats.
What are the challenges to get one of these boats to cross the
Atlantic? Just funding for a more robust boat, or?
Getting a tiny boat across the sea has many problems. We know of
a boat being kept by some fishermen, another attacked by a shark.
Some servos did not last even 24 hours, because of severe sea
conditions. Waves of 7 or 8m with a boat that is only 2m: not nice.
There is a team near London that launched a Microtransat
attempt from this area; it got into the Channel but, with the tides and
things, it never got out of the Channel, just being pushed back
and forth, and eventually washed up on shore.
You need the endurance of power for the computers as well.
We currently use a USB power bank that would otherwise be used to
boost a mobile phone's power; it works well. Also a set of AA batteries for the
servos. The Microtransat boys have solar panels on theirs, and
batteries so it doesn't die in the night.
Just the ability to keep going without stuff breaking,
for the length of time involved, and to make headway against wind and tide
and big waves. A number of teams start out each year, going west and east
across the Atlantic, and so far no success.
How does the robotic control fare against human control, say via r/c?
When everything is going smoothly , then the robotic is comparable to
someone without much experience of r/c sailing. A good r/c sailor
could always beat our boat.
Monday 12 June, Dr Roeland de Kat, Soton Uni: Forces and
turbulence in avian flight.
27 people, 1.5 hours
Over about 10 years I've done bits and pieces on avian flight.
Today I'll talk on forces and turbulence. A lot of this work
has been done with David Lentink, who now has his own lab
at Stanford. I'll squeeze in some paravian flight and finish
with avian turbulence. The main reason I'm into this is because these
little creatures are amazing. You see them flash by and you don't fully
appreciate what is going on. I spent a day with a high-speed camera
chasing gulls on Southampton Common. In a 400 fps video of one in slo-mo,
it flares off, stops in mid-air, drops down seeing something I could
not see, takes a fish in the beak and goes straight up and out.
A lot of things are going on there and a lot is beyond my expertise;
that's why we need to work with different types of people.
My background is aerospace engineering and so I can figure out
some of the elements of its flight. A pic of a swift: amazing
fliers. David Lentink picked the swift because of its intermittent
flapping and level flight. As soon as you see something flapping,
engineers and biologists say that's way too difficult.
So we need to know a bit more about what is happening.
One thing David observed, in seeing them fly, is they change their
wings: have them spread out or swept back. I was doing my
masters in Delft and David asked me to work on this, swift flight.
So we looked at morphing wings: how they control the glide
performance of swifts. What are the forces that act on the wings,
how do the forces change as they change wing shape, and what does that
mean for flight? We both had engineering backgrounds. The sweep came
out at 5 degrees and 50 degrees when it was meant to be 0 degrees
and 60 degrees: somewhere in the process of removing the
body of the bird and freeze-drying the wings, the wings did something by
themselves. When the wings went in the freeze drier, they thought they
had them at 0, 15, 30, 45 and 60 degrees, but we needed to quantify it.
So part of the wrist sets the sweep angle, and that changes. We don't always get
what we want. A colleague, a true biologist: we said to her this is not
truly 0 degrees; she said it's within 15%, that is biological variation, it
explains everything. We went about quantifying different sweep
angles and a few other things we care about when talking
about aircraft and flight: the aspect ratio, generally linked to
how efficiently the wing performs, and the wing area, as the bigger it is the
more lift produced. Classically these are just parameters: you pick,
you set and then forget about it and design your aircraft. Our flying
machine includes multiple wing areas, multiple aspect ratios.
The first tests were forces, classically drag and lift; we want the
highest lift possible for the lowest drag. If you want to compare
a swept-back wing to a straight wing: one bird, one wing,
can change it. So instead of going at it like an engineer and
normalising everything into non-dimensional things, we need to
take into account that it can change its wing area. If you put the
wing area back into the equation, then you see the differences between
the different stances become much larger. The envelope
plot goes from wings straight all the way to swept back.
If you change the flow velocity, keeping the medium (what is
flown through) the same and keeping the size the same, the change of
velocity changes the Reynolds number, the parameter that tells
you how difficult it is to deal with the flow. The higher the Reynolds number,
the more complex the flow gets; the lower the number, the less complex the
flow gets. If you add particles it may get more complex again;
one of my colleagues works on that.
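For scale, the Reynolds number is Re = V·L/ν; taking the swift's mean chord of about 37.5 mm quoted later in the talk and air's kinematic viscosity (ν ≈ 1.5e-5 m²/s at room temperature, an assumed standard value), the wind-tunnel speed range works out as:

```python
def reynolds(velocity, length, nu=1.5e-5):
    """Re = V * L / nu, with nu defaulting to air at room
    temperature (about 1.5e-5 m^2/s)."""
    return velocity * length / nu

# Swift chord ~0.0375 m over the 5-30 m/s tunnel range:
# reynolds(5, 0.0375)  → 12,500
# reynolds(30, 0.0375) → 75,000
```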
Adding this in, we have to take into account that this occurs in a flow
regime where things change: they change from laminar to turbulent.
If we add those different velocities into play, 5 metres per second to 30 m/s, cranking
up the wind in the wind tunnel, the dashed envelope line changes
further. We can scan the parameter space of what we expected these
birds could do. A bunch of numbers that don't necessarily
mean anything. We took those numbers and put them into a glide
model. They are flying in different poses and we know they
are fully balanced. We have the lift and the drag from our
equations and an estimate for the weight; then we say, if it flew with this
velocity what can it do, and at a different velocity what can it do.
Not just a glide, but what if it flies in a spiral? Swifts swirl across
streets: they glide and like to turn rapidly as well, quickly
changing their sweep angle.
Generally in aerospace engineering we ignore a few things. Generally
we say gamma is small, a small angle, so we can neglect a whole load
of terms. But we needed this to fully describe bird flight.
Equations, looking at performance indicators: how far forwards can it fly
with a 1m drop. Or, for 1m down, what is the slowest I can go down: the sink
velocity, the glide ratio. Maybe it wants to escape a predator, maybe you
want the ground speed to be the highest. Then some turning velocities
and performances as well. What's the largest turning angle we can
get per metre of descent, what is the tightest turn we can make, what is the
quickest we can turn. Below 45 degrees, gracefully falling; above 45 degrees,
gliding flight. An easy cut-off, 45 degrees. Most follow the
same trend: the efficiency measures peak at the lower velocities, and the
efficacy measures peak at the higher velocities.
Looking at the glide ratio, we can already see a few interesting things.
The peak is exactly where we expected. At undergrad-level aerospace, for a
straight wing, the highest aspect ratio is the best. A glide ratio of
about 11. Estimates for albatrosses don't go much higher, 15 or so.
So a pretty good glider. As we increase velocity, the straight wing is
not the best any more. Initially this was a surprise: why is that happening?
Looking a bit closer into this, it is what we could expect
if you take into account the area of the wing: the wing area changes.
Return to the curve with high lift, low drag; that needs to balance with
its weight in flight. Increase velocity and the coefficient goes down:
poorer performance. As we go to swept-back wings, performance gets
better again. Performance improves at higher velocity
purely due to the area change. Change your area and you can stay at the
better-performing part.
Q: Does the angle of the wing vary over the wingspan as well, in this change?
Likely. But that's an additional challenge, not included in this
presentation. We looked at the deflection of the wings, from the
rear, at different velocities, and they do deflect a lot. Lots of pics show
that swift wings are pretty well planks as wings; not a lot of
twist present. With our prepared wings, the twist was not
big enough to quantify.
Q: When you say the aspect ratio is very high, does it change the
wing area?
Wing area plays a role as well.
Q: But on an aircraft, wing area is always the same, irrespective
of sweeping the wings back, so why is this different?
The feathers overlap; when they start overlapping more, the
area changes. The benefit is that, instead of having the small
difference between straight and swept-back wing, here there is a huge
difference because area plays a role. Equilibrium balances at about
7 m/s; if it goes faster and faster, there is a curve that says
equilibrium is lift^2 + drag^2 = weight^2. When that moves down,
it shows how swept-back wings perform better at higher velocities.
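The equilibrium condition lift² + drag² = weight² fixes the glide speed for a given pose; writing lift and drag with standard force coefficients gives a closed form. A sketch with illustrative swift-like numbers (mass ~40 g, wing area ~0.016 m², coefficients invented, none of these taken from the study):

```python
import math

def glide_equilibrium_speed(CL, CD, mass, area, rho=1.225):
    """Speed at which sqrt(L^2 + D^2) = W in a steady glide, using
    L = 0.5*rho*V^2*S*CL and D = 0.5*rho*V^2*S*CD."""
    W = mass * 9.81                   # weight, N
    CR = math.hypot(CL, CD)           # resultant force coefficient
    return math.sqrt(2 * W / (rho * area * CR))
```

With CL = 1.0, CD = 0.09, mass = 0.04 kg and area = 0.016 m², this lands in the region of the ~7 m/s equilibrium mentioned above; sweeping back shrinks the area, pushing the balance speed up.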
But does it really fly at those speeds? We took data from a
different study (Bäckman &amp; Alerstam) of the most probable flight
velocities, the most observed flight velocities.
The velocities swifts are recorded at fall in this range, and fall
in the range of all the efficiency parameters, not the efficacy parameters.
A lot of measurements, a lot of observing a wing in a wind tunnel
doing nothing at all, for about 2.5 weeks, 9am to 3am: very
tiring. Now for the vortices. If we take these wings, placed in a
wind tunnel, what I found intriguing, what sold me on it, is we have
leading-edge vortices. Before I started my internship in 2004, these were
shown on a model wing with vortices. But engineers would say that in the
1950s we had such vortices on swept-back wings. Something else with
wings may have a role: they are porous. So do the LEVs make the
wing more efficient? We need a way to capture them and measure them.
So we took a tin can, cut a hole, welded something into it, took a cigar,
put it in, added high-pressure air, and had a rake on the other end:
hold the rake in front of the wing, puff the smoke and you can see
the vortex. That failed miserably because cigar smoke is very
moist, with a lot of tar, so it immediately clogged up the tubing.
After trying loads of things, I took a tuft of my hair and we used that to
visualise the flow. If you see rotation, then the flow is pushing the
hair around, i.e. a vortex. You can follow it into the tip vortex;
the key is it started turning in the position of a LEV.
So we logged the moving of the hair around and whether we saw
rotation or not. Pictures of where there was a cone and not a cone,
showing, for the rake used, that it is not created by the rake or what
we were doing. So there were LEVs, but not present in any of the cases
where there was peak efficiency. So wherever the swift flies most
often, there are no LEVs. Where they did show up, and what 1950s
engineers designed around as well, you have increased efficacy:
a peak lift that you can create, so you can turn very quickly.
If in a dog-fight or chasing insects, and the target goes off in one direction,
you need to go after it. That's where it comes into use, where they use their
LEVs. We've only touched the tip of the iceberg of research
into bird flight. A lot of current research is into capturing what living things
do, and building it ourselves.
I moved to Soton to look at turbulent boundary layers and develop
experimental techniques. The first thing I looked at was a feathered
dinosaur. So a small dinosaur and a similar approach to the
swift research: take a model, place it in a wind tunnel, get forces,
make predictions as to what it could do.
From the fossil record, such creatures are flattened out, with feather
material. So a crow of its day, iridescent feathers; but could it
fly, and how well did it fly? Some previous research said a CL of 1
sounds good, a glide ratio of 15 sounds good; combine them and it's a very
good flier. So a colleague, Colin Palmer, found a pigeon in his yard,
bought a duck and created a model. A long tail, feathers on the
legs, and feathers on the wings. What's this with the legs?
Palaeontologists were bamboozled; perhaps everything was spread out.
Some people, when young, can put their feet behind their head.
We projected reasonable extremes, given the bones, of what it could do:
legs sprawled or legs down, and asked does it fly.
We put it upside down, because the balance in the wind tunnel is
at the top: a 2.5 x 1.5 m tunnel with the animal in there at about 0.6m span.
Something at the rear pushes it up and down and changes the
angle of attack; little weights and servos capture the forces.
Lift and drag as before, but this time we add the moment.
With centre of gravity and moment non-zero, it will
rotate. We wanted to make sure that, whatever we say it can do,
it doesn't turn like a leaf and roll or flutter down.
So the moment about the centre of gravity needs to be 0.
We tried to measure the moment in the earlier swift work
but we failed. Luckily there are plenty of accounts that
swifts can fly, and do so without their tails spread, in the vast majority
of poses. For swifts we could ignore the tail and it was fine; the findings
were not affected by not having a moment.
Other researchers avoided this also, saying there were various points
at which it could fly. We accounted for it with the speed-specific dynamic
force, which is basically the total force, then split that into lift and drag.
The glide ratio, the speed-specific moment, and regions where it could fly and
some areas where it is not stable. If it moves up, that makes the moment
larger, it keeps moving up and you get a confetti effect. To fly there
it would need a big brain, which is up for debate.
Elsewhere it is stable, not requiring a brain: just jump out
of a tree and fly.
So jump out of a tree, take your pose and see what happens.
With fairly simple assumptions you get to glide paths. Initially this showed
legs down is clearly better than legs sprawled. Then the engineer comes in:
maybe it could move its arms back and forth, so we need to
account for that. We modelled that by saying it could move its centre of
gravity with respect to the lifting surfaces. That tilts/shifts the moment up and down.
As the big/small brain debate is still ongoing, we just say there is an
unstable part and a stable part. If it has a small brain and is not a good
flier, it can fly in that section. With a big brain and advanced controls,
it can fly there as well. Stable areas: no thinking required. Areas where
it could glide but needs to work hard. Compare the 2 plots and
it's not that different. Jump out of a 30m tree, follow the glide path: if
you want to go farther then legs sprawled, we think, as it has a bigger
lifting surface. Go to the side, with the wing feathers, and it goes up.
I started looking at turbulent boundary layers (TBL) on bird wings.
A flat plate is boring compared to a bird wing. We thought
behind this was some meaning as to how flight evolved in
feathered creatures. A wing is not a flat plate: the feathers overlap,
and in the overlapping there is a roughness. Feathers are not all
smooth, nor flat on top. To get a good force out of an
aerofoil, first-year students are told it needs to be flat, a sealed
plane, flatness 0.000-something %. So how does the
roughness work here? The rachis, with the vanes around it.
We shone a laser with a lens in front, took a picture and got a cross-section.
Fairly smooth, and as we move outwards it gradually gets more
corrugated; it looks more like a dragonfly wing than a bird wing.
Swifts fly much faster than dragonflies: a different flow
regime. Measure them and subtract the average curvature, because that
has nothing to do with the surface roughness; colour-code the
height, then compare that to the average chord, about 37.5mm.
Peak to peak it is 0.8mm, about 2% of the chord instead of those 0.000-something %.
So very rough; it must have an effect on the flow, it must be fully
turbulent. So the wing goes in the wind tunnel, to find where the flow is
turbulent and where it is laminar; skip the bit in the
middle as too difficult to deal with, but there is something other than
those 2 flow regimes. If we used a hot-wire, as for measuring velocities,
it would probably cut the wing. Use smoke and you might create
a wet wing, very different to dry birds. The best thing we came up
with was a microphone with a very long tube on it. Build it right
and it's only sensitive to pressure fluctuations at the tip of the tube.
Traversing the wing, there are patches of single-pitch tonal
noise: not turbulent. The hissing sound is laminar separation.
So we move our listening tube along the wing to find where the sound
changes. It does not change randomly, as turbulence is
quite well defined in a broadband signal. The tonal noise we were not
sure what to do with; I'll come to that. Put it through a Fourier-transform
analyser and you get what frequencies are in there: high, low, broad or nothing.
You look for where there is extreme change; that is where we go
from laminar to turbulent. Mark with dots where the changes are, and do the same
thing for 4 angles of attack, times 3 wings. We take 0, half way and
maximum lift-to-drag, which is where it's interesting. Also where there
is maximum lift, or stall, where you'd expect changes to happen.
So we locate where the changes are, but that's not where the
roughness is. Where the roughness is, there is no transition.
Everyone tells you, going through aeronautics, that where it's rough
there is transition to turbulent flow, the worst thing you can do to your
aircraft. Swifts don't care about this; they just do their own thing.
Flow does not change instantaneously; turbulence is not
something that clicks over: now laminar, now turbulent.
There is a transition process; in one place, with a certain Reynolds
number, transition may only occur later on.
So there is an area with laminar flow, and what
that means for the bird.
For different angles, it's primarily laminar. The peak is almost 75%
laminar, or at least non-turbulent, where it will perform its
best. That's where it flies most, unsurprisingly. How do we
know it's not just a thing with swifts? How do we know it's the
roughness? By testing. The swift wing, with calipers, trying to find
where the ridges and valleys are; then I used a laser scan.
We built a wing with thin pieces of tape attached: one
with ribs and one without. We used the listening tube
again on this. The rough has more laminar area than the smooth, at
low Reynolds numbers. As we increase velocity we get
back to normal. Big aircraft have big Reynolds numbers,
flying fast. A swift is a small bird, fast in bird terms but slow
in airliner terms. Right about the range where these birds fly,
rough wings are not bad, in terms of laminar area.
What does that mean in terms of performance? More laminar
area means better performance? Not necessarily.
With our wing, at lower Reynolds we do get better performance.
What is going on in the flow? We still needed an answer.
A masters student in Delft took the roughest of the measured
profiles of the wing, averaged them, measured under a microscope
the leading-edge radius of one of the feathers and estimated the thickness,
to produce a 3D-printed model. Then remove the roughness, make a
smooth model, and 3D-print it. Now anything you machine
or 3D-model will be as you intended it to be,
or not far off. We placed it in a water tunnel. Luckily at low velocities
air behaves as water, as long as you match the Reynolds numbers.
With water, the forces are 4 to 5 times higher, so easier to measure,
or so we thought. 3 cameras looking at it; we placed a load of
particles in there, watch where they go and we get velocity
fields. We tested 3 angles of attack, for 4 different Reynolds numbers,
in the range where we expected changes to happen.
Not many obvious differences. So we took snapshots, summing
and dividing by n. Looking at vorticity, rotation
one way or the other, or shear: at low angles of attack nothing
there, at intermediate angles a little hint of something there,
at high angle of attack definitely something going on.
Vorticity is not too natural a thing to look at. We zoomed into a
small area and looked at vector representations, much easier to
interpret. The vortices that get induced are the
tonal bursts you heard on the video recording.
Vortices move; they're periodic. In the global view we might not see
it. In aeronautics we reduce it to a bunch of numbers describing what the
flow is doing. The boundary layer grows as it goes over an aerofoil; it has a shape
and it can be quantified in multiple ways. Generally we pick the
boundary-layer thickness, at about 99% of the external flow velocity.
Then we can determine how much flow is displaced if, rather than
a curve, you make it a straight profile: the displacement thickness.
Then you can work out how much energy is lost.
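The displacement thickness mentioned here is δ* = ∫(1 − u/U) dy: how far the wall would have to move outward for a uniform profile to carry the same mass flow. A minimal sketch over measured profile points (the trapezoidal summation is my choice, not the talk's method):

```python
def displacement_thickness(y, u, U):
    """Trapezoidal estimate of delta* = integral of (1 - u/U) dy,
    given wall-normal positions y, measured velocities u, and the
    external (free-stream) velocity U."""
    total = 0.0
    for i in range(len(y) - 1):
        f0 = 1.0 - u[i] / U
        f1 = 1.0 - u[i + 1] / U
        total += 0.5 * (f0 + f1) * (y[i + 1] - y[i])
    return total

# A linear profile u = U*y/delta gives delta* = delta/2.
```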
With the boundary layers, you can get an inflection, a separated
flow; flow that is not attached means poor performance.
There are fluctuations around the average profile. Boundary-layer
thickness: not much difference, rough or smooth. The shape
factor is one of the indicators of whether flow is turbulent,
laminar or separated. A bigger wake is generally
indicative of a separated region as well.
For the rough wing, we get vortex trains for intermediate
angles of attack, where peak performance is.
At low angles of attack, no vortices: laminar flow over a rough
wing. At high Reynolds numbers, they both go turbulent.
At high angles of attack, smooth and rough wings differ a little.
This is where a swift wing performs and a smooth wing does not.
The beginning of the wing kicks the flow; it gets agitated and stays
attached. It does not separate; it follows the wing.
Although turbulent flow is bad in causing drag, it's a lot
better than having a separated flow, as that would mean losing
lift and far more drag.
I have a PhD student also interested in feathers but he comes at it
from a different direction, as a biologist. So we looked inside
feathers. Taking a swan feather to pieces, we placed it in a
synchrotron, a large particle accelerator. The particles shoot through
the feather and hit a scintillator, turning radiation into
light, and we look at it with a microscope. Move around 40
times. The core has a patterning , but move to
the outside no patterning. The material properties change.
Also there are multiple layers in a feather. So beyond
avian aerodynamics, it is one of the most
complex advanced composite structures in the world.
So the next research is trying to determine what the layers are, which way
the fibres are pointing . We could then model the composite, tear it
apart again, and see if we can tell something about it.
Hopefully allow us to build better structures, maybe improve
small-scale flying objects, wind-turbines or whatever.
You said the wings were porous, so with bats, where it's just a membrane,
would that behaviour be more like conventional aerodynamics?
Feathers are porous, but for most lifting purposes they act as if solid.
We tried to model the porosity, and we failed. Whereas swift wings
use roughness to keep the flow
attached, what membranes might do is change the camber, the
curvature, giving more lift; but it vibrates more.
With a vibrating membrane, it does the same thing as roughness,
energises the flow, it creates vortices , keep the flow
attached. A completely different phenomenon, but the effect is the same.
How much air goes through the feathers?
There are people that look at the permeability; it's very little.
The latest model I've seen , they've tried to 3D print .
From the bones you have the feathers; from the radius? you get the
barbs. Optically it would seem to be porous, but when you pressurise
it, it tends to flap shut and close. People apply pressure
differences, to see how much goes through.
They try to model that in 3D printed wings, by creating simple
holes, and they are not the answer, as the flow goes through,
resulting in fully separated flow. There is some work on the outermost
primaries of storks, where there is a hole and the rest seems closed;
if they close that gap with wax, the feather, which is a single aerofoil,
performs worse. Little holes may have jets emerging, that may keep the flow
attached on top of the roughness. But that varies, species to species and so
many species out there, it's difficult to say: this is how feathers work.
There is likely some flow coming through; it's too small for us to measure.
With the albatross, a high aspect ratio?
It's about the same as the swift.
Can the albatross change the shape of its wing, for more speed?
The albatross has a little ligament in there; it places it, and it locks.
Without any effort it remains straight. Swifts have loose wings, and they
have to force it. Different species have different mechanical
solutions built in, to help with their flight behaviour.
The albatross flies much faster than the swift, and is bigger, so it's in a place
where it will not benefit from a rough wing.
A barn owl flew in front and across me , one dark night, with a
wingtip just 1 or 2 feet from my head, and I did not hear a thing,
what's going on there, a very downy wing surface?
Multiple things, the easiest one, its very slow.
Slower means less drag and so less sound. The second is flexible
wings, the feathers are less rigid than other birds, so
creates less or lower tone. The design is such that it gets the sound
generated but out of the range of hearing by its prey.
There is sound , just that its not audible to humans. There are
pressure fluctuations, but they're not audible.
It has a velvety surface, the precise texture is difficult to say.
It may have some porosity as well, so the pressure passes through
it rather than creating sound. It has a large wing and so a low wing loading.
So infrasound or just sound below our hearing range?
Found a journal article, "Features of owl wings that promote
silent flight". They say the shape changes, wing area is large
compared to the overall bird. There is a little serration at the
leading edge, that probably creates vortices that keep the flow
attached , as separated flow makes sound, and any unsteady
flow makes sound. Another article measuring the sound from a
wing at different frequencies. One of the influences is the
comb, the serrations, of the leading edge. A lot of people
are interested in owls as they desire to reduce noise in other
areas of flight. They use an acoustic array, a bunch of
microphones and arrange flypasts. With some maths you
can reconstruct where the sound may have come from
at a particular frequency. Repeated with different types
of birds. There was a BBC doc on the silent flight of owls,
where they compared 3 of them flying around.
Everything in biology is designed towards a goal, but it's always a
trade-off against other things like mechanical structure.
Do you feel the roughness of wings is an exploitation of the
fact they've not managed to evolve smooth wings or do you
think it was very deliberate selected for the ideal shape or some
combination in between?
In the water tunnel experiments the forces were comparable
between smooth and rough wings. So if you don't have a penalty
for having a rough wing, or in some cases it may be beneficial
for performance, then its always better to have structural elements
that have some bulk , in the direction of loading , rather than a flat plate.
So probably why little bird-like fliers, millions of years ago
might have taken advantage of having something strong enough
, but still bulky , without losing too much aerodynamic performance.
It's not necessarily driven to that goal, as the main things that
drive evolution are "where is my dinner?" and "I need a mate".
When you can satisfy those 2 , then you're good enough, or maybe
escape predators. That does not mean it drives for perfection ,
drives towards suitability for its environment.
One thing that marks out the swift is that they are pretty much always on
the wing: they eat, sleep and have sex on the wing. Only landing to
sit on eggs or feed the offspring. After first taking off they fly for about
3 years before finding a nest.
How do they sleep?
Like dolphins , brain one side off , one side on
On the opposite side to noise , one thing I've admired about
swifts and swallows is their ability to just turn on a sixpence in flight.
The only fixed-wing aircraft I can think of that does anything like it
is the Typhoon, and even that can't turn as quickly, for the speed it's
going at. Your research on angle of attack and roughness and
sweep-back , does that account for the fact they can turn so swiftly
at its speed or is there something else going on? eg can it stall
one wing and produce maximum lift on the other, in order to
turn as quickly as we see it do?
It might be able to do that, but it might be able to do
better than that. Someone built a robo-swift, mimicking what it
does. It uses asymmetric control to move. I-morph / Bluebear? systems
use a similar structure to move the whole wing; what they don't change
is turning one up and one down to turn like a corkscrew.
So fly the plane sideways . If you end up turning on your own axis then
you do have to use different angles of attack. What they seem to do is:
swept back in a dive, flare out completely, catch up a bit, tail full
out to keep control and then continue. So if an insect made a basic
manoeuvre, it could pounce on it.
Some of the turning-on-a-sixpence may be an illusion because they are moving
really fast; track it out, and it may be more like the curvature of a football.
Its difficult to tell at a distance, we just see it turns.
When the paper on leading edge vortices came out, 2 of my
colleagues wrote a commentary, turning on a dime. Instead of comparing
with the typhoon they compared to the F14. The F14 has very different
reasons to sweep its wings, than swifts.
The hole in the stork wing, is the hole intentionally there?
Slotted wingtips with spread out feathers , you can treat
them as single aerofoils , rather than needing to worry about
how much they overlap . Take one feather, place in a wind tunnel
and see what happens. A little hole just before the turn .
They tested by closing those holes with wax and find the difference.
When there are holes in the feather , it performs better.
So it comes down to , a bit of passing air might be beneficial.
Its not that dissimilar to when we take off or land.
Take all the slats and flaps out, there is more air going through .
Probably tested at one speed only. Not just wind tunnel tests
but stuck it on a car and drove it at the correct speed, to make sure
the ? speed turbulence was not affecting the result.
The reason behind that is sound, even if it sounds odd.
Have you looked at commercial applications of your research?
What is of interest right now is the morphing wings,
lots of different people. My interest is not necessarily
commercialisation , just figuring out what is going
on. With morphing wings we know it performs slightly better.
It can do multiple functions at the same time, but the best design
of how to make it better, we have a masters student looking into
right now: what strategy of morphing wings is beneficial for
performance. Also the composite structure has potentials.
I've heard the albatross is the most efficient animal in the world,
in some sort of terms. Is there some sort of ratio between what
birds have achieved and the most efficient human wing ever developed?
Roughly speaking, the faster you go, staying below the transonic Mach range,
the more efficient wings become. Humans can make wings that go much faster than birds
so they are more efficient.
It's a bit like comparing lemons and pears. A human-built plane will
weigh more than a bird, but for an equivalent speed-to-weight
ratio, how are we doing in comparison to evolution?
Go down the scale from birds and bats, to insects. Everyone is
raving about flapping wings. Except they forget one thing, did you
try swimming in molasses. They need to do that, as there is no other way they
can fly. It doesn't mean it's efficient; it does mean it's the only
way to fly. For biology, sometimes, it's not a matter of flying the
most efficiently, it's flying enough to get your meals. Flying enough to
have the edge. For the Microraptor it was probably the equivalent
of a flying squirrel today, just flying from one tree to the next.
I notice bird wings seem to shed water with remarkable efficiency,
water off a duck's back?
Depends on the species. Owls can't fly in rain.
The wings don't function because they get water logged.
The cormorant needs to dry its wings , to fly well.
Some underwater-flying birds don't have oil, don't preen
their feathers with oil, because it makes them too buoyant.
In bad weather owls are grounded. Probably due to
extra weight and destroyed flight characteristics.
They have to wait until it gets dry. In Dutch we call them
church owls because we first noticed them appearing
in churches rather than barns.
Birds fly slowly compared to things we make. Considering
dimensionless numbers, is there something that characterises the
different regimes, slow flight v fast flight? Do wind turbines come
under the heading of slow flight or are they fast?
What characterises what the flow does is the Reynolds number:
it has the density, the velocity, a length scale, divided by viscosity.
Viscosity is water v honey. The bigger it gets, the higher the
Reynolds number, the more chaotic the flow becomes.
Or in terms of turbulence, the Reynolds number is the ratio between the
largest scale and the smallest scale: how complicated the flow gets.
So fly very fast if very small and still have the same
Reynolds number. Go very slow and be big and have the same Reynolds
number. So take an F1 car, scale it down, put it in a wind tunnel;
you need to run the tunnel faster, to match the flow conditions.
So swifts are at the wrong end as they are small and slow ,
but luckily there is a lot of interest in micro-air vehicles,
drones , UAVs .
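The scaling argument can be made concrete with the definition Re = rho * v * L / mu. A minimal sketch; the air properties and car dimensions below are illustrative assumptions, not figures from the talk:

```python
def reynolds(rho, v, length, mu):
    """Re = rho * v * L / mu : the ratio of inertial to viscous effects."""
    return rho * v * length / mu

# Air at roughly sea-level conditions (approximate values)
RHO_AIR = 1.225      # kg/m^3
MU_AIR = 1.81e-5     # Pa.s

# Full-scale car (assumed): 5 m long at 80 m/s
re_full = reynolds(RHO_AIR, 80.0, 5.0, MU_AIR)

# Half-scale model in the same air: halving L means doubling v to match Re
v_model = 80.0 * (5.0 / 2.5)
re_model = reynolds(RHO_AIR, v_model, 2.5, MU_AIR)
print(re_full == re_model)  # matched Reynolds number, matched flow conditions
```

The same arithmetic explains the closing remark: a swift is small and slow, so its Reynolds number sits far below that of aircraft, in the range micro-air vehicles and small drones also occupy.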
Monday 10 July 2017, Dr Tony Curran, Soton Uni : The Carbon Footprint of Food
36 people, 2 hours, audience interactive competition sections not transcribed
The burger apocalypse. A graph: on the x axis, how many tons of carbon you can
potentially save by doing different interventions in your lifestyle, to reduce
your impact on the environment.
So changing your diet saves about 2 tons of your carbon footprint (CF).
The average CF is 15 tons per year in the UK. The y axis is how much
money you could save. It's more than any other intervention, like changing your
transport or how hot your house is.
What would be your perfect burger, typically its a beefburger, by
sales anyway. The Heart-Attack Grill , in the USA, their slogan is
taste worth dying for. They use it as part of their marketing, that
people regularly get hospitalised, directly from their restaurant,
because of the food they eat there. They have the bypass burger,
very big. The double bypass burger, the triple and quadruple bypass
burger: 10,000 calories, 4 to 5 days' worth of food for the average human.
This is Las Vegas ,sin city, so they now do the quintuple burger and up
to the octuple burger. If you want bacon on it , not a couple of rashers but
40 added to it. When you go in you get weighed and if you weigh in at
over 25 stone or 160Kg you eat for free, the American dream.
Its become part of the UK culture, steak nights or burger challenges
and overconsume, especially meat. It didn't used to be this way;
now it's trendy. Leading to a lot of negative consequences, both for the
environment and human health and also financially.
The environmental argument. Beef cattle like lamb, are ruminant
animals, the digestion process means they generate a lot of
methane. About 34 times as potent a greenhouse gas as CO2.
It also takes up lots of land and lots of water, about 70% of
all the water we use in the world is for farming .
In terms of land, about 3/4 of all deforestation is driven
by forest clearing, often for soya production, to feed to cattle.
A graphic of the weight of all the animals on the earth, section
for all the humans, a much larger section for all the cattle we keep
for our consumption. Also the lambs, pigs, horses, and then marginalised
is all the wild animals. We've pushed out wild animals by monopolising the
earth for our own ends. Maybe we could reduce how much beef
we eat. There are some trends to move to a lower-meat diet, but compared to
2006, beef consumption is projected to go up by 95% by 2050, largely due to
a more affluent China.
Businesses and governments have a role to play, but individuals as
consumers have a bigger role. So: the ABC of low-C eating.
A = Avoid wasting food, about 1/3 of all food produced is wasted.
That is about the same in the UK or globally, total food waste, in the home and supermarkets, about
7 million tons per year in the UK. Globally 1.3 billion tons yearly.
How much do food safety laws impact on that statistic, sell before dates etc,
or the seller is liable to legal proceedings?
Yes directly and also indirectly as people will throw away food that is
perfectly healthy, just because of the date on the packet, I'll return to this topic.
Most of the food is wasted beyond the retail point, more in the home than
in manufacture or farms or supermarkets; about 20% is wasted in the home.
So if we go to a supermarket and buy 5 bags of shopping we effectively
chuck one of those bags into the waste bin.
Half a ton of CO2 equivalent for the food that is wasted. About 28%
of all agricultural land is used to grow the food that we throw into the
bin. Considering we hope to feed 9 billion people in the next few decades.
Again, huge amounts of water are used on this thrown-away food. Valued at something
like 5 billion GBP per year.
B= Buy in season food
The CF of a lot of fruit and veg can be 10 times higher in the off-season.
A lot of people say buy local, but from the research I've done, whether it is
in season is more important for the CF.
Take the example of bananas: they come from 5,000 miles away, but tens of
millions come on a single ship, and so low-C. The same with oranges from
Spain. So both are healthy things to eat all year round. Don't tell
people , not to eat them, just because they are not local.
Spin that on its head and consider strawberries: between April and Sept
they are in season. Fine to eat then: low CF, cheaper, tastier and perhaps
more nutritious. Out of season they are probably hot-house grown
in Kent, using artificial heat, so a way higher CF than bananas from
5,000 miles away.
April to Sept is a very long season; I live in a strawberry-growing
area. Growing them under plastic in the April and Sept months must be a
lot more expensive than just eating them in June and July?
If you use polytunnels, yes, it would add something to the CF, but
little compared to artificial heating. But it will extend the season;
the same with tomatoes.
Conversely, take the example of asparagus. It has a tiny growing season,
about mid-April to the end of June. A week ago in a supermarket, some was still
from the UK. From now onwards, all the varieties of it you see, all the
way to April next year, will be grown in Peru.
Because it perishes quickly , it will be flown in , so the CF out of season
is about 30 times higher. Enjoy them in season, then eat something else,
there is always something else in season.
What if the season is not July? I've constructed a food
seasonality chart, with each month of the year along the top.
Look down and see what fruits and what veg are in season in that
month. Place the chart, downloadable from
, on your fridge; the interactive games are also on that site.
C= Choose low CF food more
Its difficult to know , often , which foods are high and which low CF.
Between 20 and 30% of all greenhouse gas emissions are in growing
our food, or in the food system. It's the area where we can make the most
savings, relatively easily, without massive lifestyle changes.
70% of all the fresh water we use is for growing our food.
75% of all the deforestation is driven by land clearance for
agriculture. So if we change the kind of foods we eat, we don't waste it,
then we can make big inroads into reducing these numbers.
For many years I've been involved with economic development in
the third world. They earn a lot of money exporting products to the
UK. So if we moved to a more basic lifestyle, cutting back on imported
food, grown in the third world, we are reducing their economic take.
How do you balance those 2 things?
There is no simple answer. It's much more general than just
food. We are a global economy now and such things will have
consequences. One way we can reduce our effects on the environment is to
be a bit more local in our production. In my ABC
I'm biased towards seasonality rather than buy-local. If we have a
big transport footprint, millions of bananas on a boat, that
will be producing greenhouse gases, more so with flying.
Articulated green trucks , in the future, interesting to see how
that develops . In the near future thats not a solution.
We should not eat the same foods throughout the year , but have a seasonal
diet and then local products?
If we go back 100 years, or 70 years, wartime, nothing
was wasted. Most of it was local and self sustaining. People grew their
own food out of need, and it was a low-CF lifestyle.
Wouldn't it be great if we had urban food growth, self-sufficiency
in the local economy; it would be low CF, the green ethic.
That's true, and it would be nice if we had it, but it's not the
reality. We're moving to 9 billion people, becoming a more urbanised
population. We don't have the space and most people don't have the
inclination to grow their own food. Hence we will still be
dependent on a market system. We have to be careful how we do
that and mindful of the impact on other places. To some extent there will
be a local element and that must have an effect on other areas.
It does not have to be too big an impact as I say seasonality
is the primary changer. Bad working conditions in banana plantations
is an issue that must be tackled, but it can be a healthy
source of lowC food. Lets not cut that off , just because its not local.
Balancing the ?, factory farming produces much less CO2, but then
it also uses antibiotics, terrible animal ethics. Organic farming
uses much more land, more water and more CO2 produced. What otherwise, if
you want to eat meat?
There is no right answer; you can't tell someone where their
priorities should lie . A lot of people going vegetarian or vegan
, do so for ethical reasons and others for environmental reasons.
Its true, if you mass produce , especially something like chicken for
factory farming or for eggs. Or cattle in horrible indoor conditions,
that is unethical and low animal welfare. But the CO2 is lower,
so where do your priorities lie? If you want a low-C diet and want to eat meat,
then it would be better to go down the unethical route. The "if" at the
end of your question: we can eat less meat. The everything-in-moderation thing:
eat meat less frequently, then you could say you will eat free-range meat,
at a sustainable level, if you consume sufficiently low quantities.
I give talks on energy and general consumption, and I always have to say
to people: reducing our CF does not mean we'll be back in the stone age.
We want to reduce it to the level that is sustainable
, still 5 to 7 tons of CO2 per person, but it's not 15 tons.
Get your overall CF down to that level; it could be via lower
meat consumption that is ethically produced.
Theres a technology that may well be coming in soon, where on
brownfield sites around cities, they'll be putting converted
shipping containers and hydroponics inside to grow salad crops.
I can't fathom whether that is advantageous in CO2 terms
or the present large-scale growing and large transport costs?
The scientific answer is you have to do a life-cycle assessment
of this option compared to the current one and see which comes out
better. It's comparing apples and oranges. There is this argument of a
move to locally produced food and in theory it can be sustainable.
Lots of examples around the world of urban gardening or urban
farms. The community has risen up: we will not be dependent
on food coming in from other places, unknown inclusions, unknown
effects. Help to produce it, pick it when you want. I'm cynical
myself that they can scale up to feeding anywhere near
65 million in the UK, potentially 70-plus in a decade or so.
A dreadful cycle: we must feed this lot, more children, more
food required; where is the limit? If you are not prepared
to declare a boundary then there is an infinite line needing food?
Always touchy, what can you do about population growth.
It would be a really good way to overall reduce our CF, and impact on
environment. There is birth control and other ways of not forcing
people not to have children. For food there is easily enough space to
grow it, 1/3 is wasted, then there is over-eating.
We eat too much food, in the UK and globally. 800 million
people are clinically obese: a BMI of 25 to 30 counts as overweight, above 30
clinically obese, and above 40 morbidly obese. But 2.2 billion
are classed as overweight; whether based on BMI, I don't know.
The stats that global organisations are using now.
1 in 3 adults, many children as well . This includes the people
who don't get enough food. Perhaps partly due to lifestyles
, not enough exercise, but mainly due to excess food intake.
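The BMI bands mentioned above can be sketched as a tiny calculator; the cut-off values are the standard WHO-style thresholds the talk refers to, and the example person is made up for illustration:

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight divided by height squared."""
    return weight_kg / height_m ** 2

def category(b):
    """Bands as used in the talk (standard WHO-style cut-offs)."""
    if b < 18.5:
        return "underweight"
    if b < 25:
        return "healthy"
    if b < 30:
        return "overweight"
    if b < 40:
        return "obese"
    return "morbidly obese"

# Illustrative example: 70 kg at 1.75 m gives a BMI of about 22.9
print(category(bmi(70, 1.75)))  # "healthy"
```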
Some people work so many hours, there is stress, and no time to
make soups out of chicken carcasses. We used to do it, but have we
the time these days?
And the convenience lifestyle has led to quick food, which is often
highly processed.
In the war most of the men were away fighting, women in the factories, on a
massive scale. People did not have time then. The government had control of the
food supply. These days we can consume what we want, when we want,
and so we do.
So is food too cheap?
This was raised by Molly Scott Cato?, the Green Party MEP.
Food is too cheap, you could say. But it's another tricky issue. About
1 million people are dependent on foodbanks in the UK. But that means the other
64 out of 65 million have a high CF, because they are affluent enough
to be able to do so. Then all the knock-on effects: the health
service, along with lack of exercise. Built around consumerism,
a lot of issues.
I went to a do at Reading and there were loads of different insects to try,
but when I went online, only very small packets were available, for ten pounds or so;
what is the CF there?
Eating insects: there was a speaker here on that, Jenny Josephs. The CF is very low,
and they're nutritious for the protein content. It has the potential to be
sustainable. Also with some energy solutions; we're not quite there yet,
but the potential is really good. They can be stacked, so little land use.
Hardly any water .
So why are they so expensive?
Like anything early on in the market staging: not selling enough to be
able to produce at a cheap rate. We can talk about artificial meat;
meat substitutes is another one, insects is one option.
This is the impossible burger. It's a new meat-free burger. The difference with a
quorn burger: that is low-C but has an unpleasant taste, so it's not bought.
You don't buy a burger because it's a low-C burger; you want something
tasty. That's what will dominate people's purchases when it comes to food.
The impossible burger launched in US , last year. It is plant based.
After 5 years of research, they've got haem, as in haemoglobin, the
thing that gives it meaty texture, now harvestable from plants.
They even sizzle when cooked, the correct texture, burger-lovers,
meat lovers are tasting them and giving positive reviews.
This kind of thing can be the future: 1/8 the greenhouse gases of
a normal burger, hardly any water or land required.
Is it heavily patented?
I suspect it is. They're quite open about the ingredients:
potato protein, coconut. A small start-up company in San Francisco.
In regular burger joints it's about the same price as meat ones.
Not mass-produced yet, and there are competitors. Meat free
pasties etc are becoming more common in UK shops and becoming
more tasty, so we're getting there.
In Oz there is a chain of takeaway shops called Lord of the Fries.
They diversified from chips to burgers, but they never said it was all
mock meat. They've now admitted it's been vegan the whole
time they've been serving it. It's of course now a selling point
and they're expanding exponentially. ?
A nice point. Jenny Josephs, who gave a talk here; I was with her on
Saturday, where we both gave a talk at a festival. I did the general
stuff and she focused on insects. She did taste tests with meatballs,
sausage rolls: 50% pork, 50% mealworm or whatever, crickets etc.
People usually can't tell the difference between full meat and
meat mixed with insects, or prefer the insect ones for having more texture
or being just nicer. So don't knock it until you've tried it.
For a meat eater , more than 100gm of meat a day on average,
your CF is so much, for a vegan its about 40% of that.
More and more people are eating less meat these days, but still
a vast majority are committed meat eaters. We think only
3% of people are vegetarian , less than 1% are vegan.
Fish is a very lowC source of protein so the pescatarians
are only saving another 2.5% by going totally vegan, cutting out the fish.
You can save 12% by staying meat-eater but switching from
beef and lamb , the methane producers, to pork and chicken, the same amount.
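The relative figures above can be turned into a rough back-of-envelope comparison. A sketch using only the percentages quoted in the talk; the absolute baseline of 2 tonnes (the diet share of a footprint mentioned at the start) is an assumption applied purely for illustration:

```python
# Relative diet footprints, using the percentages quoted in the talk
# (heavy meat eater, >100 g/day, taken as 100%).
DIET_FACTOR = {
    "meat eater (>100 g/day)": 1.000,
    "swap beef/lamb for pork/chicken": 0.880,  # "save 12%"
    "pescatarian": 0.425,                      # vegan plus the ~2.5% fish saving
    "vegan": 0.400,                            # "about 40% of" a meat eater
}

BASELINE_TONNES = 2.0  # assumed diet share of a UK footprint, t CO2e/yr

for diet, factor in DIET_FACTOR.items():
    print(f"{diet}: {BASELINE_TONNES * factor:.2f} t CO2e/yr")
```

The point the figures make is visible immediately: the big step is cutting ruminant meat, and the last step from pescatarian to vegan is comparatively small.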
On fish and the actual CO2 evaluation: wild-caught are different
to farmed fish, with added nutrients. Is full account made?
The data is based on real assessments that have happened.
We won't know the exact CF of every single fish; it depends where it's caught
etc. But generally wild-caught fish have a very low CF because
they feed themselves, compared to chickens say, which need fields and
fields of soya and other grains just to feed the chickens.
Sending a boat out to catch a load of fish is not a huge amount of
diesel. There are fish that should be taken from farmed sources, halibut
is one species. Halibut is a popular fish, high demand. It takes a long time to
mature, 8 to 10 years. We've stopped intentionally fishing them
wild now , quite a lot as by-catch though. Good in many ways, it gives the
wild halibut the chance to recover. Farmed fish can be done at a
sustainable level, possible to factor in ethical measures, whether happen
or not is another debate.
Fish, farmed or in the wild, still produce the same amount of faecal matter?
They are cold-blooded, so their metabolism is much more efficient.
Cows are chucking away lots of their energy just to stay warm. It is an
inefficient process to get protein for us, using animals that use a lot
of the energy just to keep themselves warm.
Would it be advantageous to consume wild game, as compared to ? game
Possibly; it's never cut and dried, it depends on the specific situation.
My stock answer for this is: do it at a low enough level, then it's sustainable.
If it's sustainable in the ecosystem, those wild game have enough food
and all part of their normal habitat , then fine. But more often than
not, the population of humans , going up exponentially and dominating the
Earth, in the last 100 years, it tends not to be sustainable. So we have farmed
or mass produced alternatives to meet demand.
At the moment we are over-run with deer in the countryside.
People are talking about reintroducing lynx to try to keep the deer
population down? So a good argument for saying eat more venison?
Wolves in Scotland. The same with kangaroos in Oz.
Cutting down forests for soya production , is that the main driver or
cutting down for the sale of timber being the main driver?
Mostly the driver is agriculture. For some tropical areas it's for
logging, for the timber as well. But more often it's so the land can be
quickly cleared, to grow food, as that is where the money is.
Eating organic: I believe it is preferable, as artificial fertilisers
are cut out and soil is preserved. There is a question about how much you
can get from the land. Permaculture with mixed crops gets more from
the land than intensive agriculture does. Could you give more clarity?
The film Tomorrow, a recent film. It's basically saying our current
system is broken. Not sustainable; we're having impacts at all
levels. We need to reset society and think locally again.
Not just the food system: education, the economy, what we
spend goes out to big multinationals rather than local.
Permaculture is one aspect of that. A certain amount of land can be so
much more productive, but it's labour-intensive.
Again it comes back to busy lifestyle, it will come back to just 1%
of people who care enough to do permaculture and have a minuscule CF.
In my mind it will only be a small percentage, not the majority.
Permaculture is a nice idea, do it where we can.
With organics , yes an absence of fertilisers is a good point.
CO2 goes into the atmosphere from all our energy and transport use.
The methane the cows are producing, that the paddy fields are producing,
that landfill is producing: it's 25 to 40 times more potent than CO2.
The nitrous oxide from fertilisers is 300 times more potent than CO2.
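Those potency factors are what turn a mixed bag of emissions into a single CO2-equivalent number. A minimal sketch; the GWP values are the ones quoted in the talk (34 for methane, 300 for nitrous oxide), and the emission quantities are made up for illustration:

```python
# Global-warming potentials relative to CO2, as quoted in the talk
GWP = {"CO2": 1, "CH4": 34, "N2O": 300}

def co2_equivalent(emissions_kg):
    """Total kg CO2e for a dict mapping gas name -> kg emitted."""
    return sum(kg * GWP[gas] for gas, kg in emissions_kg.items())

# Illustrative (made-up) quantities: a little methane outweighs a lot of CO2
total = co2_equivalent({"CO2": 100, "CH4": 10})
print(total)  # 100*1 + 10*34 = 440 kg CO2e
```

This weighting is also why the next exchange matters: per kilogram, nitrous oxide dominates, but far more methane is emitted overall.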
I was told fertilisers were the biggest contributor within food production?
Not true. It's the biggest in the sense that 1 kg of nitrous oxide has
more global warming potential than 1 kg of methane emitted, true.
But overall there is a lot more methane emitted than nitrous oxide.
Artificial fertilisers are bad in that sense, but on the other
hand they allow food to be grown quicker, or bigger, or mass produced.
A second thing, not fully understood at the moment , is the soil.
Soil degradation, mass agriculture intensively, is damaging the
soil. Again OK in the short term, but like deforestation: short term we can
have fewer trees and we'll survive. It means our total ecological
footprint on the Earth is way beyond 1 planet Earth.
We know the oceans absorb a lot of the CO2 that would otherwise
cause global warming. The rest goes into the atmosphere and that is
driving GW. The third big C sink of GHG emissions is the soil.
That bit is not well understood yet, and intensive agriculture,
degrading the soil, is reducing its ability to store the C.
Potentially a big crunch point that will lead to runaway climate change.
No mention of GMOs. For example producing a better shelf life
and less wastage, or producing rice that requires less water etc.?
Again GM products are poorly understood. A bad time in the press.
Also a failure of science communication; badly communicated to the
public, and so generally the feeling is that GMOs are bad.
What might happen, and so a fear around it. But
actually it is scientific progress, solving real-world issues.
So developing a more resistant strain of a grain or veg can
potentially feed starving people, and do it where there may be a drought
or flood tendency, without losing a whole crop.
A great potential for GM. There will be legitimate counter
arguments and exceptions.
As consumers we need to accept more responsibility for what we
consume, be more conscious of environmental impact, but there is also
a role for marketing to be controlled. The BOGOF business
and continual encouragement to buy more all the time, and then
throw it out. Can you see legislation to stop some of this
over-marketing of things? The wrong-shape cucumbers thrown out
on the farm, even back in the 1960s?
There is a role for governments certainly. Brexit will mean we'll
lose a lot of the regulations the EU currently has.
So implications there for environment and bio-diversity.
There is also a role for culture. There is a slight shift in culture.
Competing supermarkets are attuned to this; one supermarket is
marketing funny-shaped cucumbers and bananas and making that
a marketing thing now. Some others are dealing with waste, Tesco
and Sainsbury's, there. In France the government has told supermarkets
they are no longer allowed to waste food; they have to do something about it.
Legislating to make that happen there. I believe personal
actions can make a bigger difference overall.
The environmental argument tends not to be the main driver
for most people, hence my interest in the Impossible Burger.
It's usually taste that is the bigger factor with most people,
more than ethics, more than environment, so nail that one.
Adopt some of my ABC measures of low-carbon eating and you can
potentially save something like 500 to 1000 GBP a year as a family.
Avoid the avoidable food waste (having no food waste at all is not
realistic). Buy in-season food and switch to lower-carbon foods more:
not extreme veganism, just a beef-to-chicken change, moderate
amounts etc. A recent study showed that if we reduced our meat
consumption, not to vegetarian or vegan, just to the level the WHO
says is healthy for us, that would cut our food CF by 1/3.
Going vegetarian would take 63% off your food CF, and going
vegan is just another 7%, to 70%.
When will we see the CF printed on packaging?
It's unrealistic unfortunately. Some people have done
red/amber/green markings, but that is more to do with health.
Consumers want to know if it's tasty, then whether it's
healthy or not for you, fat/protein/carbohydrate breakdowns.
Partly not enough appetite for it. Also too difficult, as it depends
not only on exactly what piece of fruit it is, but whether
it was in-season when picked, the time it was shipped,
whether refrigerated, the boat route.
How many smokers wanted to see death warnings on their fag packs?
Ultimately it is legislation, but I doubt there is enough political will
to make it happen. It becomes important to think of the health
arguments. There is a good correlation: cheaper foods
generally are lower-carbon foods. Also a reasonable correlation
between the healthiness of food and its CF. If you want to move to
a low-carbon diet that cuts out a lot of meat, it is almost certain to be
cheaper, but also healthier for you.
The carcinogenic properties of red meat and processed meat,
and other comparisons, can be made.
What is the CF of beer?
Beer's a funny one, and also is it vegetarian? They often use
fish guts/isinglass to fine the beer. There is an alternative
used by Budweiser etc. The CF of beer is not easy to
answer. You could do a carbon analysis of a particular beer.
It has grains in it. For 1kg of beef the CO2 equivalent is about 18kg,
for cheese about 12; go to chicken or pork, another third off,
about 6. Go to rice, it's about 4kg. Wheat comes in at about 1.3kg.
Beer will be higher than average fruit or veg in-season, but much
lower than meat or dairy products.
Fruit-based alcohol might be lower than beer.
Banana wine, the ultimate solution.
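Those per-kilogram figures can be put side by side in a minimal sketch, using the rough CO2-equivalent values quoted above (the swap_saving helper is mine, for illustration only):

```python
# Approximate kg CO2-equivalent per kg of food produced,
# using the rough figures quoted in the talk.
CO2E_PER_KG = {
    "beef": 18.0,
    "cheese": 12.0,
    "chicken": 6.0,
    "pork": 6.0,
    "rice": 4.0,
    "wheat": 1.3,
}

def swap_saving(from_food, to_food, kg_per_year):
    """kg CO2e saved per year by swapping one food for another."""
    return (CO2E_PER_KG[from_food] - CO2E_PER_KG[to_food]) * kg_per_year

# Swapping 20 kg of beef a year for chicken:
print(swap_saving("beef", "chicken", 20))  # 240.0 kg CO2e saved
```

It makes the beef-to-chicken point concrete: each kilogram swapped saves roughly 12 kg CO2e on these figures.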
Could there be a cultural movement towards beef and lamb
being considered a treat rather than routine, as a way of moving
people towards less meat?
Some chefs have hooked onto the concept of meat-free Monday.
I don't think it goes far enough, but one good thing about it
is it gets people realising they're not dependent on meat.
It's become normalised to have meat: if you have a meal
it must have meat. One of my main take-away concepts I emphasise
is to move away from beef. A beefburger is 3 times the CF of a chicken
burger. Move away from the methane-producing ruminants, beef and lamb, and go
to chicken and pork.
An average cow doing its thing in a field, eating grass, regurgitating it,
generates something like 300 litres of methane a day. Much the same
as the CF of a day's car use, just from a cow being alive in a
field for a day. Lamb raising tends to be on uplands and poor grazing land
not useable for anything else.
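A back-of-envelope check on that cow-vs-car comparison. Everything below except the quoted 300 litres/day is an assumed round number, not from the talk: methane density taken as ~0.66 g/L at room temperature, the lower 25x GWP figure quoted earlier, and a car emitting ~120 g CO2/km over a 40 km day.

```python
# Only the 300 L/day is from the talk; all other figures are
# assumptions for illustration.
litres_methane_per_day = 300
grams_ch4 = litres_methane_per_day * 0.66   # ~0.66 g/L -> ~198 g CH4/day
kg_co2e_cow = grams_ch4 * 25 / 1000         # GWP 25x   -> ~4.95 kg CO2e/day
kg_co2e_car = 120 * 40 / 1000               # 120 g/km * 40 km -> 4.8 kg/day
print(round(kg_co2e_cow, 2), kg_co2e_car)   # comparable magnitudes
```

On these assumptions the two come out at roughly 5 kg CO2e a day each, which is consistent with the claim.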
If we go back to a sustainable level and have lamb once a month,
as a treat, then maybe that scenic upland life is the reality. But if we have lamb in the
quantities we currently consume, there is the hidden
reality of tens of thousands of sheds for raising sheep. Certainly so
for beef and pork.
Is there anything you could use the methane for, if you could harvest it?
It's not feasible. It's one of those out-there ideas. When we consider landfill
and waste management, we do use the methane now.
It's sealed landfill these days, capturing the methane, and it generates
electricity, in relatively small quantities, but mainly it's not going
into the atmosphere. Changing what you feed cattle can
bring a reduction in methane of about 50%.
A specific mix of plant foods will produce a specific micro-flora
in the gut; the metabolic pathway changes and less methane is produced.
A common misconception is cows farting out the methane, but a higher
percentage is belched out. A bag over the rear end may be possible, but
the more necessary bag over the eating and breathing end is not.
Changing the diet of animals, you have to again consider the CF
of that alternative low-methane food.
A chef friend of mine, his dream is to produce vegetarian meals that
people would find captivating to look at and tasty?
Another way is via price mechanisms. Say at Glastonbury there are loads
of vegetarian and vegan options, but what I do like
is where the veggie burger or curry is a bit cheaper. That often
does not translate down because of mass-production issues.
Composting query. On a garden compost heap it breaks down, releasing
gases to the atmosphere. Compare to going to landfill and anaerobic digestion.
We have loads of people working on composting and also anaerobic
digestion as a future solution for waste management.
I don't really like it as a solution in this context, as it basically
legitimises food waste. Whereas high-carbon, human-grade food
should be eaten and not wasted.
Monday 14 August 2017: Dr Alex Dickinson, Soton Uni -
Engineering Replacement Limbs - a Global Challenge
19 people, 1.5 hours
Injured veterans and service people have raised the profile of
people who have lost limbs. This interest has allowed us to generate funding
for research into lower limb prosthetics (P).
But they don't represent the majority of cases. People who have lost
limbs through trauma or infection, such as Jonnie Peacock, represent only
about 20% of the population who've lost limbs. Diabetes and vascular disease
account for 80% in this country. So we try to learn as much as we can
from the highly functional amputees, to develop technologies to
help everybody. The clinical need runs from someone who has
just woken up from lower limb amputation to someone who
is fully rehabilitated. Still in 2017, the majority of P limbs
are designed through a process of plaster casting.
So a negative cast of the remaining limb is turned into a positive mould.
Then a series of rectifications to the shape, changing it in a very strategic way
to get a target load transfer. A below-the-knee amputation, trans-tibial,
shown in posterior view. The prosthetist (Pt) has a few target areas where they're
trying to load the limb. Feel around one of your kneecaps: the bony kneecap,
and then a bit farther down another bony lump, the tibial tuberosity,
where your quad muscles from the front of your leg join
onto your shin. Between those 2 bony lumps there is a soft
spongy bit; this is the patella tendon, where your kneecap
attaches to your tibia. That is a very load-tolerant area;
you can press on that all you like. The Pt makes a change of shape
so you can bear load there. They want to avoid bearing load on the
residual tip of the stump, because that is very sensitive.
In a GRP model tibia the load-bearing ends are relatively large,
but do amputation surgery, cutting through the middle, and the
cross-sectional area is much reduced. So you would expect the
pressure to go up. Also if you feel the inside of your arm,
it's similar to the skin on the back of your calf: very soft and delicate
in comparison to the skin on the palm of your hand or the sole of
your foot. The tissue is not designed to take the pressure of walking thousands of
steps a day. So it requires an experienced Pt to do this rectification
process. Take the positive mould and with a file or surform
remove material from under the kneecap, and then perhaps, with a tub
of plaster of Paris, build up material on the tip of the cup.
Once she is happy with the shape, she'll try a trial socket.
Polypropylene, still in 2017: a big sheet of it, in a frame, placed in
an oven at 200 degrees until the centre dips a couple of inches.
Place it over the mould, suck out the air via vacuum, so vacuum-forming
a trial socket. So you can see the indentation that will go under the kneecap.
A square cut-out at the back, so the subject can flex their knee.
Then an iterative process, by which the Pt gives it to the person it's
designed for and sees how happy they are with it. Much like snowboard
boots, with a heat gun you can make modifications to regions that are too
tight or are not pressing hard enough. The problem is, a lot of people
who have lower limb amputation from vascular disease lose
their sensitivity in the soft tissues, so they don't know they are
pressing too hard. So we make it transparent: as normal, when pressing
anywhere on human skin it goes white. But people with vascular
disease often lose that response as well. The result is that in the first year
after amputation the average is returning to your Pt 9 times.
That is data from across Europe. In the UK perhaps not so many,
given the difficulty of getting the appointment. The rehabilitation success rate
via this sort of process is about 50 to 60%. People with this setup
have a dilemma: do I tell my Pt there is a problem, or put up
with it, bearing in mind I'd be without my leg during the 7 weeks
of modifications. It seems wrong that in 2017 people are having to
make that kind of decision.
In 2012 I thought, as a mechanical engineer, how could I help find
ways around this. I was in the position many find themselves in at uni,
where I had to justify staying there, rather than a post-doc
role on an 18-month contract if you're lucky. The prof who took me through
the PhD was an expert in artificial joints. So in what other area might
the techniques I'd developed be useful? What tools do I have that may
be of use? The go-to quote, for mechanics, is from Lord Kelvin:
"To measure is to know, and if you can't measure it,
you can't improve it". So how much of what Pts do is actually measurement?
Extra data they could take out of the processes they are
using, so at least they have a record of it. What I thought was a brilliant idea,
I soon found others had thought of also. So CAD/CAM techniques
in Ps. We no longer draw stuff on paper any more. CAM is a collection
of technologies: CNC, Computer Numeric Control, a lot of conventional
machining methods controlled by computer. 3D printing/additive
manufacturing is the latest. Pts started to develop this in the 1980s,
but it was 2000 before use in any number. The Pt will use a scanner
to capture the shape of a residual limb, digitizing it, create a computer
model, then they can progress the rectification process in a CAD
environment. So the under-knee indentation they can create, and at the
front of the tibia they can remove material away from the limb,
so it's not pressing on the shin bone. Also the fibula head on the outside,
a nice structure that can be pressed, avoiding a nerve that passes over the
top of there. They can now make more accurate and quantitative changes to
the limb shape. Then CAM via milling: start with a large polyurethane
block, placed on a turntable, and a multi-axis robotic arm with a
rotating milling bit carves out, in theory, exactly the same shape
as in the design. These robots tend to be in their own room so the
dust generated is not inhaled by the operator. The robot does a rough
machining operation, including a sneeze function to clear dust.
So I looked at how we and the Pt could do more with this.
They would use these new processes but carry them out in the same
way as plaster-casting. They'll know the regions they are looking at
for changes. While they can make quantitative changes, they are still
like free-hand sketches on the limb shape. So we take an
acquisition, a scan; we could bring in 2 computer shape files,
represented as meshes: a series of points or vertices, joining the
points into triangles. The 2 colours represent 2 scans of the same
shape. Then we can do imprecise alignment by translation and
rotation, by hand. Then we can do a more accurate alignment
by iterating a process called closest-point matching. This gives an
automated process by which we can align the shapes. Being automated,
it is less likely to be subject to human error, i.e. I don't have to
be a fully trained technician to use it. You can see different regions with
more mismatch. If they were exactly the same shapes, all you'd see would be
noise, no mismatch. Then a final process called registration, where we
map one shape onto the other, allowing a point-to-point comparison
between the 2 shapes; we calculate the Euclidean error.
Pythagoras in 3D, RMS in 3D. We need to present the data in an
interesting or at least accessible way. We produce a colour map of the
shape deviation, where just 1 colour is slightly higher deviation.
We only have high errors around the interfaces, the places where there
is some human input, so the case for automation.
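A minimal sketch of that point-to-point comparison, assuming the two scans are already registered so that vertices correspond one-to-one (the function names are mine, not from the talk):

```python
import math

def euclidean_errors(mesh_a, mesh_b):
    """Per-vertex distance between two registered shapes, given as
    equal-length lists of (x, y, z) vertices: 'Pythagoras in 3D'."""
    return [math.dist(a, b) for a, b in zip(mesh_a, mesh_b)]

def rms_error(mesh_a, mesh_b):
    """Root-mean-square of the per-vertex errors: 'RMS in 3D'."""
    errs = euclidean_errors(mesh_a, mesh_b)
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# Toy example: the same 3 vertices, one scan shifted 0.3 upwards.
a = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
b = [(0, 0, 0.3), (1, 0, 0.3), (0, 1, 0.3)]
print(round(rms_error(a, b), 6))  # 0.3
```

The per-vertex errors are what the colour maps render; the RMS gives the single summary number.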
We used this in a project with archaeologists, comparing different
teeth. A P socket is more or less the same shape as a tooth.
How might the P community be interested in this research?
Were we using the right kit? The same technique with a state-of-the-art
scanner; relatively few NHS clinics have been convinced to use one
so far. It's a structured laser scanner, about 30,000 GBP.
Inevitably a barrier to it being taken up. We are then using something
very technological in clinics, compared to something that was very
tangible, manual and experience-based processing. So we need to
introduce such changes in a sensitive way, so it does not come across
that we are trying to replace the Pt's experience and skill. It has to
be a tool to allow application. Such scanners are usually deployed
on automated car production lines, scanning pressed panels for example,
so extremely accurate. So we thought we'd try characterising how
accurate. So we 3D printed a test piece, so we had a good idea
of its accuracy; at least we know what shape we sent to the 3D printer.
On a colour scale-bar of 0 to 1mm, 95% of the surface comes within 0.16mm,
comfortably more accurate than 1mm.
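That "95% of the surface within 0.16mm" figure is a percentile of the per-vertex deviations; a hypothetical sketch using a nearest-rank percentile (the function and the toy data are mine, not from the talk):

```python
import math

def deviation_at_percentile(deviations_mm, pct=95):
    """Smallest deviation (mm) that the given percentage of
    surface points fall within (nearest-rank percentile)."""
    ranked = sorted(deviations_mm)
    idx = max(0, math.ceil(pct / 100 * len(ranked)) - 1)
    return ranked[idx]

# Toy data: 100 simulated per-vertex deviations, 0.000-0.099 mm.
devs = [i / 1000 for i in range(100)]
print(deviation_at_percentile(devs, 95))  # 0.094
```

Run over real per-vertex errors, this gives the one number quoted for a scanner's accuracy.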
So an Amazon purchase of a 300 GBP scanner, but is it good enough?
With an extended colour map, now 0 to 3mm, the accuracy is about 1.5mm.
We can see a systematic error along the length, which is interesting.
We can correct for such systematic errors, but it's still important to
try and characterize what the error is. At that stage we did not know what
error is important. Prod some soft tissue on your hand: it takes very little
force to move it 1mm, so maybe 1.5mm error is good enough.
Secondly we thought it might help the centres that already
have this kind of technology to develop best practice in their
limb fabrication. So after they've designed the limb, does what comes out
of the fancy robot actually match the original shape? We can take the
shape we sent to the cutter, subtract the socket shape that came off the
mould and use the same colour map. We can then see how errors
manifest across the surface, and this is relatively reassuring.
In the concave regions we have a larger surface error, which gives some
confidence, as the shape is created under vacuum: release the vacuum and
some of the material will spring back. Around the periphery a "blue"
negative colour, some interference, causing the shape to spring back.
A sanity check, where we put some real physical data into our
computer programme. Also, can we try to inform socket design?
Instead of telling a Pt how to do it, we can take in large amounts of
data from previously designed sockets which have achieved a successful
outcome, and give them info on what is a good first-guess socket
for an individual.
So an example showing blue where it's pressed in under the kneecap,
and the red regions where the socket is larger than the limb, where
material is added to bear load. On a population scale we put in a
lot of stats; work just progressed into a recent paper.
What is the effect of the design on the soft tissues? These soft tissues
have to change the job they're doing, between the healthy pre-amputation state
and having to bear full body weight on the nice soft skin that
was on the back of your calf, not used to bearing any load.
How to change the tissue for this new job, to become more durable and
tough. Like learning to play the guitar and callouses on your finger-tips.
With that, if it's painful, you can leave the guitar for a few days and then
pick it up again, but that's not really an option here. Any pause in the
rehabilitation process affects other aspects.
So if there are different processes adopted by Pts, can we put some
evidence base behind the design process?
3 design processes a Pt might consider:
The Total Surface Bearing (TSB) socket, very little change between socket and limb.
The Patella Tendon Bearing (PTB), particularly relying on the press
fit with the bearing surface under the knee; we make focal changes,
technically a more demanding socket to create.
Not used so much these days, the KBM socket from the 1990s, a German
name. Some marked press-fit regions above the knee grip the limb
side to side, a much more bulbous shape around the residual limb.
So we can describe these shapes by looking at the outside of the limb
and the design of socket. We can use imaging to try and understand
what is happening inside the limb, how the limb is being
reconfigured by the socket design. Using MRI scan slices, showing
contrast between different soft tissues. The bones, residual tibia,
femur and kneecap, tendon, the layer of skin and some of the
muscles, some calf muscles wrapped around the end of the limb
and sutured onto the front, and the fat-pad on the tip of the limb.
In a well-established amputee the muscle starts to atrophy,
as it's not used in the established way, and transforms into fat-pad.
We can see the marks of the more marked rectifications in a couple
of cases. The KBM socket has manipulated the soft
tissues to move upwards and backwards.
Then the triangular shape of the PTB socket, pressing either side of the
shin so we don't load the shin. There is very little change of shape
with the vacuum-formed one.
So taking measurements to see what those changes of shape actually
cause. Unlike designing a piece of aluminium airframe that has been
heat-treated the right way, where we can say with a great deal of confidence
when it would fail. A lot of mechanical engineering is structural and stress
analysis. We know the stresses, compare to the material strength
and know if it's strong enough. Soft tissue materials vary dramatically,
vary person to person. Just because something does not actually fail
doesn't mean it will be comfortable. Some people have a diminished
sense of what is comfortable, and these people may be the ones we
have most concern about in having soft tissue problems.
So we also take a series of biophysical measurements, to understand the
effects on the residual limb in compression and shear, in terms of changing
tissue oxygenation for example. And inhibiting the lymphatic flow,
the way waste products are removed from tissues more distant from the
body centre. We just today submitted our ethics application;
even just self-experimentation needs ethics approval.
So as an engineer, who else might be interested in such techniques?
This is a global problem. There are predictions that by 2035 there will
be half a billion people with diabetes worldwide, disproportionately
affecting the developing world. 100 million people worldwide need some
sort of P device, and 90% of those don't have access to the services
providing them. The access problems include lack of funding
and infrastructure and also personnel training. Finding certified
Pts to provide these limbs is a real challenge.
We went to Cambodia, somewhere infrastructure may be
starting to be in place, and at least receptive to what we have in mind.
Could our crazy ideas be useful? Between 1975 and 1979
about 1/5 of the population died in genocide. The Pol Pot regime
determined that the population should return to their agricultural
origins, closing down all schools, universities and hospitals.
The borders with Viet Nam and Thailand were covered with landmines, to
stop the population leaving. Not just the people trying to cross
the borders but also the soldiers patrolling them ended up with
lower limb amputations. So homebrewed peg-legs, very basic,
but people wearing them every day. Even the soldiers, the majority,
prior to Pol Pot had been agricultural workers. Once they were injured
they had to return to agriculture, trying to work with 1 limb or even both limbs
missing. So not just walking the streets but working in paddy fields
for 12-14 hours a day. In the 1990s the Cambodia Trust was set up,
providing P limbs to anyone in the country free of charge.
A P limb broken at the ankle, held together by tape, turned up
at the clinic, was repaired, and the owner returned to very physical work.
They are all produced by a Red Cross unit in the capital, Phnom Penh.
Standardised limbs, an example passed around. So someone is injured, they
have medical treatment, they go home to a different part of the
country to convalesce with family. So you take your P limb with you
and can then walk into any clinic and have components replaced,
there and then. All the brown polypropylene components
of the limb are completely recycled. The condition that allows you to
take away a new limb is that you leave your previous one with them.
A large box at the rear of the factory with a lawn-mower engine on the
side, and a blade inside. It turns the plastic back into granule-size pieces
that can go straight back into the injection moulders, for new ones.
Nearby is the artificial leg and rubber processing company: 12 grandchildren
and a grandfather produce 500 P feet per week, the foot on the
limb passed around. These P feet have proven to be the strongest available
for use in a rugged environment. They start with various grades of
synthetic rubber, in sheet form. Roll them on a table to produce a
pre-form, then an injection-moulded nylon heel, and the rest from sheet
black rubber. Then squares of more flesh-coloured rubber around the
outside, placed in moulds and placed in an oven, then tidy up the edges.
This process runs continuously. 500 a week produced by just 1 family.
The first challenge is funding. One of these P limbs does not do the
job for the rest of your life. In the UK it's estimated at 1000 GBP
per year, per limb, for repair and replacement, for the
rest of your life. If you have govt or national funding, then economics come in.
For a given pot of money it's better to fund road safety measures, as now a lot
of the minefields in Cambodia have been cleared, and the main way people
are injured is road accidents. Road use is increasing dramatically
and exceeds the infrastructure in place. For people doing agricultural
work, the money they earn today is spent on food for tomorrow.
They can't simply take 2 or 3 days off work to go for treatment
at one of the 11 clinics across the country. There are many there who are
completely unaware that P limbs are available and schemes are in place.
Even medical doctors can be unaware of the services available.
There is a clear difference between what is considered medical
and what is considered disability. So can some of our developed
techniques improve access? So would the 300 GBP scanners be
sufficient to characterise the shape of the residual limb? And why
are we talking about this extra cost when a sack of plaster
can be bought for next to nothing? The Arts and Humanities Research Council put a
nice statement together on this, which summarises a lot of past
experience: many scientific and technical interventions
continue to fail, due to a lack of understanding of the social,
cultural and historical contexts and their likely reception
by the people they are intended to benefit. Some notorious
examples of this: the formula milk scandals of the 50s/60s,
where people sent formula milk to Africa. Then people were unable to produce their
own milk to nurse children. When the formula project ran
out, there was a famine. More recently, some of the ways the
Ebola outbreak was managed, without consideration of some of
the cultural, social and traditional aspects that were very important.
So we have to make sure that, just because we have a bit of tech that
works in the UK, if we find funding for other parts of the
world, it doesn't do more harm than good.
A lower- and middle-income issue, in general. We try to take an
interdisciplinary approach, to understand what the requirements are
in other countries, without making assumptions based on what
we have access to. Also sustainable business implementation:
tools and spare parts have to be considered. We have the mechanical
engineer, the physiotherapist for the scanning, a health-care
psychologist, a qualitative researcher. I was trained to be
purely quantitative. It's a process of understanding what people
really need; as a mechanical engineer I never saw that aspect.
Also an enterprise fellow in the faculty of health sciences,
who understands business modelling techniques, how a business
case can be built for this kind of tech.
We are in Soton; how do we test any ideas with the people who
really matter? So we work with the International Society
for Prosthetics and Orthotics (ISPO), and also the Cambodian School
of Prosthetics and Orthotics, the first fully certified by ISPO
as a training school in SE Asia.
(Orthotics assist the body, like a hearing aid; prosthetics
replace missing parts of the body.)
They now train the whole sub-continent, with graduates who end up in Africa,
the Pacific Islands and across S America. These people have the
influence to implement the ideas we eventually came up with, if they are
actually workable. We've been able to answer questions that
we could not ask in the UK.
In the casting process, the most important element is the Pt's
thumbs. He identifies the regions around the knee, around the
patella, the tendons. When they are roughly confident about the
shape, they press either side of the patella tendon with
their thumbs. Blinking between the 2 images, you can see the
shadow created by the thumbs. They are already rectifying
the socket when they are taking the original cast,
while the plaster is still wet. When they take the cast off
and look inside, the residual limb is covered in
cling-film; they draw around the regions of interest with a felt-tip
pen, which transfers to the inside of the socket.
A human is very much involved in this process, and whenever
we have a human, we probably have some variability.
A question answered in Cambodia, not available in the UK:
just how repeatable is the casting process?
So 2 clinicians took pairs of casts of a small group of volunteers.
So pairs of nominally identical casts. We feed them into our
shape comparison system. Repeat casts of the same person,
done one immediately after the other. We can start to see the 2
thumb-prints, rendered blue in the images. Then we can see the red
zone where material is added on the tip of the stump.
Also the red stripe down the front where we've added material,
pressing on the sharp edge of the shin, the bit that hurts when
banged against a table. So does someone get the same result, one time after the
other? So answering the fundamental question: how accurate do the
scanners have to be? How much do we need to spend on them?
So this is being used now in a couple of projects in Africa.
It's important that the Pts tell us the reliability of the tools they've
been given. These results are hot off the press.
So is there scope for these technologies in Cambodia?
In the small local market selling chickens and edible tarantulas,
a booth selling second-hand mobile phones. So you can buy a
reconditioned iPhone for 1/4 the cost in West Quay.
Their technological development bypassed the dial-up
period we went through. They've gone straight to 4G connection;
they've got better 4G than I can get in my house
200m off a main road in the UK. So access to data via
networks is far better. So we are trying to develop appropriate
data technologies around P and orthotic processes.
The interesting word is appropriate. The limb I passed around is
what I'd call appropriate tech. It's not the most advanced P limb
in the world, but it is appropriate to the communities in Cambodia.
So scanning systems that collect the right amount of info,
ways of presenting it in the right way, feeding info back to the
users. You can often now connect the 300 GBP scanners to your iPhone,
certainly to an iPad, and transmit back to the P clinic.
They can transmit back to the user, on their iPhone, the info
for care of the residual limb tissues and the socket: how frequently they
need cleaning, and how often is too often to be cleaning.
Reminders about the rehabilitation process, via audio and
video demos. Tell them how to repair their own P limb, so they
don't have to take 3 days off and return to the clinic.
The charity does reimburse people losing work to have to
return to clinics. Techniques like this might save them having to
return to a clinic 1 time in 3, an enormous improvement.
We're not just looking at the technologies but also ethnography,
human factors, what people need and need to understand.
Courtesy of EPSRC, 1.5 billion nationwide towards
global challenges research funding, which includes the work explained
here. We've applied for some more funding; so did 140 other
groups, and they expect to fund 6 to 8 projects.
Acknowledgements to colleagues and students, colleagues at
the Fraunhofer Institute, Germany, who were part of the
MRI study, part of a much bigger study. The clinicians
and participants in Cambodia.
If someone has had an amputation, do they ever add a
prosthetic that protrudes out of the body?
Dental implants is an area where they have the same challenge.
So we use the same process, called osteo-integration, attaching directly
to bone, used for knee replacements for a long time.
But you have something that goes through the skin, a wonderful
environment to cultivate bacteria and other things.
The dental implanters were the first area to try that. A 19%
infection rate, not just superficial skin infections but
nasty deep infections. If we had a 19% infection rate we'd
be in great trouble.
I thought it would be a good way of taking a lot of the load?
Yes, that and the feedback. A lot of the challenge in the rehabilitation
process is the feedback back from a prosthetic limb.
We normally have limb position awareness without having to see our limbs.
Being able to sense where a prosthetic limb is, or is not,
is a problem.
Connect directly to the skeleton and this feedback is very good.
Your skeleton adapts to load change, as well as the muscles.
That is why astronauts on the ISS lose some percentage of bone
and muscle. If you get an unexpected load, say falling over sideways,
the bone is not adapted for such loads and you can get a fracture.
I met a boxer with a pair of osteo-integrated limbs and he carries on
doing boxing training.
Does the phantom limb integrate with the prosthetic limb?
The osteo-integrated process has allowed that. I was at a 2013
conference where they presented the first surgery for upper limb
prosthesis control. We can control a hand movement by
EMG (electromyography) sensors, electrodes over the muscles.
You retrain the muscle groups that are otherwise of no use because of
the amputation. Stick the electrodes on the outside of the body
and use them for controlling opening and closing the prosthetic hand.
If you go outside in the cold, or the humidity rises, the sensitivity
reduces and there is a tendency to lose control. So sensors are placed
inside the body. Still a relatively small number of people.
I previously thought the matching process between the stump
and the prosthetic would be arranged so the pressures were
evenly distributed over the interface, but I gather that is not the
case. Not necessarily maximise the pressure, but increase it in some areas
and decrease it in other areas?
There are competing schools of thought. There is a further method.
With the scanning and plaster casting techniques, the big difference is
you are capturing the shape of the limb when it's not under any load.
So things will change as soon as it bears weight. So there are some
clever, relatively simple tools. You can vacuum cast or sand cast the shape
while it bears the load, over a dustbin of sand.
Is there anything coming from the area of animatronics, remotely
moving jaws and eyes etc for filmic purposes, but brought in to this?
Say someone is going through digging movements, then you can remotely
adapt, via pneumatic systems, and when it is in the right place, lock it
in position. Then go through the casting process?
People are looking at adaptive sockets with sensors that can
change the stiffness depending on the amount of load.
That is in a final socket; they are things the industry is working on.
Concerning the accuracies of the different technologies: you
get different results if you have a subject who walks straight
in from the cold, compared to someone who has been sitting in the waiting
room for 15 minutes.
Is that a repeatable change? Would different people going through the
same change of environment have the same reaction?
Too many variables. From an experienced prosthetist, concerning a
subject with 3 new limbs produced: they compensated for limb loss
by adding socks to the gap, so prosthetists talk in terms of
number of socks. With each of the 3 limbs, they needed 4 socks to
manage the pressure. So what was going on? They had diabetes and were on
diuretic meds. They lived a 3 mile car journey from the centre, so they
did not take the diuretic before getting into the car, so they were larger.
So mechatronic control, to get the socket to adapt to the
limb. That is the area for people with very expensive private
healthcare. That's where a lot of the exciting engineering seems to
happen. You can spend 30,000 on a limb but if the socket is not right ...
Is there a system, not strain-gauges as such, but a mesh of perhaps
thousands of small-resolution strain elements on a flexible membrane,
that can form into the 3D shape? Place that in the interface and remotely
monitor with the subject walking or jumping, sitting or standing up from
sitting, or whatever?
This was in a PhD paper only last week, so I'm not allowed to
say too much, but it comes down to how few "strain-gauges" you need for the
result. We as engineers would be comfortable with that, but not so
a prosthetist. So how to optimise the amount of data that comes out of
such a system, and how you present it to a busy operative.
There are adaptive polymers that change their stiffness according to the
amount of current passed. Or sockets made of 4 or 5 arms, then webbing
straps with tension bands, that control the bulk stiffness of the socket.
But the sensing inside, and when to change the settings, is the complex bit.
Are these fabrication techniques being used in our local hospitals or
just very specialist centres? Are the scanning techniques readily
available now?
They are starting to get some momentum. The techniques were developed
in the 1980s, but only in the mid 2000s did they start being used in real
numbers. A lot of prosthetists see this as getting a worse result much
faster, so there's a lot of training and a learning curve. Also a sense of
threat, jobs replaced by computer. So: if I know I can get a pretty good
result by plaster-casting, from doing it for 20 years, why would I put a
lump of tech in between and get a result of unhappy clients for 3 months
until we can sort out new problems? One of the ways we think we can use the
emerging data is to soften this learning curve. Help people to understand
how one process they did went well and another thing they did was not
successful. Most of the clinics in the UK are starting to
have one of these scanners. Many send the scans to a fabrication
company. So I'm involved with seeing how accurate the scanners are,
what amount of accuracy is required and what is good enough.
Wouldn't it be better to have the subject on a turntable, and a static
scanner rather than hand-held, keeping a constant distance and reducing
the variables before auto-stitching the images?
Yes. One of the early scanner versions was a halo with 7 or 8
cameras around it, moved over the limb. These single scanner units have
caught on though. The auto-stitching is done on the laptop that
powers the scanner, no requirement for greater processing power.
My first PhD student is looking into how to get it to work on an NHS
laptop. A lot of the things we were doing would only run on a
supercomputer, great for us in getting published, but ultimately our work
must result in something clinicians can use.
Monday 11 September 2017, Joy Richardson, Soton uni : The Future of Automated Driving
There will be 3 of us speaking; I will do a brief introduction to our team and the research we are currently doing. Jed Clark will talk about trust in driving automation and James Brown will talk about the development of SUDS Southampton University Driving Simulator. We will aim for 15 minutes each.
44 people, 1.7 hours
We're a multidisciplinary team within the transportation research
group. We work under Prof Neville Stanton. We have a range of
backgrounds including psychology, engineering, computer science,
software, neuroscience and design.
Human Factors Design is about improving human performance and
systems, especially with the introduction of new tech and
automation. We also analyse accidents and make recommendations for
accident reduction in the future. Our research encompasses
aviation, defence, energy distribution, maritime, medical, nuclear,
road and rail transportation, and oil and gas production.
Human interaction with tech cuts across all these domains.
Human Factors methods can be used to analyse and make
predictions about the performance of individuals, teams and
systems in any domain. The inputs gained from people interacting
with tech can be used to design better systems and ways of
working in the future. Our research helps inform the human/
machine interfaces in things like cars, planes, helicopters and
submarines, also the design of control rooms such as
production or military control rooms. The organisation of teams,
and the information flow within teams, can be informed by our work.
The ultimate overall aim is to improve
safety. We work with industry and other institutions. Our work has
many practical applications, not just research; things do end up
in cars or control rooms etc in the real world.
One piece of kit in our dept is SUDS, the Soton Uni Driving Simulator.
A high fidelity simulator, based around a Land Rover Discovery.
This simulator allows us to test new tech in vehicles in a safe
environment, rather than on real roads, which is dangerous.
We can try it all out in the lab, where it's very safe.
The simulation software allows us to build different types of routes
for different studies. We've recently developed a route that goes
from Boldrewood (dept base) down the M27, back on the
A27, to Mansbridge by the Fleming Arms.
In a current project we are developing systems to reduce
traffic accidents in the developing world. So we're mapping
areas of Hanoi and Bangladesh to build into the simulator
software. The hardware and the software around the
driver allow us to collect loads of data: an eye-tracker which
records the gaze of the driver; response times, tested
via a variety of cues; standard vehicle telemetry; audio and
video recording of the driver and the driving.
Another project is an eco-driving project, targeting
significant reduction in fuel consumption and emissions
in passenger and light road vehicles. So looking at interfaces that
may encourage people to drive more efficiently.
Also human behaviour during highly automated driving,
the human/machine interface, driver state monitoring
in highly automated driving, predicting real
world effects of such driving, and looking at the legal
and marketing perspectives.
So myself and colleagues here, James Brown and Jed Clark,
all work for a Highways ? project, in collaboration with a team
at Cambridge Uni and funded by Jaguar Land Rover.
So we are researching problems with interfacing drivers
with automated vehicles, above level 3 automation.
There are 7 levels of this automation. In level 3
the car can drive itself a lot of the time and, when it is doing so,
the driver does not have to pay attention. At the next stage the
driver can do something like read the newspaper, play games
or watch a film. However the car will have some
limitation situations and will require the driver to
take control, at roundabouts or sharp turns, and it knows in
advance that it will have to hand over to the driver.
So we are developing solutions as to how the car gets the
attention of the driver, who is doing something immersive, back in a
safe and timely manner. Hopefully JLR will end up with a clear set of
models, methods and guidelines, involving prototypes.
We have a cycle of establishing the problem, designing a solution
and testing. Currently we are using SUDS to test 4 different ways for the
automated vehicle to hand over to the driver.
58 participants seen so far, probably 75 in total.
We test the different scenarios on them, see how they react
and find out what people think of them. The results will then
be transferred to the JLR test track at Gaydon?, leading to a test vehicle
for testing on public roads, and hopefully progressing to JLR production.
I'm a senior research assistant at the uni; my main role is running the
driving simulator lab. The Discovery was lent to us by JLR, and we needed
a high performance simulator for human factors research.
It needs to be realistic: the testee needs to feel he is inside a real
vehicle. It needs to be adaptable and configurable. We need to specify the
kind of driving environment and be able to record a lot of information.
So we record control usage, eye-tracking data, video footage.
The software will output a lot of data.
We log all the driver interactions; all the controls are logged, anything
they do in the simulator. The software we use is called ?3; it allows us to
customise scenarios. We can design the roads that we are driving along.
We can set up different things like somebody jumping out
in front, different on-road scenarios, all of which are customisable.
The data is logged as CSV files, so it can be input straight
into spreadsheets to do calculations. The environment is augmented
by creating 3D objects from the Southampton routes, to increase
the realism effect. The parameters are set for the Discovery model
but we can simulate pretty much anything outside of
articulated vehicles. We have the open module, a DLL (dynamic link
library) in DB6, to extend the functionality by writing new code that can
then be integrated with other software.
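Since the simulator logs land as CSV, they can be post-processed with a
few lines of code as well as in a spreadsheet. A minimal sketch: the
column names (time_s, speed_mps, brake) are hypothetical, as the real
log layout depends on the simulation software's export settings.

```python
import csv
import io

# Tiny invented sample of a simulator log; real logs would have many
# more channels (steering angle, gaze position, etc.).
SAMPLE_LOG = """time_s,speed_mps,brake
0.0,13.4,0
0.1,13.1,0
0.2,11.9,1
0.3,10.2,1
"""

def summarise(csv_text):
    """Compute simple statistics over a logged drive."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    speeds = [float(r["speed_mps"]) for r in rows]
    braking = sum(1 for r in rows if r["brake"] == "1")
    return {"mean_speed": sum(speeds) / len(speeds),
            "brake_samples": braking}

print(summarise(SAMPLE_LOG))
```

The same calculation done in a spreadsheet would just be an AVERAGE and
a COUNTIF over the relevant columns.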
Some of the customised 3D models: the City Gate building at Swaythling
for instance. Starting in Inkscape?, applied to a 3D model in Blender?
software. We export those and place them wherever we want,
scaling and positioning, so all very customisable.
The car is a 2015 Discovery Sport, with 4 projectors in total;
3 at the front give a good wide-field view. Customisable LCD display
rear-view mirrors. Also turntables under the wheels, as we don't have
rollers, as it's not actually moving. So when you steer it would otherwise
wear the tyres and floor. We have a pneumatic system for the brakes, as the
engine must not be run in the lab environment. The brake master-servo
would not be running, as there is no suction from the manifold.
Using the lab pneumatic system we run that through a venturi
to provide the pressure.
Q: Normally there are a lot of signals that would be flying around the
car, not there as there is no running engine. How are you getting
complete behaviour?
You are referring to spoofing. The fact the engine is not running causes
errors; there's a section later I'll cover on CAN bus.
We have some webcams recording video data, pedal actions
and whether the driver is doing deliberate secondary tasks, something
other than driving. So we need to look at the driver while he's doing
that, to give us more info. So CAN and CAN bus: CAN stands for Controller
Area Network. This is the system that we plug into
to get the control data from the car.
It was developed by Bosch in the 1980s and is now in pretty well
all modern cars. Since 2008 it has been mandatory in the USA
to have a CAN in their vehicles.
It allows multiple electronic components to be connected together
on the same bus, so saving a lot of wiring in the wiring loom.
Accessed by the OBD2? socket, normally under the
dashboard, used for diagnostics and more, often seen in
garages to plug into a laptop.
CAN is fundamentally 2 wires, white and green, as a twisted
pair. Messages are sent by changes in voltage: 5V represents a 1 and
making a step change transmits a 0. These are combined into a frame which
has 2 parts: the ID, specifying the priority of the message and the
component involved, and 8 bytes of data, which can contain multiple
pieces of info, maybe 8 pieces, using 16 or 64 bits of information.
This variability makes it a bit difficult to interpret. We wrote some
software to do this.
With CAN, all components can communicate with all others.
So why connect lots of different systems together, say the ABS
system to the aircon system? They don't need to talk to each
other, but as all are connected to the same bus, if they need to talk
they will listen. On a relevant message on the bus an individual
component will respond; if not relevant it is ignored.
High priority messages will over-ride any low priority messages.
So if the wheel system is sending out to the ABS system, that is
a high priority message; something to the aircon would just be overwritten.
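The priority scheme just described is CAN's arbitration rule: when
several nodes want the bus at once, the frame whose ID is numerically
lowest (highest priority) wins and the others back off. A minimal
sketch; the IDs and payloads here are invented, not from any real car.

```python
# Sketch of CAN bus arbitration: the pending frame with the numerically
# lowest ID transmits first. In real CAN this falls out of the bit-wise
# dominant/recessive signalling during the ID field.
def arbitrate(pending_frames):
    """pending_frames: list of (can_id, data) tuples ready to transmit.
    Returns the frame that wins the bus."""
    return min(pending_frames, key=lambda frame: frame[0])

# Invented example: an ABS frame out-prioritises wipers and aircon.
frames = [(0x2A0, b"aircon"), (0x0A0, b"abs"), (0x1F0, b"wipers")]
winner = arbitrate(frames)
print(hex(winner[0]))  # 0xa0, the ABS frame goes first
```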
How do we use this data feed? We use an Efad? to CAN interface,
and Arduino microcontrollers with appropriate shields that allow us
to interpret the data. So 2 CAN nodes and a Windows interface allow us
to test the system and software. The data comes out as hexadecimal
strings, and we input data in the same format.
When we want to get the control data from the car (steering data,
throttle, brakes), all that is on the CAN, but we need to know
where that data is. If you plug into a car's OBD2? to read CAN data,
there is a lot of hex and you need to know where everything is.
So there's the CAN look-up database; that is usually proprietary
information, not something car companies would release to the
general public, but we've been given it for research purposes.
In amongst that data is the ID of the component node, the start
bit, the number of bits, the endianness (which is the byte order),
and the scaling and offsets. So: from hex, convert to binary, go to the
start bit, count off the number of bits, taking the endianness into
account, to get an integer value, then apply a scale factor and offset.
Scaling is sometimes necessary if the value is slightly larger or smaller
than it needs to be. If something is not zero at rest, you can apply
that offset.
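The decoding steps just described can be sketched in a few lines. This
is a simplified little-endian reading (real CAN databases distinguish
Intel and Motorola start-bit conventions), and the steering-angle layout
used here (start bit 16, 16 bits, scale 0.1, offset -780) is invented
for illustration, not taken from any real look-up database.

```python
# Sketch of decoding one signal from an 8-byte CAN payload: locate the
# start bit, take the bit count, respect the byte order, then apply a
# scale factor and offset.
def decode_signal(payload, start_bit, length, little_endian, scale, offset):
    order = "little" if little_endian else "big"
    raw = int.from_bytes(payload, order)          # hex bytes -> integer
    value = (raw >> start_bit) & ((1 << length) - 1)  # extract the field
    return value * scale + offset                 # physical units

# Invented frame: 8 data bytes as they might arrive as a hex string.
payload = bytes.fromhex("0000841E00000000")
angle = decode_signal(payload, start_bit=16, length=16,
                      little_endian=True, scale=0.1, offset=-780.0)
print(round(angle, 1))  # 1.2 (degrees, in this invented layout)
```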
I wrote some software that does all this: data comes in from the
Arduinos or from the ??, it interprets it and gives an integer value
for steering, throttle, brake and any other relevant controls
in the car. We send that on, and we have essentially
a 2 ton, 40,000 GBP joystick. The software reads the output from that.
Spoofing: if you have a car that's not running, it will be throwing out a
lot of errors, even the instrument cluster saying the ABS is not working
and the tyre pressures are wrong, because it defaults to thinking
everything is disconnected, which of course it is. I thought about
sending individual bits of data saying everything is fine. But a
colleague suggested: why not record data from a normal car, with its
engine running, and send that in? So we save it as a CSV and stream it
in, and it works brilliantly.
It's all now fully functional and being used for research.
I'm a post-grad researcher looking at how we interact with
automated tech. At the end of the day, when we create these
systems, we're the ones using them, and we find the weak areas of our
systems. Joy touched on levels of automation.
So a bare-bones vehicle is level 0, with no automation at all.
Level 1 is where 1 thing is functioning; that could be automatic braking
or cruise control.
Level 2 is where there is more than 1 system that can work together and
work for you. Our interest is levels 3 and 4, when a car can drive itself,
but in the near future we still need a human in that system,
to take over if the situation goes outside its comfort zone
and we have to take back control. An incredibly difficult
problem to solve. How close to actuality is it all?
A couple of strategies that manufacturers are taking currently.
One is skipping this human control element completely
and going straight for full automation, because a human having to
drive with a robot is very difficult. Google and Tesla
are doing this. We and JLR are interested in levels 3 and 4.
Self-driving on the highway by 2020 in the UK; one company is looking at
2018 for automated driving on the roadway. It is legislation that stops
us; the Vienna Convention allows us to test the tech from 2014 onwards.
I don't know how this represents us as consumers at the moment.
Volvo is about 1 year behind or so. Renault Nissan is looking at
2020 for urban conditions. Highway is quite simple: lateral
positioning, longitudinal positioning, braking and
overtaking. But in urban environments there are pedestrians,
crossings and junctions; 2020, I'm not convinced.
When we talk of humans working with tech , how much
do we trust this tech and how much can we work as a team
with our driverless vehicles .
[Audience participation / group discussion section, involving
5 sorts of tech and our trust of it]
GPS: crossings change to roundabouts or vice versa, or something is not
right. Isn't it unreasonable to expect 100% accuracy? If I go on a 100
mile drive and it gets 1 of the junctions wrong, I'm not going to say
it's unreliable based on just 1 error. But if a system is designed to
avoid pedestrians and only hits one of them on the way, then that might
not be acceptable.
I will assume it is 100% reliable, for this philosophical discussion.
You are in control of a runaway train, let's say. It is definitely going
to hit 5 people if I don't do anything, but if I pull a lever it will
hit 1 person. What do you do? By extension, let's get to the
stage where all cars communicate with each other, and they can perform
a decent simulation of what is going to happen. Let's say that your car
knows that by ploughing into the truck ahead, it will save the lives of 5
people, but sacrifice you.
You will want the tech that you've bought to look after you, but these
decisions are made so quickly that there is an element of philosophy
behind the point.
With anything that gives predictions rather than certainties, you will
always have philosophy involved. The lesser of 2 evils: there will always
be this question. For science it is hard to give clear cut answers on that.
Does my automated car need a morality dial?
The smart phone: we trust that it works, but it's loaded with
apps. How much do you trust them? Loads of different systems in one
piece, not necessarily of known and trusted provenance. A lot to do with
the reputation of the supplier of the apps, not just the tech itself.
A lot of the behaviour of any software system is intangible.
So more generally, how do you trust something whose operation
you cannot sense or see? Cars are a bit like that.
Airline pilots: well trained, using proven concepts. They are already
assisted by computers, but there is still a human element.
Airlines had automation trialled in the 1970s, at the forefront of
automation, but it is still integrated with a human.
There are still big problems when handing over from the automation
back to the pilot; lots of accidents happen at that point.
In aviation, you have to sense everything through something
else: you can't just look out of the cockpit window to determine height
over ground, say. You always have an instrument to go through.
Autonomous cars are a bit easier on that front, but it is something that
has been bugging us for decades. The pilots are not continually
monitoring the automation, and when the automation kicks out and
hands over to the pilot, he has to gather all that data quickly,
and not always quickly enough. It's exactly this with driverless vehicles
and hand-over. We learn a lot from aviation studies; a lot
of my work is how to apply such principles from other sectors
to autonomous drivers.
Smart toaster: do you trust your toaster? How does it know
about different types of bread? No one who has had a toaster
catch fire will ever trust a toaster again.
Driverless vehicles: I'd trust one on the motorway, as there are not too
many variables. Tricky in the urban environment. That's very true as to
where the tech is at the moment.
Loads of reasons why you trust and don't trust various tech.
Trust, in terms of technology, we talk about in a specific way:
you and the tech achieving a common goal. Trust is whether you think that
common goal will be achieved, within the constraints of uncertainty.
Taking the airline pilot: we have a lot of empathy, we know exactly
how humans work. Even if our trust shouldn't be placed in a human
per se, because they do make errors, perhaps more than autonomous
systems do (which is why we are pushing forward this tech), we find it
so easy to trust humans. That is characterised by being certain about
his/its actions and whether he/it will take care of us.
We are more likely to trust things that are less likely to harm us.
So the toaster: I trust my toaster to do its job, because not many
variables are involved, and it's unlikely to kill me unless I'm stupid;
it could only burn my bread.
But autonomous vehicles could kill other people.
That is the bare bones of the arguments around trust.
Trust comes via the following factors.
A natural tendency to trust or mistrust tech, or even other humans;
it's something that interacts with the tech and it's about you as an
individual. Then organisational trust, built around the reputation
of a product, like the apps on the smart phone. Fears concerning
data are brought about in a very organisational sense,
or social sense. We fear tech based on the reputation it's getting,
and gossip. The culture we live in has a massive impact on this
trust: the norms and expectations. Eg as scientists we are certain
autonomous vehicles will eliminate the human error element
of road-traffic collisions. At the moment 95% of RTAs
are down to the human driver. If we can eliminate that 95%, that will
be great. So if we want to push this tech into the future,
our expectation eventually will be that if you don't use AVs, you are
unethical. Society as a whole will make you want to trust it,
even automatically trust it. A strange collection of all these things
comes together to give a level of trust in AV tech.
A model comes in on the trust element: whether you intervene, or you
trust it to go its own way. If you leave it to do its job, there is a
chance an error will emerge in the system and you are over-trusting it.
Conversely, if you don't use the AV when you really should, again you,
as the human user, may bring about an RTA yourself.
On the x axis, how much you SHOULD trust your tech; on the y
axis, how much you ACTUALLY trust it.
We expect the two amounts to be equal, on the diagonal line;
that is called Calibration of Trust. That is the ultimate
sweet spot that we want, as then you are working as a team
with the automation, intervening when you think it will
crash, leaving it to do its job when you should.
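The calibration-of-trust model above can be sketched as a tiny
classifier: x is how much the driver SHOULD trust the automation, y how
much they ACTUALLY do; points near the diagonal are calibrated, above it
is over-trust, below it under-trust. The 0-to-1 scale and the tolerance
band are assumptions for illustration, not part of the talk.

```python
# Sketch of the calibration-of-trust model: compare warranted trust
# (should) with reported trust (actual), both on an assumed 0-1 scale.
def classify_trust(should, actual, tolerance=0.1):
    """Return 'calibrated', 'over-trust' or 'under-trust'."""
    if abs(actual - should) <= tolerance:
        return "calibrated"          # on or near the diagonal
    return "over-trust" if actual > should else "under-trust"

print(classify_trust(0.9, 0.85))  # calibrated: working as a team
print(classify_trust(0.3, 0.8))   # over-trust: leaving it alone when
                                  # you should intervene
print(classify_trust(0.9, 0.4))   # under-trust: intervening when the
                                  # automation is doing fine
```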
Q: Your definition of trust includes vulnerability. You don't
have a value in there that concerns hackers, and the converse: I
don't want to run over too many people, but a terrorist hacker wants to
run over as many people as possible?
The way I'm discussing trust at the moment is how we use it
as we're driving. This is personal trust in its capabilities.
The trust you are referring to is how much we would trust it
as a whole. They are both interlinked; unfortunately this model
excludes that. It does get more complicated going out to that level.
Q: Are you concerned about someone who behaves as if he trusts it,
but they are coerced into that behaviour, in that they would be penalised
otherwise, versus actual trust, their inner conviction?
We are very different in our trusts. You are worried about the ...
Q: You talked earlier of the norms that society would
come to have. None-the-less, as an individual I might be
behaving as though I trusted it, but it's a grudging attitude?
That's one of those things that we as designers, especially human
factors researchers, have to consider: people like that.
We all have these deep concerns and they're all valid, and I
feel as though, if we can create a system in our lives that
has the right level of trust, then your level of trust
should also be accommodated. How we design that in: I think we
could design it to factor in all our levels of trust.
Q: My take on what you have said is, you are assuming you
could persuade me?
Yes, as an engineer I like to think I can create something that
is suitable to the masses and beneficial to society at the same
time. It's my goal to at least persuade you.
Q: Almost 50 years ago I was doing a motorcycling course.
Half the course is about what you should be doing, but it was instilled
into us how vulnerable we were as road users. So approaching
any situation you have to almost forget what the driver
ought to be doing and reflect on what they might do.
In any transition situation you have both computer and human
driven cars on the road. The computer has no way of telling
whether the car approaching you at a junction is driven by
a human, who might come straight out without looking, or a
computer that will automatically stop. Is that one of the
problems of the interface between the two?
This opens up a massive can of worms: different categories
of automation having to react to a human or a robot.
There is modelling that says that it doesn't work; it makes things a lot
worse. You want a situation where you have full automation or
full human control. It's all very difficult in between.
What we are trying to do is create an interconnection of
all AVs so they can talk to each other and to road signals;
that is the idea. So theoretically we should know whether the
approaching car is an AV. Whether that will go ahead, I'm not sure,
from a research perspective. The cross-over will be very difficult.
We're not certain how it will be done yet; it's an area we are
definitely looking into. Loads of ethical questions come in there.
Q: Assuming everyone does trust it, and we reduce the 95% of RTAs.
I had one of my lodgers check some site management systems and he
found 1291 errors in it. How likely are we to have AV accidents
due to software errors?
I read a handy paper about the errors that happen at each stage of
design. From a design perspective a hell of a lot of errors do go in,
so you have to factor this in. So you are saying we may eliminate 95%
but we will add on others. The way I've delivered this, yes it sounds
great, but you are right, a valid concern.
Our AV guys are looking at the learning software, treating it as a
system that can learn; DeepMind / heuristic maybe the terms. I
think that will be the solution: you need to allow it to learn.
Q: With cameras involved on AVs and you have severe weather,
such as slush and ice, what happens?
With the tech we have now, we have problems with the weather, and it is
one of the biggest factors that we can't overcome yet. We have
researchers looking at different ways of sensing the environment. All
these things are difficult to solve. I cannot stand here and say the
answers are there, or even that we can do these things in the near
future. If we have AVs coming out in 2018 or 2020 I'd want the
technology to be above that.
We as a research group are not concentrating on the physical aspects
of AV driving.
Q: The legal system might put limits on the driver, such as: you're
responsible no matter whether you are using a robot system or not,
and these are the consequences if you are not responsible?
Many manufacturers of above level 3 have said that the technology will
be their fault if there is a crash and they will take liability. With
level 3, I really should find out about that one, as it is part of my
research compass.
Q: There are 2 modes possible in urban AV driving. The car decides its
senses are compromised, it can't make safe predictions, and so the user
must assume control. And also the user may independently drive, because
they wish to assume control, because they don't trust it in some
particular circumstance. So juggling those 2 different modes?
That's the calibration of trust essentially. With 2 humans working, you
can share responsibility, taking the blame for things; but can tech take
the blame for things, based on the interaction you've got with it? Much
more complicated. A couple of examples where our trust falls out of line
with automation. Most automation failures come from aviation, just
because they've been there longer. The auto-throttle from the San
Francisco landing: they came over a mountain, not taking into account
how much their velocity would change coming in to land. The
auto-throttle did what it was supposed to do but the humans did not
understand it, but you can also trust it too much. Another sector,
railways: speed restrictions, and drivers like to think they can push
speed limits based on their own experiences. But the automation does
know better in this scenario, and many railway accidents have occurred
due to over-riding of speed warnings. In the 80s it got to the point
where operators would tape over buzzers, just because it was annoying
and they did not have enough trust in the system, even though it was
preventing them creating errors.
Q: The pilot error example, was it genuinely the error of that
particular pilot, or would all other pilots validly say that could
just as easily have been us? Was it really error in the sense of
someone doing something they should not?
There were 2 pilots in this event. However humans come to their
error, it should be our responsibility, as designers, to allow for that.
Q: Pilots doing their training should have a thorough understanding
of the tech in front of them. It's not the designer's fault if the pilot
has not read your instructions?
Not per se, but can we do something better? That is the way I'd put it.
Q: The point of this event was that in different modes the system
behaved differently. Sometimes the auto-throttles did maintain
speed, but in the one mode they unfortunately selected, it didn't
and doesn't. They asked Boeing to change it.
So another misunderstanding about its capability and a
training issue. Not recognising that they picked the one mode where
the auto-throttle stopped working.
How does this relate to driverless vehicles? We must calibrate our
trust when we are in level 3. It will happen in the next few years
and we need to get ready for this. An example of how our
designs can work with humans. A study was done on how we can better
the relations between autonomous systems and humans. 2 groups:
one would have an automated system that would go as normal;
for the other group a symbol would flash up on the dash when the
autonomous system wasn't sure about what was going to happen,
i.e. it could not precisely predict the future.
This mediated the way that we as humans interact with tech.
They found that task performance, reacting to a future incident, got
better when this was presented. Also the levels of trust were
rated as better. I think we can create a system that is honest
with the human and then back again. It is a 2-way process; it's not just the
AV talking to you, you can also interact with it.
It wants to learn from you. We have face recognition tech, to figure
out whether you are aware of your surroundings, and we have step-by-step
processes for how the AV can decide whether it wants to interact
with you or how it should interact with you. There is a new model
of Audi being released that, if it noticed you were not aware enough
for taking over control, would start to give you loads of cues,
audio and visual, and then it would tighten the seatbelt a bit and perhaps
tap the brakes, should it become a dire situation.
But it's learning from you as well. The designs I'd like to look at
in the future are, from your perspective, what you want the automation to get
and to do in these scenarios.
Q: If the vehicle is learning from you, what happens when
someone else drives it?
I'd like to think there would be settings to either learn your face
or voice, so it can treat different humans differently. Personalisation is a big
thing for us at the moment.
If we use automation too much, we have a reduced situation
awareness, so when you take back control, you have no idea
of what has been going on in the environment. In a problematic
scenario, if you take control, you don't know how to react to
it. That is what my thesis is on: how we can interact with the
human to solve that problem, giving them the environmental
awareness to take over.
Another biggy is skills degradation. How will we teach 16-year-olds
to drive, e.g. when automated parking is the norm? Over-reliance
on automation can lead to these things.
We must work together with robots; that is the message I'm here to tell,
and I'm here trying to figure out how we can work better with
robots: systems designed to treat you as a human, but also to treat
the human and the autonomous system as one combined system, trying to solve a common
goal. We are all in this together and we must design for that.
Q: Very thought-provoking on elements of trust. There is something about
our driving and trust. We all trust our washing machines, we
trust taxis driven by other humans, but we are suspicious about cars
programmed to drive by other humans. When cars are fully autonomous,
you will be including old people and young people, who can't
take over control. They will be far more in favour of AVs I'm sure?
That is the goal. Level 3 at the moment is more of a trialling stage.
We can't just unveil this tech on its own; we must think about
society, build the trust very slowly. We've been automating
things for a very long time. Is there something unique about
driving, in having to trust others? Yes. This is the first time
we've seen something so complicated being automated. The driving
task, in our research, involves something like 2,000 things you have to
monitor at one time. Some are things that have happened in the past you
need to remember, or things you need to project into the future;
it's incredibly complex.
Q: Will we trust AVs only when they become moral agents? Essentially
we have to believe that they have our best interests at heart before we
trust them. In consequence there is this moral agent where we think the
car is going to act on our behalf. Do robots have to be moral
agents before we can interact with them?
We may have morality as humans. If you have a scenario of potentially
running over someone, your instinct will be to swerve. I would
liken morality in vehicles to a risk analysis, something that
a human cannot do himself. The response an automation would take
is not necessarily what I would think of as the rational one,
whereas I'd argue the human would not be that rational in that scenario either.
Q: As a human who is not entirely rational, am I not always going to be
wary of something that is purely rational and does not behave in the
way that I would behave? There is also the element of
selfishness: if you have an ethical decision that needs to be made,
say with children in the car, you could make AI algorithms that
would give an ethical decision but would be selfish.
A certain proportion of people in the population are psychopaths;
their view of everyone else is that they are just objects. Would you want
to give control of a vehicle to a completely moral-less individual?
To AI, everything is just data. You have to tell it what to do with that data.
Perhaps the driverless car will forbid you to err.
Q: Are these AVs using the same systems as our sat-navs to navigate? And what happens when it drops out?
Present level 3 responses are based on known hand-overs, but we are
also looking at unknown hand-overs. So if there were atrocious weather
and the car could not see the road, as in fog, or the system(s) went down, it would hand back to the driver in a safe way.
Q: There is an argument put forward that humans have made a really
bad job of driving cars, so automating the process can't be any worse.
I know it's a limited number of AVs allowed on
the roads currently in Germany and the USA, but there has been a fatality,
where the system mistook the blank white side of a lorry as sky
and killed the driver, plus a few roll-overs etc. What are the statistics so
far of AV compared to human accidents?
How many accidents per million miles, that sort of statistic. They do skew it
somewhat as highway-only.
Q: I think that statistic would be the best proof of security vs insecurity
of AVs, when you cannot prove mathematically that a robot
beats a smart toaster.
Q: There is allied proof in terms of policy making, and insurance, and
when you are trying to sell the car to someone. Statistical measures are
surprisingly difficult for people to get their heads around and
be motivated by. I think you will find that car manufacturers won't
be jumping to thrust stats in front of you as a potential buyer,
as stats don't sell cars to people. Stats can inform policy makers
and inform the engineering community, but it's not the thing that will sell cars.
Q: A bet: in 10 years' time, half of the cars sold in the UK will
have a self-driving capacity, for this instance defined as legally
able to take you home from the pub? My friend who works for Google thinks it will happen,
but I think the law will not have caught up?
It's because you've added the 'half' element to it; 'sold' may not exist in that year.
I would say yes. The targets at the moment are 2030 for fully autonomous vehicles on the
road. To get there you have to do it when people want to get rid of their
cars. So in 2026, if you are still selling unautomated cars, then that car will
be on the road for say another 10 years. I think legislation will
take a long time. Also people do like driving, and the automated element
of driving is going to be top-end only to start with, as all new innovations
are. People drive MG Midgets or clapped-out camper vans because they enjoy it.
Q: I've read that a lot of people, in driving, like to retain some control?
If you're doing long, regular car journeys and you would like to get
some work done along the way and cannot go by train, it will be used first
in that sort of situation. I think it will be work-driven rather than leisure-driven.
Answering a "break" question: why do we want AVs in the
first place? One reason is emissions, if you can produce the perfect way
of moving cars. E.g. moving away from a junction, the staggering effect:
one person moves away, then another, and so on all the way back.
That is a massive emissions contributor. Imagine eliminating
traffic. Imagine a system where AVs drop you off in a city
centre, then drive home or find a parking space themselves, perhaps outside the city. You open
up a city with no parking, opening up the spaces otherwise wasted on
car parking. The centre of towns is essentially not there for
parking, but for shops, leisure etc.; perhaps 1/3 of city spaces could be so freed.
Q: So let's consider out-of-town supermarkets. Consider a future where only 1/10
of the earlier car parking space would be required. This cuts 2 ways: often
car parking raises revenue for the municipality, taxes associated with offering
the public free car-parking. There is a loss there as well as a win.
There is the concept of shared vehicles, mobility as a service, due to this change to parking
requirements and the lower number of vehicles on the road. These carriers
will become not what we know them as currently. They will be a space or even a…
Q: The taxi conundrum. If you are at home and want to go to the
supermarket, do you rely on an empty vehicle coming to you? It is a
negative, having to traverse the road network, using resources just to get
to you, to provide this mobility. Factor that sort of thing in: how much benefit is
there overall? Would half the vehicles on the roads be empty?
At the moment, while those cars are in transit, someone will
be in them. But with a sharing policy, any given vehicle
would have greater occupancy, assuming an intelligent notification
system for nearest pick-up distances.
Q: A point I heard recently. In the half-way stage, where some of the
vehicles on the roads are AVs and some are continuously human-controlled:
situations, say at a roundabout, where you are driving as a human
onto the roundabout and you recognise an AV already on the roundabout.
You just pull out in front of it, knowing that the AV will
stop. And a great number of people will start doing that, until the
legal eagles can come down on it. Same with a pedestrian deliberately
crossing in front of any AV.
Q: In the interim situation, with some people just liking driving
ancient vehicles etc., how do you see it working?
Probably we'll end up with 2 road systems. Maybe you will not be allowed to take
a manual car onto the motorway, and perhaps be unable to take
AVs into some urban areas. Governments will have to come to
agreements concerning AVs moving from left-hand-drive countries to right
and vice versa.
Q: I'm a software systems architect and I've worked with JLR,
though not on AVs. The future-feature guys are working 5 years ahead; that is the
product lifecycle. You don't get a vehicle onto the road without 5 years
of work. So they are engaging with the likes of insurance companies
via appropriate official bodies. The auto makers know that
their ability to sell what they will be producing in 7 or 8 years'
time very much depends on legal and insurance frameworks and
what their stances will be on novel technology. So there is a lot
of backchat around these sorts of challenges. Certainly there is a
task force within the insurance industry that has been working on this.
They have a stance on this; how fast it gets into government
policy and law is an open question.
Q: Thinking of Stuxnet, what if a bad guy gets in control of a distributed…
Q: My belief is autonomous trucks will happen first, because the payback
mechanisms are easier to see.
We're currently discussing automatic platooning in a group: vehicles so close together they
are in each other's slipstream.
Q: What about human users trying to exit at a junction while there is a massive
platoon of lorries; how large can they become?
Q: I had a go on your SUDS AV and I'm not sure if I commented
at the time, on the running audio record. I could see myself,
in a level 3 structure, absent-mindedly forgetting which
mode I and the car were in. I could see myself thinking the car was
in AV mode when it's not.
I do a lot of research in that area, looking at ambient systems,
so lights go up around the whole space in automated mode.
Q: I was reading that if you have a vehicle that is in autonomous
mode and it wants to hand over, it can take 15 seconds for the
user to understand the situation that is happening, until they
are in a position where they can actually take over.
In 15 seconds you've travelled a long way.
Or with multiple occupants in an AV, chattering away, it could easily
be 15 seconds.
In our research, 47 seconds. Normal scientists track the mean,
but in engineering, if you have that one person who takes 47 seconds
to react to the system, that is a problem. We can't solve that.
There are systems in place that would stop them. Whether
an emergency or a routine hand-over: 2 different types of
hand-over, and we're approaching those 2 prongs at the moment.
The maximum you are allowed for an emergency hand-over is 5 seconds.
15 seconds is too much.
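To put those takeover times in perspective, a back-of-the-envelope calculation (my own illustration; the motorway speed of 70 mph is an assumption, only the 5/15/47-second figures come from the discussion above):

```python
# Rough distance travelled during a level-3 handover, before the human
# is ready to take control. The takeover times (5 s emergency limit,
# 15 s typical, 47 s worst case) are from the talk; 70 mph is assumed.
MOTORWAY_SPEED_MPH = 70
MPH_TO_MS = 0.44704              # metres per second per mph

speed_ms = MOTORWAY_SPEED_MPH * MPH_TO_MS   # ~31.3 m/s

for takeover_s in (5, 15, 47):
    distance_m = speed_ms * takeover_s
    print(f"{takeover_s:>2} s handover -> {distance_m:,.0f} m travelled")
```

Even the 5-second emergency limit corresponds to over 150 metres of road at motorway speed, which is why the mitigations discussed (slowing down first) matter.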
Q: Surely in that situation an AV could mitigate, like reducing its
speed, i.e. it has general knowledge about things that are
dangerous and things that are not. Slowing down is always a good thing
in an emergency. So it's not quite a black-and-white issue. You may want the
user to take control, because it is aware of an emergency, but
nonetheless there are some mitigating processes beforehand.
Q: On the integrity of GPS etc. There have been instances
in the shipping industry, in piracy situations, where the GPS structure
is hacked/over-ridden so the ships think they are in a different place.
Q: Could there be anti-terrorism measures built in? I'm thinking of the
vehicles driven down pedestrian routes, for multiple murder.
There are systems in place for identifying objects. The way systems work out
what is in the environment is from general concepts and teaching
computers to recognise things. They are becoming very
good at object recognition, e.g. Google DeepMind learning whether something
is a dog or not, so moving on to whether a human is there or not.
Q: A company in Leicester, that ARM has taken over,
makes incredibly smart cameras. One can look at a crowd, identify
individuals and produce a metric of the interrelatedness of
different people, depending on the direction they are looking, whether inside
personal comfort zones etc. In-built processing of all this, no cloud
computing required for it?
If there is a computer vision question, then it will happen.
Monday 9 October 2017, Dr Brian King, National Oceanography Centre:
Argo: a fleet of unattended instruments that measure global warming.
Since the year 2000, a coordinated international effort has deployed
more than 10,000 automatic instruments to measure the rate of the
earth's warming, 95% of which occurs in the oceans. The talk will
describe how the system works, discuss the engineering and scientific
challenges, and what the measurements have revealed so far.
2 hours, 25 people
In my early career we were starting to deploy these yellow tubes, the floats in the
Argo programme. The name came from a means of measuring the
ocean that corresponded to a satellite measurement system
called Jason. So Argo: a fleet of unattended instruments to measure
global warming. We are interested in global change and
measuring global change. For the measurements we want, we can't have
the right number of people in the right places all the time.
We have to do it with instruments when we are not there. A pic of a
Spitzbergen glacier in the Arctic, as it was in 1906, and the same view in 2005:
lake the same, mountains the same, the glacier is completely gone.
Even if you said it was a bit of a cycle when it didn't snow very
much, the original glacier thickness would take several thousand
years to build up again, even if it started tomorrow.
We think things are changing because of the greenhouse effect. It's always
been with us, an important role in making this planet habitable.
Without it, we'd be a frozen waste with no people. Sunlight comes into the
atmos and passes through quite easily; when it gets mixed up and
reabsorbed as heat, the heat does not get out as easily as the sunlight
comes in. So because of the atmos, the Earth is a lot warmer than if there
were no atmos. Go to the Moon, with no atmos, and at night-time
it's about -70 or thereabouts. As more CO2 has been put into the atmos,
it's made that greenhouse effect a bit more effective; that is what is causing the
warming. The greenhouse effect was well understood in the first half of the 19th
century: light passed through gas more or less easily. By 1859 John Tyndall, interested in climate,
had measured the properties of specific greenhouse gases, CO2 and methane.
It was known then that these were important for maintaining the
planet at a habitable temp. The first realisation that industrialisation,
the burning of fuel to CO2, might start causing problems came from the Swede
Arrhenius, who gained the Nobel prize for chemistry in 1903.
In 1896 he considered: if you doubled the amount of CO2 in the
atmos (in 100 years he figured mankind could do that), he predicted a
4 degree rise. It was just a theoretical suggestion at that stage.
One of those occasions where the theory preceded the ability
to measure it. When you add CO2 to the atmos, a lot of things change.
There are changes in the cryosphere, the ice system: snow,
frozen ground, sea-ice, ice-sheets, all those start changing.
Changes in the oceans: its currents, the sea level, the plants and
animals living there. Changes in the atmos and changes in the
hydrological cycle, the process of water evaporating from the
sea and falling back over land; where it falls, how fast it cycles through, is the
hydrological cycle. Clouds will change; lots of changes in complex ways,
as we start tinkering with the planet.
The famous plot of CO2 in the atmos, measured at the top of the Mauna Loa
mountain in Hawaii. It started in about 1960, by someone who wanted to know
how it varied over the course of 1 year. He measured each day for a year,
and it went up and down over that year; as the temp changed, the amount of CO2
went up and down. He repeated for another year, but he could tell it had gone up
over those 2 years. He decided to stay, and the funders provided for him
staying for 5 years. The series has now gone on for 60 years.
There is a seasonal cycle each year, but for each month the next year the
CO2 level has gone up from before. It's recently passed the 400 number.
Keep burning coal and oil and the number keeps going up.
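The shape of that record, a seasonal wiggle riding on a long-term rise, can be sketched with a toy model. The numbers below (baseline, trend, amplitude) are my own illustrative choices, not fitted values from the real Mauna Loa data:

```python
# Toy model of the Mauna Loa record: a rising trend with an annual
# seasonal cycle superimposed. All coefficients are illustrative.
import math

def co2_ppm(years_since_1960):
    baseline = 315.0                       # roughly the 1960 level, ppm
    trend = 1.5 * years_since_1960         # assumed mean rise, ppm/year
    seasonal = 3.0 * math.sin(2 * math.pi * years_since_1960)  # annual wiggle
    return baseline + trend + seasonal

# Each year's peak sits above the previous year's peak, which is how
# the upward trend was spotted after only two years of daily readings.
for yr in (0, 1, 57):
    print(yr, round(co2_ppm(yr), 2))
```

With these assumed coefficients the model passes 400 ppm around year 57, i.e. the mid-2010s, consistent with the talk's "recently passed the 400 number".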
So is 400 a high number? A series estimated from about the last
10,000 years: ice ages have come and gone, CO2 has come and gone.
The estimates for 5 or 10,000 years back come from drilling ice cores
in Greenland and Antarctica. The ice has bubbles of air trapped in it;
extract air from the tiny bubbles and measure the amount of CO2
there used to be. It used to be about 250 to 270, going up and down, but get
to about 1800 and it starts going up seriously. In the last 50 years the increase is really serious.
There have been ice ages every million years or so. The main CO2 contribution
is the burning of oil, wood and coal; an increasing proportion comes from
deforestation. In terms of global warming, a roughly equal contribution
comes from methane: less in proportion, but it's much more effective
at trapping heat. There are nitrous oxides and other gases.
In different countries the energy use is different. Across the world, about
1/4 is energy supply to homes, the second biggest is industry, then
agriculture, forestry etc. In different countries those proportions
would be different. In Australia 90% of the CO2 is related to
stationary power, a lot of industry, a lot of air conditioning
but not many vehicles. Solving transport in Oz is not that
big a deal.
A graph of global mean temp, usually meaning average air temp.
Everyone measures air temp where they are and then averages it all
together. This is not a very good way to measure global warming, but for 200
years that's the best we had.
Look over different periods and the trend lines are different. The rate at
which warming is happening is getting faster and faster. The first 100
years of industrialisation was slow; now much faster, about 1 degree
every 6 decades or so. From the IPCC: what is happening to the climate,
the best evidence, then the best theoreticians to say what is going
to happen. 3 possible scenarios for how emissions of CO2 might
change. If we carry on emitting at the rate we are, in 100 years' time it
will be a couple of degrees warmer than now. Another is we curb our
emissions and the atmos stays about where it is now. What if you
stopped emitting altogether, we all go to clean power tomorrow?
Curiously there is a sharp increase in that scenario to begin with, and then
over 200 years it recovers. The initial anomaly is because if you have
chimneys burning coal, not only do you put CO2 into the air,
which traps the heat, you also put a lot of soot into the high atmos, which acts as a bit of a shield: it stops some of the heat getting through. Go to zero emissions and the first thing you'd see would be some warming, then it would tail
off. Continue as we are, then we'll damage the climate very profoundly.
Does 2 degrees in 100 years matter much? That is a global average, but the
computer simulation and prediction people can say where that warming will
happen. Nearly all the warming will happen over the continents.
The oceans will keep themselves relatively cool. The continents might have
4 or 5 degrees of warming, as there is nothing there to keep them cool.
About 25% of all the anthropogenic CO2 released to the atmos is already
in the oceans. If we didn't have that then
we'd really have runaway warming. So we study carbon in the oceans; that is another
area of our research. 90% of the heat trapped by atmos CO2 is also in the
oceans. The oceans are incredibly important
for capturing that CO2 and heat and burying it away.
The Argo project is measuring things in the ocean and trying to account
for that 90% of heat in the oceans. Oceanography started in about 1870.
To measure the ocean you have to be there on a ship. The first seriously
planned and organised expedition was on HMS Challenger,
a 60m, roughly 2,300 ton adapted naval vessel, with about
200 crew, of which 5 were researchers; the rest were naval crew.
They were at sea for 2.5 years and they collected 263 measurements
of temp. About every 3 days they'd stop the ship and lower
instrumentation with steam-driven winches.
In the 1990s we had the World Ocean Circulation Experiment: purpose-built
research vessels, typically 3,500 tons, 25 crew and
25 scientists, and over 10 years we got 10,000 profiles of temp.
The Argo programme since 2004: its length is 2 metres, its weight is
about 0.03 tons, zero crew, and we've collected 1.3 million
measurements in about 12 million days of operation.
With these measurements we can do what was simply impossible with
ships. I hoped to bring along a float, but they are all packed up and
deployed at sea. It's a 2m long tube that drifts around taking
measurements. Such floats were conceived in the mid-1950s by a US
oceanographer, who wanted to park one at a particular depth, which was otherwise
understood to be impossible; the engineering of the time did not permit it.
John Swallow, a British marine scientist, came in from other
fields of engineering. He'd not heard that it could not be done, so he did
it. His first floats were made at the early labs in Surrey; the pressure cases were
aluminium scaffolding tubes, 3.5m long. Bung up the ends, put in some
batteries and some sensors. They contained a sound source so they could be
tracked: no satellite tracking then, it was a matter of sailing out, listening
for the homing signal and triangulating in.
The Argo network at any one time is about 4,000 of these floats,
deployed since 2000. By 2004 we had some everywhere on the globe.
By 2007 we'd reached our initial target of 3,000 floats operating
worldwide. Each one stands alone, measures ocean temp, salinity
(which is important for ocean density) and the current, the ocean circulation.
About 30 contributing countries: UK, France and Germany, Canada,
USA, Japan, Australia, Korea, India and about 25 other countries with small contributions.
These floats work by having a small bladder at the bottom. Anyone familiar
with scuba: you have a buoyancy jacket; pump air in and the diver comes up,
let the air out and the diver sinks. Here there is a rubber bladder, and we use
hydraulic oil rather than air, as it must work at very high pressures.
When it is 2km down in the ocean, at 200 atmospheres pressure, air
won't do it. A long lead screw acts as a hydraulic pump and pumps oil
out into the bladder. There are batteries sandwiched around the side
and the instrument payload at the top. Out of the top is a satellite
antenna; we use satellite comms to get our data back. The float weighs about
30kg, with about 280g of reserve buoyancy to move
between the surface and 2km depth. Quite some engineering: the batteries must give
5 to 8 years' life, for buoyancy change, payload instrument running
and comms. About 30 D cells for that energy.
At the moment, Argo floats are measuring temp and salinity. In the future:
dissolved oxygen, nutrients, ocean acidification, biological
parameters. Each dot on the shown plot represents, as of yesterday,
an operational float around the globe, taking and reporting data.
Every 10 days the float, starting at the surface for reporting, will sink
to a pre-determined depth, usually 1km. It will drift for 9 days, dive
to 2km, then rise to the surface and send its data, repeating the
process every 10 days. For each one of the dots, a float was built
and someone had to take it on a ship to deploy it. 3,787 active
floats as of yesterday. It's been as high as 4,000. The instruments
have about a 5-year lifetime. They die off and we keep replenishing them
as fast as they are dying, to keep the numbers up. There are different
designs but all are rather similar.
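The 10-day duty cycle just described can be written out as a simple schedule. Only the 9-day drift and the 10-day total come from the talk; the durations of the other phases are my own guesses:

```python
# Sketch of the Argo duty cycle: surface telemetry, sink to the 1 km
# parking depth, drift ~9 days, dive to 2 km, then profile back up.
# Phase durations other than the 9-day drift are assumed, not quoted.
CYCLE = [
    ("surface: transmit data via satellite", 0.25),  # days, assumed
    ("descend to 1 km parking depth", 0.25),
    ("drift at 1 km with the current", 9.0),         # quoted in the talk
    ("descend to 2 km", 0.25),
    ("ascend, profiling temp/salinity", 0.25),       # the measured profile
]

total_days = sum(days for _, days in CYCLE)
for phase, days in CYCLE:
    print(f"{days:>5.2f} d  {phase}")
print(f"total ~{total_days:.0f} d per cycle, ~{365 // 10} profiles/year per float")
```

One profile every 10 days per float, across roughly 4,000 floats, is what produces the data volumes described later.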
All the 2km floats are tubes with antennas at the top. The different
engineering companies have solved problems in slightly different ways,
but all have the buoyancy control at the bottom and sensors
at the top. Up to 2007 we were deploying them faster than they were
dying. It's tailed off to a certain extent since then.
In the movie of dots around the globe, each new dot is a deployment,
and a row of dots is a ship deploying numbers of them, perhaps once
a day. The tails indicate where the float is going. The clock time is
per year. 2002: a row of dots in the Indian Ocean for the early ones.
2003: floats all over the Pacific, starting to get a global
temp profile. By 2007 the array was up to strength: consistent global
measurements, and we could make serious statements as to global
ocean temps. All the dots near Antarctica are going to the right, as that is the
prevailing current. At the equator, currents go back and forth.
Doing things autonomously allowed us to do things previously impossible.
Each ship-based profile costs about 7,000 GBP, taking the cost of the ship
and dividing by the number of measurements. In one month of our float
programme we've collected 10,000 more measurements, at a cost of about
200 GBP each, as there are no research-ship costs. As the costs have come
crashing down, it becomes possible to do this every month rather than every
10 years as before.
A plot of the whole ocean observing effort up to about 2010.
For each 1-degree square box of the globe, count how many observations
there were in that box, with a coloured dot for each count.
A white square means that for all time up to 2010 there was never
an observation there. So the majority had fewer than 5 observations in the first
150 years: not very good for determining how the planet is
changing. North Atlantic numbers good, north Pacific not bad,
certainly near land. The southern hemisphere: huge gaps.
For the first 10 years of the float programme, except for ice-covered
high latitudes where our instruments don't go, there are large numbers
per square almost everywhere. So now we can see what is going on.
Our floats don't mind bad weather. The ships we operate out of
the NOC are tremendously expensive and so are only sent
where there is a good chance of doing any work: so never the
southern hemisphere in winter, as you'd be likely to do 2 days' work in a month,
sitting out a storm otherwise.
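The cost figures quoted earlier (about 7,000 GBP per ship-based profile versus about 200 GBP per float profile, at roughly 10,000 profiles a month) work out as follows; this is just arithmetic on the talk's own numbers:

```python
# Cost per profile: ship-based vs Argo, using the figures from the talk.
ship_cost_per_profile_gbp = 7000     # ship cost divided by measurements
argo_cost_per_profile_gbp = 200      # no research-ship costs
argo_profiles_per_month = 10000

ratio = ship_cost_per_profile_gbp / argo_cost_per_profile_gbp
monthly_argo_bill = argo_profiles_per_month * argo_cost_per_profile_gbp

print(f"Argo is ~{ratio:.0f}x cheaper per profile")
print(f"10,000 profiles/month costs ~GBP {monthly_argo_bill:,}")
```

A 35-fold cost reduction per profile is what turns a once-a-decade survey into a monthly one.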
A plot of 50 years of August measurements in the southern hemisphere:
almost no dots. Compared to 5 years of August measurements
with the floats: now plenty of observations. We can address these
critical scientific questions of how the planet is changing.
It's not just an evolution but a revolution in our capability.
There is free exchange of data within the research community. Taking
measurements and publishing them was not commonly done before
Argo came along. Researchers previously made measurements and they'd sit
there in filing cabinets or on laptops and simply not be instantly available,
maybe only 6 months or a year later. All our data becomes freely
available within 24 hours: it comes in by satellite and goes out
on the internet within 24 hours. We're making salinity measurements.
The Argo structure agreed access to 200-mile exclusive economic zones,
through international brokering. A lot of important ocean is within
200 miles of countries. Argo got permission for measurements to be
made unless a country opted out. This was absolutely revolutionary.
The engineering issues. Anyone could design a float that would work
once and it migh tcost 1million GBP. The challenge is designing a
robst one that could be mass-produced for about 15,000 GBP .
They have to operate about 4 to 8 years without maintainence.
There is no getting them back, its hard enough getting the ships to
plavce them in the water and no chance of ships going to
retrieve them , if its faulty. The mechanics and sensor payload has
to work for 4-8years without maintenence. You have to decide
what sort of batteries, lithium or alkaline. These days usually lithium,
they are more expensive and harder to work with but they extend
float lifetime a lot. So lithium primary batteries as higher energy
density , more MJ on board per battery Kg. The energy budget
has to be split between bouynacy changes , data telemetry and running the
sensor payload. The sensors have to be low power, you cant just
take laboratory kit that can wastefully draw mains power. You
can't really use acoustics, and can barely use optics, because they both
use too much energy. So lots of passive sensor payloads. The sensors have to
remain operational through the float's life: place them in the factory and
they must remain in spec, in calibration, working for 5 to 8 years. The data
telemetry must work in all weather and at all latitudes, so an ordinary mobile
phone is out. These days we usually use Iridium phones, but you have to be
able to do that from about one foot above sea level in a force 8, at all times
of the year. Take readings at 10 to 15 m spacings down to 2000 metres,
where the pressure is about 200 times atmospheric. People say outer space is a
harsh environment. Outer space has cosmic rays and it is difficult to
get there, but once you're there it's a relatively benign environment.
The deep ocean is hundreds of atmospheres of pressure in a highly corrosive
liquid, a tough environment to place your kit.
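The energy-budget arithmetic described above can be sketched roughly. All the figures below are invented for illustration, not real Argo numbers; the point is only how a fixed battery pack divides across cycles:

```python
# Back-of-envelope energy budget for a profiling float, with made-up figures.
# A lithium primary pack might hold a few MJ; each 10-day cycle spends energy
# on the buoyancy pump, satellite telemetry, and the sensor payload.

PACK_ENERGY_J = 2.0e6     # hypothetical battery pack capacity, joules
E_BUOYANCY_J = 8_000.0    # per-cycle pump cost (assumed)
E_TELEMETRY_J = 2_000.0   # per-cycle satellite transmission (assumed)
E_SENSORS_J = 1_000.0     # per-cycle sensor sampling (assumed)

per_cycle = E_BUOYANCY_J + E_TELEMETRY_J + E_SENSORS_J
cycles = PACK_ENERGY_J / per_cycle
lifetime_years = cycles * 10 / 365.25   # one cycle every 10 days

print(f"{cycles:.0f} cycles, ~{lifetime_years:.1f} years of operation")
```

With these assumed numbers the pack lasts about 5 years, which is at least consistent with the 4-8 year lifetime quoted in the talk.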
Our floats are typically about 300 km apart, around the planet.
They operate on about a 10-day cycle, measuring temp, salinity,
sometimes oxygen. Other payloads are possible but very constrained by
the energy budget and engineering constraints, and there is no control
over their location. They just go where the currents take them; some are
displaced by hundreds or even thousands of km.
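The 300 km spacing implies a fleet size you can estimate on the back of an envelope. The ocean area is a rounded figure, and treating each float as covering a square patch is the crude assumption here:

```python
# If each float represents roughly a 300 km x 300 km patch, how many floats
# does the global ocean need? Ocean area ~361 million km^2 (rounded).

OCEAN_AREA_KM2 = 361e6
SPACING_KM = 300

floats_needed = OCEAN_AREA_KM2 / SPACING_KM**2
print(f"~{floats_needed:.0f} floats")
```

This lands on the order of 4000 floats, which matches the scale of the array described in the talk.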
Systematic global coverage means global problems can be studied,
with a high data volume: 1.3 million profiles downloaded, and deeper
penetration than previous measurements. Before floats came along the
options were temp only, down to 750 m. And there is no seasonal bias,
i.e. readings are not biased by being taken only in winter or summer.
We can get there year round, so we can measure
monthly, seasonal, annual or even longer cycles.
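The quoted 1.3 million profiles follow from the cycle arithmetic. The fleet size and duration below are my assumptions for the sketch, not figures from the talk:

```python
# Rough profile arithmetic: a 10-day cycle gives ~36 profiles per float per
# year; an assumed average fleet of ~3500 floats reaches 1.3 million
# profiles in roughly a decade.

profiles_per_float_year = 365.25 / 10
fleet = 3500   # assumed average fleet size over the period
years_to_1p3m = 1.3e6 / (fleet * profiles_per_float_year)

print(f"{profiles_per_float_year:.1f} profiles/float/year, "
      f"~{years_to_1p3m:.1f} years to reach 1.3 million")
```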
The CO2 graph, and the corresponding pic of increasing planetary heat.
From 1970 to the present, the increase in heat, measured in zettajoules,
i.e. 10 to the 21 joules. Of the heat absorbed into the planetary system, the
atmos component is barely visible. In terms of the physics of the
Earth's energy system, the atmos is almost irrelevant.
For us and agriculture etc, we feel that heat and it is critical
to us, so we have to understand the atmos. But as a physics problem
of energy in and out and where it is stored, it's all in the oceans.
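To see why a zettajoule-scale heat uptake shows up as only a small temperature change, here is a back-of-envelope conversion. The constants are rounded textbook values and the 300 ZJ heat figure is illustrative, not taken from the talk:

```python
# How much would 300 ZJ warm the upper 2000 m of ocean, if spread evenly?

OCEAN_AREA_M2 = 3.6e14   # ~361 million km^2
DEPTH_M = 2000.0
RHO = 1025.0             # seawater density, kg/m^3 (rounded)
CP = 4000.0              # specific heat of seawater, J/(kg K), rounded
HEAT_J = 300e21          # 300 zettajoules, illustrative

mass_kg = OCEAN_AREA_M2 * DEPTH_M * RHO
delta_t = HEAT_J / (mass_kg * CP)
print(f"~{delta_t:.2f} K average warming of the upper 2000 m")
```

A huge energy input works out at only about a tenth of a degree, because the ocean's heat capacity is so enormous; hence the need for millidegree-accurate sensors.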
There was a big uncertainty back in the 1970s because the
data was very sparse. From about 2000 the uncertainty band
is much narrower, because we've been measuring the ocean temp
directly. In earlier times, to the question of whether or not
there was global warming, we could not be sure:
about 50 ZJ, but it could be anything from -20 to +100 ZJ.
Now there is no doubt, absolutely unequivocal, that the Earth
is warming up, and we know at exactly what rate.
The remaining uncertainty mostly comes from the deep
ocean, which we are not measuring very well as yet. We're expanding our
tech to measure the deep oceans.
I referred to 1.3 million measurements.
A pic of one: surface to 2 km down, a temp and salinity profile.
Quite a lot of variability at the surface and a slow trend as you descend.
So low down we don't have to measure too often, but towards the
top, a lot of measurements. One measurement, made once, of the 1.3 million.
Put together perhaps a hundred profiles and colour them according to
temp or salinity. Build up from those collections to complete maps,
e.g. one from the UK Met Office using data from the floats, of the
southern hemisphere temp at 1 km depth. So it's possible to
build up pics of very remote places from such data assemblages.
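The step from scattered profiles to a map is essentially binning and averaging. A minimal sketch with invented sample data, in plain Python rather than any Met Office tooling:

```python
# Grid scattered float temperatures onto a coarse lat/lon map by averaging
# every measurement that falls in each grid cell.

from collections import defaultdict

# (lat, lon, temp_at_1km) tuples -- invented example profiles
profiles = [(-30.2, 10.5, 8.1), (-30.8, 10.9, 8.3),
            (-46.1, 117.0, 4.2), (-47.7, 118.5, 4.0)]

CELL = 5.0  # 5-degree grid cells

cells = defaultdict(list)
for lat, lon, temp in profiles:
    key = (int(lat // CELL), int(lon // CELL))  # cell index
    cells[key].append(temp)

grid = {k: sum(v) / len(v) for k, v in cells.items()}
for cell, mean_t in sorted(grid.items()):
    print(cell, round(mean_t, 2))
```

Real mapping systems interpolate and weight far more carefully, but the principle is the same: many sparse profiles, one averaged field per cell.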
There are operational uses for such data: in Oz, France, Japan, the UK etc
they assemble these data to assist forecasting. They are beginning to use it
for seasonal forecasting, like how good the coming
monsoon will be. It is used for research into climate change: global
sea heat content, and sea level rise due to thermal expansion of the oceans.
90% of global warming takes place in the oceans. The air
does not weigh much, so it can't take up much heat; land is very
solid, so it's hard for heat to penetrate. Oceans can move and mix, and extra
heat can penetrate deep into the ocean easily.
An example of ocean forecasting. A vertical slice along the equator in the
Pacific, the top few hundred metres, coloured according to temp.
Warm colours and cold colours, 2 plots in Jan and 2 plots in Feb
of 2014. A warm blob is moving to the right, and by April
it is well over to the right and all the cool blue colours are displaced:
a big warming anomaly in the Pacific. This enabled a forecast that
for that year there would be a significant El Nino event.
These are huge climatic events that result in flooding in
some places, droughts in other places, etc. Data from these
floats were able to forecast this, and governments could
plan for it.
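The eastward-moving warm blob in those equatorial plots can be tracked numerically. A toy sketch with synthetic anomaly values, not real Pacific data:

```python
# Track the longitude of a warm anomaly as it drifts east along the equator.
# Synthetic data: each month is a list of temp anomalies (K) at fixed
# longitudes, with a peak that moves right month by month.

months = {
    "Jan": [0.1, 0.3, 1.8, 0.4, 0.2, 0.1, 0.0],
    "Feb": [0.0, 0.2, 0.5, 1.9, 0.3, 0.1, 0.0],
    "Apr": [0.0, 0.1, 0.2, 0.3, 0.6, 2.1, 0.4],
}
longitudes = [140, 160, 180, 200, 220, 240, 260]  # degrees east

for month, anomaly in months.items():
    peak = anomaly.index(max(anomaly))  # index of the warmest point
    print(f"{month}: warm anomaly centred near {longitudes[peak]}E")
```

A steadily eastward-drifting peak like this is the kind of signature that fed the 2014 El Nino forecasts mentioned above.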
The Argo system makes climate change detection much more
robust. Say you want to ask how the world changed from
1950-1960 to 2000-2010. There is a great change in the number of
measurements between those 2 decades, and hence a sharp reduction in
uncertainty. In the earlier decade there were huge swathes where we simply
were not sure. Now we are sure, we've measured everywhere.
In 10, 20, 30 years' time we will be absolutely sure and precise.
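The "more measurements, less uncertainty" point is essentially the standard-error rule: for independent measurements the uncertainty of a mean shrinks like 1/sqrt(N). Real ocean sampling is correlated and not this simple, and all the numbers below are illustrative:

```python
# Standard error of a mean scales as sigma / sqrt(N): going from a sparse
# decade to a densely sampled one shrinks the uncertainty dramatically.

import math

SIGMA = 0.5          # assumed spread of individual temp measurements, K
n_sparse = 200       # illustrative measurement count for 1950-1960
n_dense = 1_000_000  # illustrative count for 2000-2010

se_sparse = SIGMA / math.sqrt(n_sparse)
se_dense = SIGMA / math.sqrt(n_dense)
print(f"sparse: +/-{se_sparse:.4f} K, dense: +/-{se_dense:.4f} K, "
      f"{se_sparse / se_dense:.0f}x tighter")
```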
We know the global seas are getting warmer and we also know there are
distinct salinity changes as well.
Warming of the atmos is not a good estimate of global warming.
For a couple of hundred years that was the best we had, as air temps were
the only measurements made. The atmos is a sort of indicator of what is going
on. A time series from 1880 to 2010 of global average air temp:
in the last 10 years it was not going up much, and there was much talk
in the press of a hiatus, that global warming was not happening.
The air temp goes up and down, but it is only a 10% snapshot of the
global warming pic. Now the ocean record from 1960 to the
present: in those "suspect" 10 years of air temp, the period when we were
getting the peak number of float profiles, the global sea temp was still going
up at the same rate as before. You simply cannot beat the
physics of the CO2 in the atmos trapping the heat. Where air and ocean
temperatures do sometimes go out of kilter is after a major volcanic event.
So Pinatubo, El Chichón: big events that put so much dust in the
atmos that it acted as a barrier, like painting your greenhouse windows
white. In such events everything can cool down, but it's just a blip
on the underlying trend.
What we'd really like to know about is ocean rainfall. We have the
hydrological cycle: 95% of water evaporation is over the sea and most
falls back over the sea. Some evaporates over land and some
falls back over land. What we want to know is the net result of
the amount coming out of the sea and falling on the land, as that is
what is significant for floods, droughts, agriculture etc.
It's hard to measure the amount of rainfall and the amount of
evaporation everywhere. But where water evaporates, the ocean left
behind is a bit saltier, and where it rains, the ocean becomes a bit
less salty. So by measuring the salt concentration you can discover
where there is lots of evaporation and where there is lots of rainfall.
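The salinity-as-rain-gauge idea above amounts to a simple sign test: compare each region's surface salinity against the global mean. All values below are invented round numbers for illustration:

```python
# Classify regions as net-evaporation or net-rainfall from surface salinity.
# Values in parts per thousand; all numbers invented for illustration.

salinity = {
    "N Atlantic": 36.5,
    "S Atlantic": 36.0,
    "N Pacific": 34.0,
    "Tropical rain belt": 33.5,
}
mean_s = sum(salinity.values()) / len(salinity)

for region, s in salinity.items():
    regime = "net evaporation" if s > mean_s else "net rainfall"
    print(f"{region}: {s} ppt -> {regime}")
```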
A map of average ocean surface salinity, 33 to 37 parts per thousand.
The surface Atlantic is a bit saltier than the surface of the Pacific.
A map showing where the ocean tends to evaporate, like the
north and south Atlantic, which are relatively salty compared to most
of the world's oceans. Now a map of changes, the trend in saltiness
as measured long term: where it's salty it's getting saltier, and where
it's less salty it's getting less salty. That means the process of
evaporation and rainfall is speeding up; that's how we interpret this.
That means, for the land, the wettest bits will flood more
and the dry bits will have more droughts, because of the speed-up of the
hydrological cycle.
The Argo system allows us to determine changes sooner and with
greater confidence. Autonomous platforms are now the dominant source of
data for describing the oceans. We will now extend into the deep
ocean and take other measurements.
The Discovery, a ship based in Soton, set off to the Southern Ocean a few
weeks back. To get to the ocean deep, the tube design will not work down to
600 atmos: the walls would have to be so thick that it would be too heavy to float.
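The 200-atmosphere and 600-atmosphere figures quoted in the talk follow directly from the hydrostatic relation P = rho * g * h; checking both with rounded constants:

```python
# Hydrostatic pressure at depth: P = rho * g * h, converted to atmospheres.

RHO = 1025.0        # seawater density, kg/m^3 (rounded)
G = 9.81            # gravitational acceleration, m/s^2
ATM_PA = 101_325.0  # one standard atmosphere in pascals

for depth_m in (2000, 6000):
    p_atm = RHO * G * depth_m / ATM_PA
    print(f"{depth_m} m: ~{p_atm:.0f} atm")
```

2000 m comes out near 200 atm and 6000 m near 600 atm, matching the figures for standard and deep floats respectively.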
You have to go for a sphere design, and the material is glass.
Counter-intuitively, a glass sphere about 1 inch thick, if perfectly made,
can withstand 600 atmos of pressure. If you could mill them out of metal that
would be fabulous, titanium say, but milling out of a block of titanium would
be a couple of million dollars per float, whereas glass spheres are about
2000 dollars. They are made in 2 hemispheres and pushed together.
So Deep-Argo, getting to the lowest half of the ocean. The floats are
currently being made and tested by several groups. A new instrument package
is under parallel development. This also has to survive higher
pressure, and the demands on accuracy are about a tenfold increase
over the upper-half floats. We need to measure ocean temp to
about 3 millidegrees accuracy, unattended for 10 years without
any drift. We've recently been tasked by the G7 science ministers
to come up with a proposal for a deep measurement programme.
It would cost about 25 million USD a year. The existing programme
costs about the same. It sounds a lot, but divide it between 7 or 8
major nations to answer the fundamental question: how much
global warming is happening. How much is it worth to our nation
to know exactly how global warming is working; worth 2 million
dollars to the UK?
Oceanography, like a lot of sciences, follows on from new
technologies: suddenly you can do things that
you could not do before. Over 40 years the Argo system has
much improved our data gathering and is the only technology
to provide global coverage. In the next 10 years it's likely
to move into the deep, and to involve biogeochemical measurements: oxygen,
ocean acidification and things like that. It's the key data-set for global
change studies. The current NOC scientists will be the first
generation who will actually be able to describe human
impact on climate, completely and without doubt.
Our generation has made the tech to make comprehensive
measurements, but we need to do that for 30 years, and the
next generation will be able to describe with very narrow uncertainty
what is going on.
The CO2 amount is still going up. It is worse than I thought it was.
Arrhenius in 1896 asked: what if it doubled? That would make a big
difference; the Earth would warm 4 degrees. He worked that out
as a theoretical construct.
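Arrhenius's result is usually expressed as warming proportional to the logarithm of the CO2 ratio. Taking the 4-degrees-per-doubling figure quoted in the talk (modern sensitivity estimates differ), a sketch of what that predicts; the concentration values are illustrative round numbers:

```python
# Warming under a logarithmic CO2 response: dT = S * log2(C / C0),
# with S = 4 K per doubling, as quoted from Arrhenius in the talk.

import math

S_PER_DOUBLING = 4.0   # K per CO2 doubling (the figure quoted)
C0 = 280.0             # pre-industrial CO2, ppm (rounded)

for c in (410.0, 560.0):
    dt = S_PER_DOUBLING * math.log2(c / C0)
    print(f"{c:.0f} ppm: dT ~ {dt:.1f} K")
```

At a full doubling (560 ppm) the formula returns exactly the 4 degrees quoted; at roughly today's concentration it gives a bit over 2 degrees.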
to be continued
Please make emails plain text only , no more than 5KByte or 500 words.
Anyone sending larger texts or attachments such as digital signatures, pictures etc will have
them automatically deleted on the server. I will be totally unaware of this, all your email will be deleted - sorry, again
blame the spammers. If you suspect problems emailing me then please try using
my fastmail or my fsnet.co.uk account.
If this email address fails then replace onetel.com with fastmail.fm or
replace onetel.com with divdev.fsnet.co.uk part of the address and
remove the 9 (fsnet one as a last resort, as only checked weekly)
keyword for searchengines , scicafshadow, scicafsoton, Southampton Science Café, Café Scientifique, scicaf, scicaf1, scicaf2
, free talks, open talks, free lectures, open lectures ,