Monday, 14 Nov, 2016, Dr Paolo Cipollini , NOC Southampton:
The use of satellites to monitor sea levels.
18 people, 1 3/4hr
It might sound like asking too much to measure sea levels (SL) from way up there in satellites (S), but we can do it.
Is the sea really flat? And does SL remain constant? The sea is not flat; there can be big waves. Wind blowing over the sea surface creates waves, but when it calms down the level will flatten out to the average of the previous waves. Create waves in your bath and when they stop the water level is just the same as before. There are tides. The tides in Soton are 4 to 5m between minimum and maximum heights.
Again the average of the tides is zero. A place like the Bay of Fundy
in Canada, 16m of difference in height there. Again there,
the average of the tide is zero. In the long term these changes don't
count for much. There are other effects, bumps and troughs in SL, due to currents. Where you have a current there will be some signature of that current in the SL. It's not the simple case of where there is a current there is a bump; it relates to a slope in the SL. If you could look at the sea surface on the large scale, tens to hundreds of km, you would see these bumps and troughs.
It's the same concept as the highs and lows on atmospheric weather maps, which is what generates winds. Winds don't go from the high to the low, as you would intuitively think, but tend to rotate around the highs and lows. In the ocean it's the same effect: the bumps and troughs are similar to the highs and lows of the atmosphere. If I say the currents don't change (they may well do over long time scales), then the mean SL should not change. We've now been measuring SLs for decades, if not 150 years or so in some places. When you look at the global picture of SLs, the regional SLs have been changing.
If a current changes somewhere, then the SL in one place will tend to go down and in another associated place go up by the same amount. But it's also changing globally, the global level of SL, and that is what worries us the most.
Along the coasts we have tide gauges, originally placed to study the tides. After a while, months or years, by looking at the data the tide becomes really predictable. Essential for shipping and navigation. There is another thing you can measure with tide gauges set in one place for a long time: filter out the daily up and down of the tide, and see over years whether there is an upward or downward trend in SL, or whether it stays flat. So we can look at long-term sea level at these gauges. NTSLF tide-gauge network of the NOC, Liverpool site.
There is a global network of tide gauges, GLOSS. The Newlyn gauge has 150 years of data, but some others, in developing countries, perhaps only 5 or 10 years. Trying to put all that together to find a global mean has to be done with a pinch of salt, with different weightings for different stations. For some of them there was a war over some of the period. A gap of perhaps 10 years and a lack of cross-calibration of the new siting: a lot of practical problems in trying to assemble a nice plot.
Where there are no islands, we can only use satellites.
From tide-gauges we do see the SL changing.
A map of Europe and N America, with up or down arrows along the coasts with gauges. They show the local trend in sea level, measured wrt a local reference. Sweden has down arrows: the relative SL in that region has been falling since the end of the last glaciation, when it was all covered in ice, because the Earth's crust is still bouncing back. The crust does not flip back, with no overburden, in a few minutes; it is taking hundreds of years to return. We try to account for all the local land movements, which we can do with GPS coupled to tide gauges. The GPS system works on an absolute coordinate system, so it can tell you whether the coupled tide gauge is going up or down in that absolute frame.
We've seen, over the last 130 years, using old tide gauges, new ones, and plugging in S altimetry data for the last 25 years, a clear increase. Some people see an acceleration in global SL rise; that is still under discussion, but consensus is going that way. This is worrying because you have to project this situation into the future for designing coastal infrastructure. We get, for the mean SL rise over the last 110 years, about 25 cm.
This is a global average. When you use tide gauge data from the past, perhaps old instruments lacking calibration, or an operator taking readings only daily, there are errors, giving an uncertainty to the data. The error bars get narrower with modern equipment. Even accounting for the uncertainty, it is rising, perhaps 1.5 to 2mm per year in the 1900s and now more like 3mm per year in the last couple of decades.
The launch of Sentinel-3, by ESA in Feb 2016, is the one I work with. On board are different measurement instruments, particularly a radar altimeter, the data I work with primarily.
Aeroplanes have altimeters for measuring height relative to the
ground, similar principle for us. The altimeter measures the height from
the sea surface to the S, quite accurately.
It orbits the Earth and in one orbit the Earth will spin under it.
The result of 10 days of orbits is like a mesh pattern.
Interpolate the mesh points to produce a nice image.
Filling the gaps with some maths filtering. In red are the peaks and in blue the troughs of the sea surface. The light blue depressions may be 10 to 20cm and the dark colours 20 to 30cm. So a variation in SL of about 0.5m. This image shows the effect we call El Nino. So not just a rise in sea temperature, but even the SL goes up.
With now 25 years of data we can see what local SL change has been in different areas. So we can produce maps of the rate of change of the SL. On one side of the Gulf Stream the sea is going down, on the other side it's gone up. In the eastern Pacific the SL has gone down, but around Indonesia it's gone up; over 20 years of data it's gone up 7 or 8cm.
Since 1992 we've had about 10 altimeters in orbit.
If one stops working and a new one is launched, we can cross-calibrate with the old data. A lot of work goes into inter-correlating all these mission data. ESA has 2 S, the US Navy has some, and there are a couple of polar ones.
There is an annual cycle in the data of about 15mm.
A mix of the effect of snow and thermal expansion perhaps.
The peak is in Sept/Oct, and the thermal expansion effect should produce a minimum then. It is due to water storage on land, as water or ice. Sept/Oct is when the glaciers and snow, mostly in the northern hemisphere, have melted back the most, offset by only a few in Chile and NZ.
Plants taking up water in the summer months?
That will have a signal due to tree canopy storage. Again most of the forests are in the northern hemisphere so you'd expect a minimum then. The signal that dominates is the land-based snow and ice.
So SL rise: due to the accumulation of greenhouse gases the climate is changing and the Earth is warming up. There are 2 mechanisms there: one is thermal expansion of seawater, the other is that ice over land has been melting proportionally more and going into the sea. For that 20 to 30cm rise since the 1800s, these 2 effects have contributed about 2/3 and 1/3 between them. For the future, extra increase from thermal expansion, but the major problem will be from melting water over land. Melting from Antarctic land ice and Greenland will be the major contributor in the next few decades.
The IPCC assesses the science, reviewing everything continuously for their reports on the state of the planet. They've been compiling the results of different reports based on different models. A precautionary approach is to run different models and describe the results in terms of probabilities. So we are not saying the likes of "SL in 2100 will be 87cm higher"; it is a spread of values, giving probabilities.
Another IPCC report is due June 2017.
What SL rise will be in say 20 years is dependent on what we do in the way of burning fossil fuels. So the IPCC quantifies a few different behaviours: go for a totally green economy and stop burning fossil fuels; something in-between, taking a couple of decades to change our behaviour wrt renewables etc; and the other is to continue as we are. In the continue-as-we-are scenario, by 2100 we end up with a 70cm rise. There are huge stretches of the world where people are living at the level of current SL. Think Venice or Bangladesh. All those people will have to relocate.
They can currently cope just about with current storms but
not those storms on a 70cm background rise.
Bear in mind there may be an acceleration in the melting over Antarctica and Greenland. So there is a non-zero probability of over 1m by 2100 for that scenario. Lots of people around the world are working on this, trying to perfect their models to hopefully converge on some figure. There are things we don't know well enough. One major thing we don't know well enough is the melting rate of the Antarctic and Greenland ice. We do know pretty well how much water is locked up in those locales in the form of ice, because people have sounded through it and know where the bedrock is, to good accuracy. Of more immediate concern is
the ice melt over Greenland. There is water enough there to increase global SL by 7m. The upper floor of this pub where we are now is probably 3 or 4m above SL. Then 3m on top of that and we've drowned, along with all the other coastal cities of the world. For Antarctica there is about 55m of water locked up there. Loads of islands in the Pacific would completely disappear. Put a storm surge on top of current SLs and it can be coped with, but add a metre.
Hurricane Katrina broke local sea defences. In the UK we had a bad storm surge on 1 Feb 1953, which resulted in the Thames Barrier being built; it has to be closed quite often to defeat storm surges. 100cm of mean SL rise and Venice goes: a normal tide of 100cm and Venice floods, so that elevated water level all the time would be impossible. Amsterdam, Hamburg, the sea-front areas of Los Angeles and San Francisco are low, 1 to 5m above sea level, so those would disappear. Most of New Orleans is low, flooded with Katrina. London is a bit higher up at 8-9m, but if Antarctica starts melting then it is vulnerable too.
SLs are rising, with great confidence. So we need to start acting now, for the sake of future generations. We need to mitigate the effects of our economies on climate and SL, so we need to reduce fossil fuel use, increase energy efficiency, adopt renewables. But already
we have to adapt in terms of building new coastal defences.
We need to rethink how we use land, build away from the coasts
and accept that we will have 1 to 2m of SL rise.
A digression on a recent phenomenon: for 35 years the combined Arctic and Antarctic sea-ice global total for any day, compared to the long-term average for that day, was surprisingly constant, until the middle of October 2016 when it went totally off the scale, with an eventual record global minimum on 22 November 2016. A climate-change tipping point? Everyone agreed the geography, oceanography etc are very different at the 2 poles, but no one could say why a fixed scale of +/-3 million sq km was sufficient for 35 years; in fact for 25 years +/-2 million was a sufficient scale for this day-of-year global anomaly metric. A different use of satellites, for remote sensing.
1:03 to 1:15 , I could transcribe if there is interest .
Satellite altimetry - how we use it over the open ocean. At the NOC I like to push the techniques. We would not usually use it in the coastal zone, but that zone is important, so we are trying to remove the technical obstacles.
These altimeters don't work well over land, so they're no use for monitoring the recent NZ or Italian earthquakes. The basic principle of altimetry is you put a satellite up there, it sends down a radar pulse, a ping of e-m energy, to the surface of the sea. The signal bounces back and the satellite gets the reflection. Knowing the time and the speed of light you measure the distance. If the sea level is lower, it will return later. You know the orbit precisely, from a number of different techniques; one intuitive one is to have a GPS receiver on the satellite. So you then know how high the satellite is wrt the centre of the Earth; take the difference, and that gives the sea level.
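As a rough sketch of that arithmetic (the numbers below are made up for illustration, not actual mission values), the calculation is just: range from the two-way travel time, then orbit height minus range:

```python
# Toy sketch of the basic altimetry calculation; illustrative numbers only.
C = 299_792_458.0  # speed of light in vacuum, m/s

def sea_surface_height(two_way_travel_time_s, orbit_height_m, path_corrections_m=0.0):
    """Height of the sea surface relative to the reference used for the orbit.

    two_way_travel_time_s : time for the radar pulse to go down and back
    orbit_height_m        : satellite altitude from precise orbit determination
    path_corrections_m    : sum of atmospheric/ionospheric delay corrections
    """
    range_m = C * two_way_travel_time_s / 2.0 - path_corrections_m
    return orbit_height_m - range_m

# Example: a ~1000 km orbit and a ~6.67 ms two-way delay (made-up values)
print(sea_surface_height(6.671e-3, 1_000_030.0, 2.3))
```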
What makes it challenging, is we want it to be very accurate. The bumps or
troughs on the surface may only be 10cm, I want to be able to resolve
a cm of sea-level change over say 10 years. I want an accurate
measurement from a satellite that is 1000km up there. So you
have to account for every source of error, every strange effect.
The transmitted and received signals have to go through the atmosphere, so there is water vapour, different gases, free electrons in the ionosphere: a lot of things that make the speed of light not quite the speed of light. I'm proud to say that with the latest generation of altimeter satellite up there, I can measure a disc of just 5km across, with an
accuracy of 2-3cm. When you average all those millions of readings across the
whole globe you get a really accurate measurement. Accounting for the
error sources, you can get to mm precision, and you can observe a
global sea-level change of 3mm per year, perhaps only 2 or even 1mm
per year. We are down to that level of precision.
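A back-of-the-envelope way to see why averaging millions of 2-3cm readings can get to the millimetre level (the noise figure and the measurement count below are assumptions for illustration, not the mission error budget):

```python
# Standard error of the mean = sigma / sqrt(N); illustrative assumed numbers.
import math

sigma_single = 0.025        # ~2.5 cm noise on one along-track measurement (assumed)
n_measurements = 5_000_000  # rough order of magnitude per global map (assumed)

standard_error = sigma_single / math.sqrt(n_measurements)
print(f"random-noise part of the global mean: {standard_error*1000:.3f} mm")
# ~0.01 mm, so the mm-level limit comes from systematic effects, not random noise.
```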
We get those pings back to the instrument, and by somehow decoding
those pings, we get extra information about the sea.
We fit a mathematical model to the rise and fall of that waveform
and from that can determine a number of different parameters.
Looking at the location of the waveform in the time sense, I get the
height of the SL. From the shape of the ping: if the sea is very flat, the ping rises very quickly; for a rough sea it rises more slowly. So from the slope of this rise it's possible to determine the significant wave heights all around the globe, just from the altimeter returns. The overall power, the amplitude of the peak response, is related to the wind speed. So I measure 3 things with the same instrument.
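A minimal sketch of that waveform fitting ("retracking"), using an idealised smoothed-step leading edge rather than any operational model; the gate times, noise level and parameter names are invented for illustration:

```python
# Fit an idealised leading-edge model to a returned waveform to recover:
# epoch (-> range, hence sea level), rise time (-> wave height), amplitude (-> wind).
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def leading_edge(t, amplitude, epoch, rise_time):
    """Idealised ocean return: a smoothed step centred on the epoch."""
    return 0.5 * amplitude * (1.0 + erf((t - epoch) / (np.sqrt(2.0) * rise_time)))

t = np.linspace(-50, 50, 128)                        # gate times, arbitrary units
observed = leading_edge(t, 1.0, 3.2, 6.0) + np.random.normal(0, 0.02, t.size)

(amplitude, epoch, rise_time), _ = curve_fit(leading_edge, t, observed,
                                             p0=(1.0, 0.0, 5.0))
# epoch     -> where the half-power point sits   -> range, hence sea level
# rise_time -> how slowly the edge rises         -> significant wave height
# amplitude -> overall returned power            -> wind speed (via backscatter)
print(epoch, rise_time, amplitude)
```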
Altimetry is along-track. We are familiar with images from satellites; altimetry is different: the S goes around a track and the measurement is beneath that track. So the data is just a file of the returns along that track. So in say 10 days there is a mesh of results, but nothing in between. For a nice 2D image, some interpolation is then required.
I'll highlight 2 missions: Poseidon-3, the name of the instrument on Jason-3, and Sentinel-3A. Those have been launched this year, plus the past ones, and many more are lined up for the future, commissioned until 2030. We've been building up the data record since 1992. It's a mature technique now for oceanography. From altimetry you get maps with the bumps and troughs in the ocean surface, varying between + and - 30cm or so. For a complete world map the gathering time is 35 days. So these maps are used in combination with gravity measurements to build up the absolute dynamic topography, which accounts for the major currents. The complete range
of heights is about 2 to 3m. Where there is a major current like the Gulf Stream, or off South Africa, you have big changes; there are large changes in the height. But the current is not where the bump is, but where the slope is. Much the same concept that you see on weather maps: the winds are not where the highs or lows are, but where the isobars are closer and the slope in the pressure is bigger. The same effect in the ocean is called geostrophy, the balance between the pressure gradient and the Coriolis force, because both
situations are on a rotating sphere. In a rotating reference frame there is an apparent force: everything that moves on a large scale, > km scale, will tend to steer to the right in the northern hemisphere and to the left in the south. At very large scales we can assume it is at equilibrium. With a high in one place of the ocean and a low in another place pushing water over, the Coriolis force means the water rotates around the highs and lows, moving along where the slope is highest. So you end up with a map of currents, and flipping between the 2 images highlights the major currents.
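A small sketch of that geostrophic balance, assuming a gridded SSH field like the altimetry maps; the grid spacing and the example numbers are my own illustrations:

```python
# Geostrophy: surface currents from the slope of the sea surface.
import numpy as np

G = 9.81            # gravity, m/s^2
OMEGA = 7.2921e-5   # Earth's rotation rate, rad/s

def geostrophic_velocity(ssh, dx, dy, lat_deg):
    """u, v (m/s) from sea surface height ssh (m) on a regular grid.

    Not valid near the equator, where the Coriolis parameter f -> 0.
    """
    f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))   # Coriolis parameter
    deta_dy, deta_dx = np.gradient(ssh, dy, dx)     # slopes of the surface
    u = -(G / f) * deta_dy   # flow runs along the height contours,
    v =  (G / f) * deta_dx   # not down the slope
    return u, v

# A 1.5 m drop over ~100 km (roughly the Kuroshio example in the talk) is a
# slope of 1.5e-5, which at 35 N gives a speed of about 1.8 m/s.
```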
So from altimetry we see the bumps and troughs and then the currents. In the 1990s (the design was in the 1980s) that was the major objective. At that time there was less importance laid on knowing the global mean SL. It worked very well, and we measure the global SL rise as well.
Movie of a current off Japan, the height of the sea as measured by the altimeter. One side is higher by about 1.5m, the other side 0, and in between is the Kuroshio Current, highlighted by arrows obtained by looking at the gradients. We see the current, but also eddies and other things apparently moving back westward, as well as the mainstream with its meanders. This is not a simulation but comes directly from the measurement data. In 100km there is a drop of 1.5m. The same effect with the Gulf Stream: over 100-200km about 2m, and where the maximum slope is, there is the main body of the current; again real data from the altimeters.
Excluding a few early missions lacking the requisite calibration, the later missions are quite consistent, giving 3.1mm per year as the mean SL rise globally. Within that you can look at the trend region by region. Around Indonesia SL is going up fast, about 1cm per year. Around California it is going up, but not as much, and there are changes in the currents. For the Gulf Stream, the southern side of it is going down and the northern side going up. On average the current is going slower, less transport.
We are moving these kinds of measurements closer to the coast, for a number of reasons. When the altimeter beam impinges on the coast, the return pings are not easy to interpret. They get corrupted when land intrudes on the footprint of the altimeter, so there is more work to be done on interpreting this. Over the last 13 years we have had a group of about 100 colleagues around the world working on coastal altimetry. It's a good thing we've 25 years of data to
work with. There are many stretches of coast in remote regions that have no tide gauges, no instrumentation. Or if there is one, it was only placed there in the last 5 years, say. Altimeter returns start getting corrupted perhaps 10 to 30km from the coast. We always compare S and tide-gauge readings, and there is good agreement, despite not measuring in the same place: conventional altimetry stops 30km offshore but tide gauges are right on the coast.
People will be using this coastal monitoring to explore coastal currents, for sediments emerging from rivers, or for disaster management, such as whether an oil spill will affect land or swamp or tidemarsh areas. A killer use for it will be the study of storm surges, resulting in coastal flooding.
Storm surges, like Hurricane Katrina's, are among the most deadly of natural phenomena. Images relating to remote-sensing observation of Katrina: the bathymetry, the shelving towards the coast. There was an overpass of a Chinese altimeter that evening, showing a flat sea until getting to the shelf, and then from the altimeter data you see the big piling up of water. The S passed perhaps 100km away from New Orleans itself, but the surge was apparent far away, about 1m where it was observed. Such a pass was fortuitous; unfortunately you cannot arrange one on demand.
If you are lucky enough to have such an overpass, then the profile of the surge becomes very obvious. Very important for people that produce models for predicting storm surges: they can check the validity of their models. Improvement in such models will give better warning of the areas that will be flooded and those not flooded, so as to evacuate people or not.
In 2012 Hurricane Sandy flooded New York .
In 2015 there was the Santa Claus storm over the North Sea. In the Danish Straits we had an overpass of an ESA altimeter where the surge was. The image shows the profile of the SL. You see the piling up of water for a storm surge of about 1.5m. Comparing this with the model result produced by Danish colleagues for the same time: good agreement.
Strong surge one side of some islands and lower on the other side,
a good validation check.
We can look at the finer scale of the mid-ocean. So the Gulf Stream: sea-surface temperatures, chlorophyll from other ocean sensors. Fine-scale details, important for biologists, come from altimeter data. There are interactions between the physics and the biology that are important for life in the oceans: local upwelling of nutrient-rich waters from the deep stimulates growth of phytoplankton, the base of the food-chain in the ocean. Another image, off South Africa: sea surface temperature and a chlorophyll map with interesting things close to the coast. Places where cold water is coming up from below, near the coast, stimulating growth around there.
Another mission, to be launched by NASA in 5 years: the Surface Water and Ocean Topography mission. It has an altimeter but also 2 long booms with antennas, using interferometry, also sending pulses to the same surface of the ocean. In a sense they get a stereoscopic view, giving a measure of the relief, the topography, of the ocean.
Another clever technique we are moving to: using GPS to measure SL. We put a GPS receiver in space and we receive the reflections of GPS signals off the sea surface. The GPS Ss are about 20000km up. They are there already, the signals are there already, all the time, and they also bounce off the sea. So with a lightweight, cheap receiver in space you get these reflected signals and also the original direct signals. From triangulation I can start estimating where the sea surface is, and its height. Not just the height: it also carries info on the sea state, for waves and wind. This is promising as it's a light, relatively simple receiver; it is passive, just receiving what is already there. It does not have to generate its own pulses, so low power requirement.
So put 8 or 10 on the same rocket, spread them around in space, and you will get a lot of measurements. On 12 Dec 2016 NASA will launch a constellation of 8 GPS receivers; the main reason for this CYGNSS mission is to measure winds in hurricanes. Hurricane forecasters need wind info from near the eye of the hurricane, to predict strength and direction of travel. I have a personal interest in using CYGNSS for doing SL measurement. With relatively low launch weight, a relatively low-cost mission, we hope to get a lot of measurements.
So we can take the "pulse" of the planet , monitor the oceans
for SL and climate change.
What is the wavelength of the radar?
For altimetry we use Ku band, 13.6GHz, which is not too affected by things in the atmosphere, but we still have to compensate for some of them. It allows good measurement precision, and the return is nicely sensitive to the wind, so gives a nice estimate of the wind. One of the missions is called AltiKa, Ka for Ka band, which is about 36GHz, a French instrument on an Indian S. It is more precise in its measurement, but it is likely to be more sensitive to rain in the atmosphere, so trickier to use in the likes of the tropics. In practice we're stunned by this instrument; it works extremely well, and we can compensate reasonably well for the problems we get.
My guess is that in the next decade or 2 most altimeters will be using that range. Usually we make altimeters with 2 frequency ranges. By looking at the difference in return signals, you can compensate for the effect of the ionosphere. The free electrons there have an impact that very much depends on frequency: at 13.6GHz, every few pulses sent down, you send down a pulse at say 3GHz. By looking at the differences you can estimate the effect of the ionosphere and correct for it.
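A sketch of how that two-frequency combination removes the ionospheric delay, which scales as 1/f-squared; the range values below are invented, and the lower frequency is assumed here to be a typical C-band channel:

```python
# Dual-frequency trick: measure the same range at two frequencies and combine
# them so the 1/f^2 ionospheric delay cancels.

def iono_free_range(range_f1_m, range_f2_m, f1_hz, f2_hz):
    """Combine two ranges so the 1/f^2 ionospheric delay drops out."""
    return (f1_hz**2 * range_f1_m - f2_hz**2 * range_f2_m) / (f1_hz**2 - f2_hz**2)

f_ku, f_c = 13.6e9, 5.3e9   # Ku band as in the talk; lower channel assumed C band
r_ku = 1_000_000.010        # Ku-band range, slightly delayed by the ionosphere
r_c  = 1_000_000.066        # lower frequency sees ~(f_ku/f_c)^2 times more delay
print(iono_free_range(r_ku, r_c, f_ku, f_c))   # ~1_000_000.000, delay removed
```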
A lot of interesting radar aspects in altimetry.
I can see that with a swell going through an ocean your radar, in a biased fashion, will pick up the concave troughs rather than the convex peaks?
Yes, we have to compensate for that. Another correction of the many that we have to apply; it's called the Sea-State Bias. It is due essentially to 2 things. 1/ What we are trying to measure over the footprint of the altimeter, for one of its pulses, is the mean sea surface. The return signal is not really the mean but more related to the median of the sea state, the half-point of the distribution of heights. This matters because wind-waves are not perfectly sinusoidal: they are peakier, with flatter troughs (swell tends to be more sinusoidal). 2/ In pure electromagnetic terms, at the peaks the energy is scattered away so we get a lower intensity return, compared to the flat bottoms of the troughs. Without compensating for that, the altimeter will over-estimate the range and report a lower SL than in actuality.
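A toy numerical illustration of that bias (not the operational sea-state bias model): for a peaky-crest, flat-trough wave profile the median surface sits below the mean, and weighting the brighter troughs more strongly pulls the radar's apparent surface below the mean as well. The wave shape and the weighting factor below are arbitrary assumptions:

```python
# Toy sea-state-bias illustration with a Stokes-like sharpened wave profile.
import numpy as np

phase = np.linspace(0, 2*np.pi, 100_000, endpoint=False)
a = 1.0                                         # wave amplitude, m (assumed)
eta = a*np.cos(phase) + 0.2*a*np.cos(2*phase)   # sharper crests, flatter troughs

weights = 1.0 - 0.1*eta/a        # crude "troughs reflect more" weighting (assumed)
print(eta.mean())                # true mean surface ~ 0
print(np.median(eta))            # median sits below the mean
print(np.average(eta, weights=weights))  # radar-weighted estimate, below the mean too
```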
In the error budget, the error uncertainty for altimetry, the SSB as of today is perhaps the most important term in the modelling. Variability of temperature, humidity etc of the atmosphere is less of a problem. There is a correction you have to do for the gases in the atmosphere, O2 and N2. In terms of range that correction is of the order of more than 2m. But we know how to model it very accurately; this correction is extremely successful and we can remove it almost completely, with a residual error of the order of mm. The SSB effect is smaller in magnitude, 10 to 20cm, but we cannot model it to such good precision.
The residual error for SSB of about 2cm is a major contribution
to the error budget.
The poster for this talk shows the 200-year-old tide gauge record at Brest on the Brittany peninsula in France. Could you explain: 200 years on the same spot, new gauges presumably, but no one has messed about with datums. For the first 100 years it's virtually flat. I know that in the last 30 years or so, if you look at the BODC tide records for the UK, Lerwick shows 30mm of mean SL rise but Portsmouth shows 170mm, from the isostatic rebound effect. Lerwick is rising and Portsmouth sinking; presumably for the UK the average SL rise is somewhere between those 2 numbers. Brest is not that far from Pompey, and Pompey has not suddenly started sinking presumably, so what does that flat bit of the Brest tide record represent?
I would say this plot is fairly representative of what has happened to the global ocean level. There may be local land movements for this particular site. Then it starts increasing more rapidly. Interpreting a single tide gauge is a work of art, because only recently have gauges been fitted with GPS.
What was that gauge doing in the 1800s? You have to make a lot of assumptions. If in one area you have a rate of isostatic rebound, perhaps it was pretty much the same in the 1800s, but then there are other things. In the vicinity of towns or industrial or agricultural areas, in the 19th century they could have been pumping out a lot of water from the water table. In Venice there was a big problem in the 1940s/50s/60s because a huge petrochemical plant close to Venice extracted a lot of water, used in the plant, resulting in a lot of subsidence. Very few of the current crop of tide gauges have been calibrated with altimetry.
They are not used because of the cost, and many would not have the land vertical data as accurate as you would need for multi-gauge site comparisons. We've looked into this carefully and you still end up with discrepancies that are difficult to explain. Even if GPS is used at a gauge site, it's only for the last 10 years and you don't know what was happening before.
So altimeter-based SL rise globally agrees with the tide-gauge records, but when you look regionally it's difficult to get a good match. With coastal altimetry we are trying to fill that gap; at the same time our tide-gauge colleagues are getting more knowledge of local vertical land movement. Every country uses a different reference system for tides, usually a local reference: the UK has one, Italy has one. It seems sensible to bring all these separate systems into one international reference frame (ITRF?), which we should do if we want a global comparison.
Monday, 12 Dec, 2016, Dr Catherine Mercier supported by Dr Frank Ratcliff , both of Wessex Academic Health Science Network, Southampton:
The 100,000 Genomes Project , focus on rare disease and cancer.
1 3/4 hr, 27 people
Wessex Academic Health Science Network is a facility for driving innovation forward into the NHS, and Dr Mercier is a clinical geneticist at Soton General Hospital. The talk will be about the 100,000 Genomes Project and the wider question of whether you'd have your genome sequenced.
Dr Catherine Mercier
I believe we're in the midst of a revolution. If you think of the industrial revolution, it did not happen overnight, taking about 100 years for the changes to come to the fore. I think there are similarities with the genomic revolution. In 1953 the structure of DNA was discovered by Watson, Crick and Franklin.
I believe a doctor in 2050 will still be looking at their patient's medical records, blood pressure, what medications they're on, but also at on-screen info on the person's genetics or even their genomics: the entirety of the DNA, the coding and non-coding parts in between, and how they interact. So in my career I believe genomic medicine will become much more mainstream.
We are made of trillions of cells, and within each cell is the nucleus, which contains 23 pairs of chromosomes. Take a single chromosome and unwind the DNA, and along the string is gene after gene. Genes are important as they encode proteins; essentially we are all made of different types of proteins. The gene is the smallest unit of heredity; there are about 20,000 in the human genome. An onion has 4 times that number. I'm a clinical
human genome. An onion has 4 times that number. I'm a clinical
geneticist , I see individuals and families , who have a conditipn
we believe to be due to an alteration in their DNA.
Perhaps a mistake in a single gene. Perhaps a child with multiple
congenital abnormalities , prhaps heart abnormality . Someone with
absent thumbs I saw recently. I have to try an find an underlying
cause why those abnormalities can be found together.
I also specialise in cardiac genetics . I look after families
with hypotrophic cardiomyopathy , abnormal thickening of the
heart walls and heart muscle pump fails to work as well
as normal and predisposition to abnormal heart-rythyms.
These are the sort of cases where an apparently fit and healthy footballer collapses on the pitch. Often they are inherited cardiac diseases. So a family member says: it happened to my brother, what is the chance of it happening to me? Will another child be affected in the same way as a first affected child?
I think about DNA and chromosomes. A child might have a whole extra copy of a chromosome, like trisomy-21 or Down's syndrome. When I started in genetics, the best way of looking at somebody's DNA was basically to look down a microscope.
If you look at a cell at an appropriate stage of division ,
you can see the chromosomes. You can see whether there
is an extra one or one missing or even perhaps a chunk
of one missing, causing the diagnosis.
So to look at greater resolution, gene by gene I would have to look
at the patient and think which of the 20,000 genes might have a mistake
in it that is causing the problem. I'd think of them 1 by 1 and send them off for Sanger sequencing. The answer might take 3 months and be a yes; if no, then go back and rethink which was the next best candidate.
A very time consuming process and all the time families are waiting
for an underlying molecular diagnosis.
Recently the tech that allows us to look at DNA has changed unrecognisably. Instead of looking down a microscope at relatively little detail of what the chromosomes look like, or at 1 gene at a time, I can now ask for the entire genome, all 20,000 genes and the DNA between them, to be sequenced, and done in about 48 hours. That is the same test that took the Human Genome Project 10 years to do. The first sequencing was an international collaboration, millions if not billions of pounds.
We can now get that data overnight, that is why things are happening
fast in the world of genomics, the tech has changed so much.
The cost of sequencing the genome: 15 years ago it was of the order of 100 million; we now talk of the 1000-dollar genome.
I used to put out my rod and fishing line and ask for alterations in
one specific gene and get 1 result back.
Now when you sequence an entire genome: we've all got alterations in our genome that make us human. Part of my job now is to sort out which of these genetic variants are disease-causing and which are just part of the normal human variation that makes each person unique. So in some ways my job is easier as we can sequence more, but also harder as there is much more interpretation. It does mean it's an exciting field to be part of.
My work at the hospital is with patients who have had years of investigations, perhaps starting as a newborn. Some we've been seeing for 10 years and we still don't know what's causing their problems. We know it's likely to be genetic but not know exactly which gene change is to blame.
There is a support group called SWAN, Syndromes Without A Name.
They are parents of children of whom doctors can only say: I'm sorry, I don't know what this is, I don't know the name of it, I don't know what the recurrence risk is. That is very isolating for a child with disabilities, for whom no explanation can be given.
With the technology changing and families out there who badly need a genetic diagnosis, in 2012 there was the launch of the 100,000 Genomes Project in England. A government-funded project through the NHS: it's not research and it's not, as such, mainstream medicine. It's what's called a transformational project. We are working on having genome sequencing incorporated into the mainstream of the NHS, hopefully as a legacy on completing this project.
We're sequencing 100,000 genomes; it's not quite 100,000 patients. The project is split into a rare disease arm, the sort of patients I see, but also a cancer arm. We bid at University Hospitals Southampton in a competitive bidding process and were chosen as 1 of 15 hospitals to host a genome medicine centre, in 2015.
As well as hospitals being involved, there are also industry partners, as it's realised that it won't be the NHS that goes on to do drug development, for example. This will run in parallel with all the extra data we are creating. The NHS is not resourced for developing new medicines, so there are partners in private industry also. It's not just the UHS area we are recruiting patients from, but around Wessex: Portsmouth, Basingstoke, Winchester and perhaps Bournemouth.
The most important thing for me as a doctor: I'm hoping that the many of my patients whom I know to have an underlying genetic cause, but for whom we have not been able to find it, will, if enrolled in this project, get an answer. It is important that the process is transparent and involves a clear consent process. We spend about 40 minutes with a patient at the outset explaining what it means to have your genome sequenced, with chances to
answer their questions. We are about the first healthcare system in the world going about this. I was talking to some colleagues at a recent German conference and they said they could never do this, as their healthcare system is not joined up enough: we could not get the right people to talk to one another. But with the NHS the data will be stored centrally and hopefully the benefits will be huge. We hope to find some new genes along the way. They are always there; it's just we don't know what they do. Every week the scientific literature gives the name of a gene and what it does: is it a cause of intellectual disability or some unusual familial condition.
Hopefully with all this data will come a lot of medical insights. It's possible we will start to stratify patients according to their genome. About 15% of hospital admissions have an adverse drug reaction involved at some point. If we could find out what it is about a person that causes a bad reaction to a drug, we could avoid giving them that drug. That would save significant morbidity and also save money. There is a particular HIV drug, 5% are
super sensitive to , and if you have that sensitivity , that
genomic signature, that medication is not used, so this
is already happening.
The project is also being used to stimulate the UK genomics industry. We are hoping for patient equity across the country: every patient with a rare disease, or a particular type of cancer, has access into this project. Half the project is recruiting people with
rare diseases. A rare disease is something that affects less than 1 in 2000 people. You may not think that is a huge healthcare burden, but there are 1000s of rare diseases, so many that 1 in 17 of us will have a rare disease of some kind. So at least 2 people in this room. Looked at as a whole group, rare diseases are pretty common, so an important healthcare burden. 80% of rare diseases have a genetic cause, and genetic diseases are still the largest cause of death in the first year of life. The other half of the project is enrolling patients with various cancers.
Cancer, essentially, is due to DNA errors. We're born with our germline DNA; there are certain cell lines that will continue to divide throughout life, such as skin or gut or lung cells. The instructions telling the cells how to divide are in our DNA. But if you accumulate mistakes in your DNA, from too much sunlight or cigarette smoke or poor diet, then the instruction manual is damaged and poorly regulated cell multiplication can result, and a tumour.
If we can learn the genomic signature of those dividing cells, we can much more exquisitely target treatment. Again this is already beginning to happen. In non-small-cell lung cancer we routinely look for mutations in the EGFR gene and stratify treatments accordingly. At UHS we're including patients with breast, prostate, colon and lung cancer. We take DNA from their germline and compare it to the DNA in the tumour, with the mistakes in it. That's why the 100,000 GP is not quite for 100,000 people: people in the cancer arm of the project will have 2 genomes.
If we have a suitable patient: see the patient in clinic, go through the consent process in detail, take DNA from the patient. If they appear to be the only person in the family affected and it seems to be a recessive disease, we take blood from the patient and their parents. Basically it's a complex spot-the-difference puzzle. With dominantly inherited conditions we want to get as many samples from affected family members as possible, then spot the genetic change that tracks through a family with that disease. For the cancer arm: a blood sample and a tumour sample for DNA.
We also need lots of patient medical records or data, because interpreting those genomic variants is impossible without knowing what kind of job is done by the gene that has the mistake in it. So we have to marry up patient details and DNA samples. That involves inputting data about people's medical history. The DNA is all sequenced in a super-factory near Cambridge, the Sanger Centre; the results are fed back to our lab in Salisbury. Then the doctors involved with recruiting patients will be involved in partial interpretation of the results and feeding them back to the families. We are hoping that the diagnosis rate will be about 25% for rare disease. That is quite a big uplift, as many of these patients have had numerous investigations before.
The main piece of info back to a patient will answer the diagnosis question, such as why does my child have an intellectual disability. We also give patients the option of additional findings being fed back to them, such as gene changes we know to be associated with other separate diseases but for which management is available. Such as a high-risk bowel cancer gene or breast cancer gene. We feed back such info only if that is what the patient would like and only if it's an actionable condition. So not incurable neuro-degenerative diseases with no known treatment.
If parents are considering further children , they can opt for
carrier status of certain conditions fed back, such as CF or
some X-linked conditions. This is determined in the consent part
of the enrollment.
Dr Frank Ratcliff.
The project is about building a UK genomics industry, building on research so we can link medical records to genetics and outcomes. It's also about bringing improvements to patients.
2 videos of people as part of the project
A family with a newborn child and they're immediately told
that there is nothing we can do to help, serious medical issues
and there is no help. But their attitude was that if you just have
one day of this life , then plant a seed for others. So they joined the
project to help research and the body of knowledge, even knowing that
it would not help them at all.
A survivor of aortic dissection, who has the potential benefit of
joining the project. Because if the gene behind it can be found then
they can quickly and easily ask whether his sons carry the gene and
if they do, then there is preventative action that can be taken.
So a potential benefit within the family .
A third story involving epilepsy: going round many hospitals and seeing many consultants, with multiple, often daunting, tests: MRI, lumbar puncture. At the end of the day undiagnosed, and Jessica was having doses of anti-epileptic medicine to try and control the epilepsy, and that was not working. The only option going forward was to increase the dose: powerful medicines which have significant side effects. The family had the chance to join the 100,000 GP. The questions were: can I get a diagnosis, can I get a treatment, and for the parents, if they had another child, would that child have a normal risk of epilepsy or the same risk? A small amount of blood was taken from the child and the parents, and whole genome sequencing done for all 3 people. That produces an awful lot of data.
We have about 3 billion letters in our genome. The 20,000 genes are about 2% of that, swimming around in there without any punctuation or paragraph marks. If we picked two normal healthy Caucasian males there would be about 3 million differences between us. In Jessica's case there were about 6 million differences; we were looking for something she has but neither parent has, as both were healthy. Out of the 6.4 million differences, 700,000 were known to be rare, about 3000 would affect a protein, 67 were not shared with her parents, and 1 was linked to a gene previously associated with epilepsy.
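A sketch of that filtering funnel as it was described; the field names and the rarity threshold below are hypothetical, not the project's actual bioinformatics pipeline:

```python
# "Spot the difference" funnel: millions of variants down to a handful of candidates.
def candidate_variants(child_variants, parent_variant_ids, disease_genes,
                       rare_threshold=0.001):
    """Filter a child's variants roughly as described in the talk."""
    rare = [v for v in child_variants
            if v["population_frequency"] < rare_threshold]       # ~6.4M -> ~700k
    protein_changing = [v for v in rare if v["alters_protein"]]  # -> ~3,000
    de_novo = [v for v in protein_changing
               if v["id"] not in parent_variant_ids]             # -> ~67
    return [v for v in de_novo if v["gene"] in disease_genes]    # -> 1, with luck
```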
It was a gene that encodes a protein which moves glucose from the blood into the brain, a glucose transporter. If you can't move glucose into the brain, the brain does not get its normal energy source, and so the symptoms of epilepsy were symptoms of hypoglycaemia, as a diabetic would have. Only 500 cases of this gene defect are known globally, so almost any clinician is unlikely to see 1 such case in their life. Certainly not 2, so without a large database no clinician has any learning to go from. There is, out there in the literature, some evidence
of a treatment. The body can make an alternative fuel for the brain from fat, if you don't have a source of sugar. So she was switched to a low-carb, high-fat ketogenic diet, which provides an alternative energy source for her brain, and she is now a lot better. She has some symptoms from a brain starved of glucose for her first few years. It is similar to the Atkins diet but goes a step further. Her genetic changes are not shared with her parents. Perhaps they were healthy because it was a recessive gene and both passed on that gene, giving a 1 in 4 chance of a child having the same outcome. But that was not the case; it was a spontaneously arising mutation, so now the parents are confident that if they have another child there is little chance of it having similar epileptic symptoms.
Not always such good news but shows what can be done.
Do you think it's an opportunity to take control of your health, or would you play ostrich and hide your head in the sand?
The clipboard passing round asks: would you like to have your genome sequenced or not?
No one has put down "No" in this outing of the survey.
A printed volume for chromosome 21, one copy, the smallest
chromosome we have. Printing is double-sided , narrow margin,
4 point font.
Q: how do you know it's right?
The first time we showed one of these books, at an exhibition, someone turned up, looked at it and said there was a mistake, pointing to the precise place.
.......... It's printed upside down.
The printers had bound a page upside-down. In DNA that is called an inversion, and that can also cause symptoms.
The genes that encode for protein in this printing
are in uppercase. If the whole thing was printed, in the same print
size, it would be 130 volumes.
This chromosome is the standard, normal one used for academic research.
So why do most people think having their genome sequenced is a good idea, and why do a couple of people not?
If you have huge numbers of datapoints , it would be interesting to look at
gene type clusters. One group of people who appear to be perfectly
normal and another group also normal, but not quite the same.
Could that show some kind of evolution ?
So you would donate your genome for research purposes, to improve
the knowledge about what is normal. I think a number of people
join the project for that reason.
If it costs 1000 dollars to get it done, then why not get it for free, assuming you're allowed to download the raw data?
You have to pay extra for the raw data but you can get it. No one has asked for it yet, so I don't know what the charge would be. The difficulty with having the 3 billion bases on a disk is having the bioinformatics pipeline to interpret it.
Aren't there sites out there to interpret it in some, maybe limited, way? So if you have your genome on a disk, you could upload it somewhere to ask where the differences are in my DNA compared to everyone else's.
The challenge then is that between you and me there will be about 3 million differences. You'll find 3 million changes, but how do you find out what matters? That's where you need clinical skill and a bucketload of medical history and case notes.
So perhaps a Wikipedia-type structure listing all the relevant changes?
So what is relevant, without medical history? There are SNPs ("snips"), Single Nucleotide Polymorphisms: locations in the genome where we know there are differences between various populations. Some SNPs are associated with propensities for certain disease types; 23andMe is like that. Send a swab off to them and get a SNP report. We don't use them in clinical practice at all. So it's difficult when someone says to us, can you interpret the data from 23andMe, when it's not a test we use. It can be interesting, but there is not enough knowledge around it to have proven clinical utility for the NHS.
On the no-thank-you side:
I could go into a long spiel about the difference between the agenda of the patient and that of the doctor, which are very different. Essentially, the reason is: if I'm well, I've no interest in getting any investigations done, or even having anything to do with any doctors?
Personally I'm also not unwell and would take that view as well, and say I'm not sure that I want to know that I will get cancer in 20 years' time. Because I'd be eating well, not smoking, exercising regularly anyway. So it wouldn't change what I do.
It's also the fact that, as you get older, you will die from something, so don't worry about your health and waste time going to doctors; spend more time getting on with life.
There is a balance; some conditions are so much more treatable than they were.
I said no, partially to get a response, because I'd quite like to know what could be useful to me. I'm not convinced at the moment that there are predictive tests coming from genes, which would mean I could look out for the bowel cancer in 10 years' time or become aware of an issue that I would suffer from?
Many patients come forward because they are in a very different situation: they know they have something to be found, and are even desperate to find out.
For the people we talk to who are not affected by rare disease, the concern looms larger because the question is less pressing. We probably do something like a risk/benefit analysis, though not putting it in those terms. If there is no benefit to you or your family then there is just anxiety. Will they discover a gene for anxiety? Will I get Parkinson's when I'm 50, do I want to know whether I'll get Parkinson's at 50? But if one of my kids was ill, then all those such concerns would go out of the window.
So many illnesses are a combination of genetics and interaction with the environment; how far down that route do you go?
There are a lot of conditions for which we have a genetic risk and then it's to do with the environment whether we actually express that condition. The project is not looking at the environment; we are looking for really strong genetic factors or absolutely causative ones, but there is a lot of work to be done on that.
Would that be epigenetics?
We're not looking at epigenetics.
Genes change throughout a lifetime, things are turned on or off, so epigenetics?
Not just epigenetics. I only made this arm once; all the genes needed to
make arms were active only once. Teeth I need twice and hair I need
all the time. So different genes are switched on and off at different
times and in response to illness a whole different suite of genes
is switched on and that is probably a key part of the lower case
text in this book. Between genes are the switches, some are on/off
switches some are dimmer switches. But most of our genes are not
used most of the time.
I was also wondering about the spontaneous changes throughout a lifetime; maybe you had a gene sequence done as a child, would you want one when you are 60?
At the level we are looking at the moment, we wouldn't find
differences like that, apart from sequencing a cancer genome,
which definitely would be different.
I read something about a study of identical twins, presumably originally identical, but they had changed due to different lifestyles?
We accumulate mutations. If one twin smoked, then they would accumulate mutations in their lungs much faster than the non-smoking twin. We would not sequence people at birth and then later, but we may sequence a tumour and normal tissue from the same patient and spot the differences.
Are you suggesting you can resequence parts of our body?
The sequence of the cells in my left hand should be the same as in my right hand, but if I had lung cancer in one lung then the sequence of that cancer would not be the same as the sequence of the other lung, because cancer is a genetic change that causes undifferentiated cell growth.
But in terms of medicine, is it possible to get in there and resequence?
When you say resequence: when we say sequencing we mean identifying what the sequence is; we can't go back and change it, we can't edit it. We can't set it back to zero.
That sort of technology does not exist?
It probably is coming , but not currently.
With cancer there are multiple genetic changes in a tumour. It's not a question of a single mutation and then you get cancer. Cancers are dividing rapidly and accumulate additional mutations all along the way. It's a complex game of catch-up trying to keep on top of a cancer's mutation load. If it was a single mutation then that would be easier, but it's not.
So another interpretation of the term mutation is gene change?
I think I've heard that racial differences are in fact just due to very small differences in the DNA. So if we compare an Eskimo with an Aborigine, are there big differences, does it make interpretation harder?
The differences are genetic: why Eskimos look like Eskimos. Take a rare disease family as an example. Go back to the 100,000 genomes being sequenced, about 50,000 in the rare disease arm. That is something like 17,000 patients plus 2 close family members each. So the comparison we are doing is between an Eskimo child and an Eskimo parent, playing spot the difference, rather than comparing an Eskimo with a Glaswegian.
The proportion of your DNA that reflects your appearance
is a tiny proportion.
I expect there is more to racial difference than simply appearance, but you're saying that even so the differences are small?
Yes. And for the patients, the comparators are close relatives, so they'd be sharing almost all their genes; we are then asking which genes are similar between patients with the same disease that are not present in patients without the disease.
With cancer patients you took cells from the tumour and DNA
from something else, bilateral or ???
You want germline DNA, the DNA you're born with, and that is in every tissue of your body; it's just that blood is the easiest one to get hold of. We use the DNA in white blood cells.
Is there work to take non-invasive tissue samples?
For the types of investigations of patients we are seeing
, in the scheme of things, a blood test is relatively non-invasive.
There is some work to see if you can use saliva , but the DNA from
that is not so good quality. We sometimes try that if we have
a needle-phobic patient, or children, whom it is difficult to take a blood test from; we sometimes have a stored DNA sample.
For the kind of tests I do , a sample is stored pretty
much for many years.
How much blood?
We take 4 tubes with a few mL in each, so about a tablespoon
Is this process limited to 1-to-1 mapping, 1 gene to 1 condition, or is there perhaps a mathematical limit on how many multiple genes apply to one particular condition?
Primarily we are looking at conditions that are monogenic, one gene for the disorder. We are also learning that more and more conditions are perhaps polygenic, in 2 or more genes. In the work I do with cardiac genetics we're seeing that quite a lot. So 1 significant risk factor gene, then another gene variant, and added together they may reach a threshold effect whereby you get the condition.
Is it possible to have say 10 genes affecting one condition,
and you wouldn't actually pick that up?
Absolutely: look at height, perhaps controlled by 100 genes, and childhood nutrition as well to complicate things. But then mapping back which genes contribute to the height is too complicated within the scope of this project. We know some genes: say classical achondroplasia or pescle? dwarfism, a single letter change in a single gene and you go from an adult of average height to someone with achondroplasia. But other conditions like coronary heart disease are due to factors in multiple genes that are additive and work together. Different conditions work in different ways. Many common diseases are due to multiple variants in many genes.
A slide covering people's responses from other such talks as this:
Would you want to know your risk of disease, or would you like to carry on enjoying life?
Would you want to be reassured, on the flip side, if the answer was that you're going to be healthy?
How would you share the info with family, if you needed to? (I've got 2 kids but also 3 siblings; if I was sequenced and found out that I'm likely to get Parkinson's when I'm 50, what do I tell my brother and sister, because there's a 50:50 chance they share it as well.)
Would it change your self-perception, your behaviour, your lifestyle, and should it, as we all should be living healthy lives anyway?
Have insurance companies started taking an interest in this?
There is a moratorium at the moment and there has been for a long time. At the moment they all comply with the UK moratorium, in that they will not ask if you have had a genetic test and they certainly will not ask for the results. But they can ask the simpler question: do you or your parents have any of the following conditions? So they can get genetic info without asking about any sequencing having been done. That is only a voluntary moratorium, it's not statutory law, so they could decide not to follow it and call the moratorium off.
Is that true around the world?
We don't know the answer to that.
How might primary care change?
There will be some diagnoses that emerge that will have a small impact on primary care, as many patients don't have rare diseases. In time, with work on pharmacogenetics, then in primary care: this particular drug is normally prescribed at 20mg, but this patient, post sequencing, would require less of the drug. In the longer term there would be a more tailored drug policy.
It's not just sequencing us but sequencing the disease. A few weeks back I was holding a small DNA sequencer produced by an Oxford company; it fits happily in my hand, linked via USB to a laptop. It uses a tissue fluid sample; it's not running whole human genome sequencing. They took that out to west Africa last year, sequencing patients to assay whether a sample contains Ebola. In the situation there, someone would walk in and say: I might have Ebola. Up to then, that person would sit in a tent, and if you were still standing up in 21 days, you didn't have it. If you did have it, then you're dead. With the sequencing, you can sequence for the infectious agent and in 2 hours you can say: positive, or you're free to go. Treatment situations go from the likes of Ebola to: have you a viral infection, so take some paracetamol, or have you a bacterial infection, then we'll give you antibiotics, and by the way we know which antibiotics will work.
What about long term conditions and things that are more multifactorial,
you have a percentage risk of something ?
There are some subtypes of diseases where the management will change.
Diabetes, we used to think was type1,early onset, and type2, later onset
associated with increased weight. We are now learning there are certain subtypes
of type1 for which the treatment is different. We had a boy at 16
diagnosed with type1 , given insulin 4 times a day as his treatment.
His blood sugar control was terrible, a huge impact on his lifestyle.
There was a family history of diabetes and eventually they had genetic
testing and found out he has MODY, or maturity onset diabetes of the young.
And the right treatment for him was not insulin, but sulphonylureas,
an oral medication. So he came off the 4 times a day injections, and his
blood sugar control is much better.
I think we'll be sub-stratifying some common diseases like this.
Learning more about tailored treatments, and that would filter
down to primary care.
Is the project part of interfering with nature?
Will the genome technology, is there a chance of it interfering
with nature's natural processes? ??? dilution?
Eligible adults can choose to find out, as a result of this project,
whether they are carrying a gene that does not give them
symptoms, because it's recessive, and if they are carrying the gene
, they could also have a potential lifepartner carrying that gene also.
They could then make reproductive choices , knowing of the 1 in 4
chance of a child having that monogenic trait.
So yes it is possible to change natural processes.
We have to accept that medicine is interfering anyway? There was a
recent article about people having Caesareans, are interfering with
nature, as there are now a lot more people with narrower birth
channels, than there were before?
So selecting against.
With small handheld devices in the future , do you see a point
where DIY home testing will become cheap and accessible enough for
people to hack around sequencing?
The Oxford Nanopore ? , I don't know the cost, but wouldn't it always
be cheaper to just send your sample in.
You could go around sequencing all sorts of stuff, ants , beetles?
The Star Trek Tricorder.
There's bound to be people who'd like to mess around hacking this stuff,
these biomes, a new hobby?
It only tells you what's there, it doesn't enable you to change anything
Prior to the genome technology, I never found out, whether in the
medical community or among researchers themselves, whether there was ever a
system equivalent to a Google search engine, where with exact medical terminology
for the clinical expression of some condition, otherwise an unknown
rare disease, you put it into some database search engine and come out with
possible diagnoses? You go to a medical clinician, fully versed
in all the correct terms, bung in the clinical features and out comes,
perhaps ranked, possible conditions?
There are a couple of databases , that are free to use. One called
OMIM, Online Mendelian Inheritance in Man. Its not as refined
as you were suggesting, but you can put features into that and it
will give a list of a number of genes. In clinical genetics
we use databases such as the London Database of Genetic Conditions
and we can do just that, enter perhaps 5 features and get it
to tell me all the syndromes that have those things linked to it ,
and a drop down list. They are under licence and expensive.
That would contain all the medical literature , going back to
the 1970s or 60s ?
There is Pubmed, available to the public. You can put in the
search box the relevant feature, and it gives a list
of publications that have those keywords in.
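A minimal sketch of that kind of keyword search done programmatically, using NCBI's public E-utilities interface to PubMed (the search term is just an example, and this is intended only for light use):

# Keyword search against PubMed via NCBI's public E-utilities "esearch" endpoint.
import json
import urllib.parse
import urllib.request

def pubmed_search(term, retmax=10):
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "retmode": "json",
        "retmax": retmax,
    })
    url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]   # list of PubMed IDs (PMIDs)

# e.g. articles mentioning two clinical features together
print(pubmed_search("joint subluxation AND aortic dilatation"))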
Extending on from that, is there a halfway house , for just ordinary
people, a sort of reverse dictionary. Those facilities are great if you
have the exact medical terms and specifics of clinical details to
search on. Is there a halfway house, where an ordinary person
can put in vague ordinary English terms and get out specialised
medical terms for them? I'm aware of some specialised terms like
subluxation , supination and pronation which Joe Public
wouldn't know , but you could eventually zero in on those
exact terms and then progress to OMIM and Pubmed?
Part of my experience as a clinical geneticist , is learning
which of those terms are likely , when confronted in a patient,
could mean an underlying diagnosis. I'd talk to them and ask
about features that I know to be related to that kind of disease.
So a lot of experience and also the tools. There is a system trying
to unify the descriptive terms that are used, HPO, the Human Phenotype Ontology,
trying to standardise those terms across databases and websites,
so the communication between doctors is clearer.
The money allowing this project to progress, where does it
come from, and is there a chance of selling the data derived from it?
So what are the economics?
The project is funded by NHS England, in the region of
550 million. A lot of money, but the people with rare diseases have
a long term condition and especially if you can impact them young,
it's not difficult to imagine the cost savings covering the cost of the project.
11 pharma companies have already paid for access to the data,
a quarter of a million pounds each. Significant sums but not against
550 million. They've paid to see an anonymised version of the
data and to run some analyses. They can't copy the data ,
they have to run their analyses on NHS England servers.
So the model is that its a reading library , not a lending library.
Then if they discover anything , they still don't own it .
They will still have to buy anything that they then discover off NHS
England. At a price that would reflect the value of that discovery.
So not one price fits all . The aim of the project is not to make
money , the aim is for patient benefits. The NHS is not in the
drug discovery game.
I was just concerned the NHS could discover something very
valuable and who would share those profits?
The knowledge we accumulate will be exportable. We're leaders in
genomic education and we're being asked, by others around the
world, to teach and share what we've learnt. There may be a revenue
stream there, but not the really big sums that drug companies make.
So what happens to the data, who can see it. The patient-identifiable
data, with a name on it, only comes back to the clinical geneticists
who are looking after that particular patient. Anonymised data
is visible to the 11 pharma companies, but also visible to groups of
academic researchers , registered to use that data, which is
medical history and genetics but anonymised.
So its not just drug companies that can make important discoveries.
That data refers to one particular individual or a collation of loads of people?
The anonymised data is all of it, so you can start doing comparisons
between whole groups of people. If there is a disease that only occurs
500 times globally, you need to look at all those datapoints to
find the commonality.
How do the researchers approach this?
The research groups are all based around a clinical disease,
called GSIPs, Genetics Interpretation Partnerships. They will say,
we're researching the genetics of asthma, and they'll register as a consortium
or collaboration for access to do that, in order to research asthma.
They can't then go off and research diabetes, which someone else is
registered for. Importantly, when patients join the project, part
of the consent process, the 40 minutes, part of that is to give
consent for academic and commercial research. If people
change their mind, they can withdraw later, even after being
sampled and sequenced. After their own personal result, they
can then decide to withdraw, and everything would be withdrawn.
??? to make their results public and be exploited. I wondered
if a similar thing happens with the NHS. It's in the public interest
to make the info public ???
This data is not publicly available. It's held on NHS England
servers. You can't copy it out. If you want to make a discovery, you
have to write programs that will work on those servers.
Send your program to them , they'll run it and send
you the results. But they know what your results were , they
know what your program was , what your research interest was.
The same whether you are an academic or commercial researcher.
What they charge later, may be different.
Could an academic group ??? could follow ???
They could be doing exactly the same thing. It could
be a straight race. That means patients would get the discovery either way.
This seems to be a British database. How are we getting along
globally, as presumably, other than Germany, there are people doing similar work?
There are big databases of normal genomes, like the ExAC?
database. We often use that when we find a variant in someone's genome;
we look there and see if it's found in the normal population.
The databases are being produced in a compatible , workable way?
Ideally all this data would be shared on a central server and
that's not happening. In this country there have been silos? of data
about different conditions. In the cardiac world that I know
about, different research groups had their own silos of data.
This project is about sharing the data because its so much more
powerful if its shared.
So you're saying we ought to be working towards it but we're
not doing very well ?
I say it ought to be universal , I think we're good at it in England
and this project will improve that more.
The project is initially in England rather than the whole UK.
This project is a global leader. There are other projects, in the US
, Canada and France which will do similar work but they're not so far
advanced as this project.
Hopefully they will use databases that talk to this database.
What happened to the Icelandic database?
This was Decode Genetics company formed about 10 years ago.
The govt noticed they had a highly homogenous population , a lot
of in-breeding . So they had very good medical records and births deaths
and marriages going back 800 years in writing. The govt formed that company
to analyse that data , find inherited causes of disease and very
controversially, they set it up as an opt-out system. So they said to all
Icelanders, we're going to commercialise your medical data unless you
tell us not to. Instead of it being an opt-in system, which effectively
this project is: come and join if you want to.
They had a high number of opt-outs and I'm not sure that it got them
very far. If anything, perhaps a model of how not to do it.
In the sequencing, you always get 2 letters per chromosome.
Is there a technology that would sequence each individual strand
and would that be helpful to identify diseases?
You always know what the other one is, because they always pair.
If you're reading an A then on the other side there will always be a T.
Same C and G. And the reciprocals. So you only sequence 1
strand because you can always infer the other.
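A one-function Python sketch of inferring the complementary strand from the one that was sequenced:

# The complementary strand can always be inferred: A pairs with T, C with G,
# and the other strand is read in the opposite direction.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATTGCC"))   # GGCAAT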
But each locus could be twizzled round either way, so you don't know
an individual strand, you don't know which letter belongs to which strand?
You know what order they appear in and the orientation, where a gene
begins and ends, because there are certain sequences in the DNA
that always occur at the beginning.
So if I got one change on 1 strand and another letter change, I would not
know they were on the same strand or different strands?
You also don't know which strand is the coding strand because the
gene could be on one side or the other.
When you get a DNA sequence , are you sequencing just 1 strand?
Yes, but you can always infer the other, from the complementary nature.
You have a pair of chromosomes, sorry, I should have said chromosome
rather than strand, one of the pair?
So rephrasing. Each chromosome comes as a pair, 2 letters, but can you figure
out which letter belongs to which member of the pair, and would that be useful
for picking out diseases?
Yes , you probably can because we have about 3million
differences between us , so with any particular family
we are sequencing , you will be able to identify which chunks of
chromosome have come from the mother and which from the
father. The DNA of each parent is mixed, not totally
randomly, but in random blocks.
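A simplistic sketch of that idea at a single position: given the pair of letters in the child and in each parent, work out which letter came from which parent where only one assignment fits (real phasing uses whole blocks and family data and is far more involved):

# Phasing one position from a family trio, where it is unambiguous.
def phase_site(child, mum, dad):
    """Each genotype is a pair of letters, e.g. ('A', 'G'). Returns
    (from_mum, from_dad) if only one assignment is consistent, else None."""
    options = set()
    for m in set(mum):
        for d in set(dad):
            if sorted((m, d)) == sorted(child):
                options.add((m, d))
    return options.pop() if len(options) == 1 else None

print(phase_site(("A", "G"), ("A", "A"), ("G", "G")))  # ('A', 'G')
print(phase_site(("A", "G"), ("A", "G"), ("A", "G")))  # None (ambiguous)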
And does that make a difference to disease progression, 2 mutations on
one chromosome as compared to 2 mutations on different chromosomes?
Some are dominant, so yes. If you think of a simple model
of DNA encoding a protein that does something. We are all protein,
we are either made of protein or stuff that isn't protein
but was made by an enzyme, which is a protein.
So if you make a small change in the code, the most likely outcome
is that you've broken it. It's very difficult to make a change that mends it.
But I've 2 copies of everything, one from mum, one from dad, so
for a lot of diseases, as long as I still have 1 working copy, I still
have an enzyme that does something and you don't notice.
So the only issue arises when I get a broken copy from mum
and a broken copy from dad. So that's now recessive mutations,
and if I pick up both of them, a 1 in 4 chance, I then have no
working copies at all and then I get the disease.
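A tiny enumeration of why two carrier parents give a 1 in 4 chance of an affected child:

# Each parent has one working copy ('W') and one broken copy ('b') of the gene,
# and passes one copy at random.
from itertools import product

mum, dad = ("W", "b"), ("W", "b")
outcomes = list(product(mum, dad))            # all equally likely combinations
affected = [o for o in outcomes if o == ("b", "b")]

print(outcomes)                                # 4 combinations
print(len(affected) / len(outcomes))           # 0.25 -> the 1 in 4 chance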
I was thinking perhaps you needed 2 changes to break the copying process?
They can get beastly complicated .
So go home and talk to other people about this as it will change
medicine, change how disease is treated and that'll only work if the
public accept it.
Monday 09 Jan 2017, Prof Gavin Foster, NOC Southampton: How hot will it get? Climate change insights from our past.
41 people, 1.75 hr
In the run-up to the Trump election ,
Trump was asked in a radio interview.
Do you believe Earth temp is increasing and what would you do with
respect to climate change. He said he was not a believer in global
warming and not a believer in manmade climate change.
This view is quite pervasive in USA politics. I think it's
founded on a whole bunch of misconceptions. On how the climate
system works and what we know , and what we don't know about that.
In Dec 2015 Ted Cruz said the current computer models,
used to understand global warming trends, are profoundly wrong
and inconsistent with the evidence and the data.
I want to show here that he is the one who is profoundly wrong.
When we look at the climate models, they do quite a good job
of matching the warming that we've seen over the last 100
years . Relative to 1880s to 1900 graph, relative anomaly
to that time. Gives what the climate models give, given the
forcings on climate , that we've reconstructed.
A pretty good match. For the last year, we're bang on the
middle of the prediction spread. That's not to say there aren't
some legitimate reasons to question the climate models.
They are not perfect representations of the climate system; they are run
on massive supercomputers. They break the Earth into little grid squares
and try solving the equations of state for each of the cells. Because they
break the Earth down into squares a couple of km across, they're omitting
some processes, like how clouds and rain form. It is particularly
these parameterisations that make the models less than perfect.
There is a lot of tuning that needs to be done, they don't like that word,
where you tweak certain parameters to fit observations.
The historical record is used to tune the model so it fits the rest of the
record. But they do a reasonable job of simulating the observed climate.
The multi-model mean from the last IPCC report, the average of 36 or
46 climate models. The result looks like the temp distribution over the
planet in reality. Also included is the difference between the observed
and the models. It looks about right but in detail there is up to
3 degrees differing. Then between models there is about 3 deg C
differences. In one way Ted Cruz is right, we do have to
question these models. Then when we use these models to predict
1850 to 2300, from the last IPCC report, the observed temp range
and different scenarios of how warm the Earth might be.
The bounds reflect the uncertainty in those predictions.
It is legitimate to ask, how reliable are these projections
for our future, given we know the models aren't perfect.
Predicting future climate is not just about climate
models. Climate science is an old science, 150 years or so. I want to tell you about
what we call equilibrium sensitivity, a measure of how
sensitive the Earth is to change. I'm a geologist
and I want to use the geological past to tell you how we can test
this understanding. The central tenet of geological theory
is that the present is the key to the past, the uniformitarian
principle, the leading light for geological research. That means we
can study processes in the present and it tells us how rocks were
deposited in the past.
I want to turn that on its head: is the past the key to our
warm future. What can we learn from looking at the climate of the
past. The main driver for the climate system of Earth is the Sun.
All the energy that drives the climate system. We get about 340W per
sq m, at the Earth's surface. Of that, about 100W is reflected straight back,
because the Earth has clouds and ice-sheets.
In 1820s Joseph Fourier used black body radiation theory
, still a valid theory about how things respond when radiation is shone
at them. He calculated that with 340W coming in and 100W
bouncing back, the effective temp of the Earth should be -17 deg C.
What he knew then , and us now, the average Earth temp is higher than
that, at 16 deg C. The difference of about 30 deg C he put down to
the blanketing effect of our atmosphere. That was in 1820; we now call it the
greenhouse effect, though he did not call it that.
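A quick check of that effective-temperature argument with the Stefan-Boltzmann law, using the figures quoted above (about 340 W per sq m incoming, about 100 W per sq m reflected):

# Effective temperature from the Stefan-Boltzmann law.
SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4

absorbed = 340.0 - 100.0   # W/m^2 actually absorbed by the Earth system
t_effective = (absorbed / SIGMA) ** 0.25

print(f"{t_effective:.0f} K = {t_effective - 273.15:.0f} deg C")
# ~255 K, roughly -18 deg C: close to the -17 deg C quoted, and ~30 deg C
# colder than the observed mean surface temp; the gap is the greenhouse effect.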
In the case of the moon, with no atmosphere, the effective temp
of the moon is about -0.5 deg C, due to having a different albedo
to Earth, grey rather than blue so reflects more incoming
radiation. On the face facing the Sun it is about +120 deg C and on the other face about
-150 deg C. That's what Earth would be like without the atmosphere.
The difference between 340W and 100W is absorbed by the Earth.
This is short wave radiation coming in and radiated back as heat
ie long wave radiation. That heat is then trapped by greenhouse gases in
the atmos, some gets emitted out the top , some gets emitted back down and
heats the Earth surface and cycles on.
The Earth is in radiative balance, so the incoming equals the outgoing.
We are not relying on climate models to show the power of
CO2 in changing the climate. They are relatively straightforward
physical observations , known for a long time.
A plot of the wavelength of light coming in from the Sun. Most of that
Sun radiation is in the visible part of the spectrum, that's why we see
in the visible part of the spectrum.
A short 0.5 micron wavelength peak, then a long tail-off. Also included is what we
measure at the Earth's surface. It's not the same as what is coming in, because the atmos
is absorbing some of that short wave radiation.
We can do the same thing, looking down at the Earth's surface, for the long wave
radiation coming out. At the Earth's surface the spectrum coming out, and then
at the top of the atmos also. A lot of stuff is missing that is trapped by the
atmos. That radiation is heat and in effect this is the greenhouse effect (GE).
The difference between the 2 curves is the GE. It can be measured with
relatively easy technology, no climate model needed.
The different gases absorb different particular wavelengths of
radiation. Methane absorbing some, Nitrous Oxide different wavelengths, CO2,
and water vapour with lots of absorption bands, a powerful
greenhouse gas. Sum them together and that is the amount of radiation being
absorbed by the atmos.
This was recognised by John Tyndall, who used to teach at Stockbridge
School, before leaving for Germany and becoming a great physicist.
In 1859 he determined the GE was predominantly down to
water vapour and what he called coal gas, which we call CO2.
This understanding of how the GE and the atmos worked was really honed
in the 1960s, when the military wanted to shoot down aeroplanes with heat seeking
missiles. You need to understand how heat is absorbed in the
atmos in order to target planes with missiles.
How the various gases in the atmos change the heat absorption was
tied down by the military. The same physics that underpins climate
science underpins the building of missiles. So which of those gases is
the most important? In terms of driving the GE, water vapour and
clouds drive about 70% of the GE, CO2 and other gases being about 25%.
Anyone boiling a kettle or having a shower on a cold day , knows the
amount of water vapour in the atmos , depends on the temp.
The higher the temp , the more vapour in the atmos.
Water vapour responds to temp change, it doesn't drive temp change.
You can't drive changes in the strength of the GE by changing
water vapour, because water vapour only changes , if you change the
temp. Change the temp, the vapour will make that change bigger
, but you can't drive changes in the GE . You can only drive
changes in the GE by changing CO2 and the other non-condensing
G gases. Those gases stay as a gas, regardless of the temp of Earth, or
at least over normal human-friendly conditions.
So we're burning lots of fossil fuel, burning lots of trees, making lots
of stuff out of cement. The consequence of that, is CO2 has rocketed.
Since we've been measuring atmos CO2 it has gone from 320ppm in the 1960s
to 404ppm last year. More of the long wave radiation radiating from the
Earth's surface is being trapped by the atmos, which must mean heating up
of the atmos as more and more long wave radiation is trapped.
Data visualisation from 2016 , by Ed Hawkins of Reading,
called the climate temperature spiral. Showing how the temp
has changed over time. In the 1850s/1860s, early industrial times,
0 degrees, then in the 1950s/60s it starts to kick off.
In early 2016 we were touching 1.5 deg C. That was mainly due to the El Nino.
Regardless of the inter-annual variability, the temp has been increasing
as that CO2 has been increasing.
One way scientists talk about sensitivity of the Earth to CO2 changes in
the GE is Equilibrium Climate Sensitivity. It is a useful metric
for how the Earth works and how climate models work in
comparison to the Earth. ECS is the mean surface temp change
for doubling of atmos CO2. You have to wait for the system to play
out, to reach the new steady state. By that, all the changes that are
going to happen , have happened, its reached equilibrium.
If we look at the radiative forcing , the change in the radiative budget
of the Earth. Doubling of CO2 is only 4W per sq m, a small amount,
but will have a dramatic change on the temp.
So take a ball of rock floating through space with a thin atmos with a
bit of CO2 in it. If we double atmos CO2, we have radiative
forcing and we have a temp response of about 1.1 deg C.
That's known as the Planck response, based on the black body
radiation theory that Fourier used, and which Stefan and Boltzmann
used in 1878 to calculate Earth climate sensitivity.
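Rough numbers behind those doubling figures, using two widely used approximations that were not given in the talk itself: the simplified CO2 forcing formula dF = 5.35 ln(C/C0) W per sq m, and a Planck feedback parameter of roughly 3.2 W per sq m per K:

import math

def co2_forcing(c_new, c_old):
    """Approximate radiative forcing from a CO2 change, W/m^2."""
    return 5.35 * math.log(c_new / c_old)

PLANCK_LAMBDA = 3.2                            # W/m^2 per K (approximate)

dF_doubling = co2_forcing(560, 280)
print(f"forcing for 2xCO2   ~ {dF_doubling:.1f} W/m^2")                  # ~3.7
print(f"Planck-only warming ~ {dF_doubling / PLANCK_LAMBDA:.1f} deg C")  # ~1.2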
That simple response is missing out a lot of processes that happen on the Earth;
it has plants, an atmos with water vapour, oceans.
And if we double the CO2 on Earth we don't really
know what the response is. One of the main uncertainties in
climate science, we don't know how sensitive the Earth is to CO2 change.
We have some good ideas but don't know exactly.
Partly because it is a very complicated system and we are essentially a water
planet. If you increase CO2, temp goes up, then you evaporate the oceans
more, water vapour in the atmos goes up, causing a stronger GE, which causes the
temp to go up. That continues to a certain degree, in a positive
feedback loop, or a vicious circle. There is a bunch of these, not just water vapour.
Sea-ice, land-ice, clouds, peat-bogs, soil carbon etc, all different
but positive feedbacks. There are negative feedbacks as well, but the net effect of
changing the balance is a positive, amplifying effect.
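A sketch of how positive feedbacks amplify the bare Planck response, using the standard simple-gain picture dT = dT_Planck / (1 - f); the feedback fractions below are illustrative, not values from the talk:

def amplified_warming(dt_planck, feedback_fraction):
    """Amplify the Planck-only warming by a combined feedback fraction f."""
    return dt_planck / (1.0 - feedback_fraction)

dt_planck = 1.2                                  # deg C for 2xCO2, Planck only
for f in (0.0, 0.4, 0.6, 0.7):
    print(f"f = {f:.1f} -> {amplified_warming(dt_planck, f):.1f} deg C")
# f = 0.0 -> 1.2, 0.4 -> 2.0, 0.6 -> 3.0, 0.7 -> 4.0 deg C:
# modest-looking feedbacks turn ~1 deg C into the 1.5-4.5 deg C ECS range.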
This was recognised by Svante Arrhenius; in 1896 he published a
paper, On the Influence of Carbonic Acid in the Air upon the
Temperature of the Ground. He was a chemist, getting a Nobel Prize
in 1903 for the electrolytic theory of dissociation. He was not a climate
scientist, doing that as a hobby. In Victorian times the big question was,
why, in Scotland, Norway, places like the Alps, why do we
see evidence of there having been a recent cold period.
They knew that quite recently the Earth climate had changed from
the warm climate we are in now, back to something colder.
We call that now the Last Glacial Maximum, about 20,000 years ago;
ice stretched down to Bristol. In the UK we see lots of U-shaped
valleys, including striations where glaciers scraped the rock
clean leaving scour marks.
On the likes of Salisbury Plain you get erratics, even Medbury on the South
Coast. Deposits left by ice etc, and in the late 1800s everyone wanted to understand
the mechanism behind this. He looked at what changing the carbonic
acid content of the atmos would do to Earth climate and whether this could explain
why we have cold and warm periods. In that paper he recognised that the
actions of humanity would increase the CO2 content of the atmos
and warm the Earth. He was from Sweden, which is cold, and he
thought that was a good thing. He even wrote to the president
of Sweden to say this would be a good thing to do, so we could
grow grapes in Sweden. That ice-ages could be brought about by
decreasing the CO2 content to 60% of the present value, pretty
close to what we think now. He recognised if we double atmos CO2
we'd warm the Earth surface by 5 deg C. That was in 1896, over 120
years ago, and that's pretty much close to what we think now.
He used a slide rule and a year of dedicated maths, we now
use a super-computer , interestingly takes about the same time.
The calculations are now done on increasingly small grid
squares on computers like the NASA one.
We find despite the increase in computational power
we are no closer to narrowing down the uncertainty in this
A plot against time of estimates of ECS.
Stefan and Boltzmann in 1878, Arrhenius in 1896,
a gap, and then in 1979 the first climate models came about
and the Jule Charney report, with a mean of about 3 +/-1.5,
encompassing the range of Boltzmann and Arrhenius.
Then with the sets of IPCC reports we've not really refined that
range. The last IPCC report puts it at about 1.5 to 4.5 deg K
per CO2 doubling.
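What that 1.5 to 4.5 deg C range implies at equilibrium for a given CO2 level, using the fact that CO2 forcing scales with the logarithm of concentration (a sketch of equilibrium warming, not a transient this-century projection):

import math

def equilibrium_warming(ecs, co2_ppm, co2_preindustrial=280.0):
    """Equilibrium warming, deg C, for a given ECS and CO2 level."""
    return ecs * math.log2(co2_ppm / co2_preindustrial)

for co2 in (404, 560, 1000):
    lo = equilibrium_warming(1.5, co2)
    hi = equilibrium_warming(4.5, co2)
    print(f"{co2} ppm: {lo:.1f} to {hi:.1f} deg C above pre-industrial")
# 404 ppm: ~0.8-2.4, 560 ppm: 1.5-4.5, 1000 ppm: ~2.8-8.3 deg C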
The vague band in the plot is the uncertainty in the climate models.
Each model has a different ECS, it's an emergent property of the
model. It's not a number you choose, it emerges from your complex
model. Some models have a low sensitivity and track low,
some a high sensitivity and track high. The uncertainty in our future
depends on how much CO2 we burn, the choices we make, and
also the sensitivity of the climate system. We could have a very
sensitive climate system and then even if we do a lot to mitigate
CC we still end up with a lot of CC. Alternatively,
with a low sensitivity, we can carry on burning a lot;
it will have dramatic effects but it won't be as bad as if the sensitivity were high.
My research is about , using the geological record to try and
predict and understand our warm future and in particular to better
understand climate sensitivity.
How climate over the last 140 my has evolved . Over geological
time the climate has changed dramatically, through natural reasons.
Video - Break up of Pangaea, the supercontinent. Early Cretaceous with dinosaurs,
then moving forward to a world more like our own. America breaking
from Africa, India moving up to collide and form the Himalayas,
the Atlantic opening up. All these things change atmos CO2
and global temp. Moving through the last 30my the temps are yellows and
oranges, then about 18/19mya the temps in the oceans
are about 34 deg C; now the warmest ocean temp is 28 deg.
For Antarctica about 34mya, and the northern hemisphere about
10mya, you start to see the first appearance of ice.
The Earth occupied those different climate states, so if we can
understand what caused those different climate states , we can better
understand how the Earth responds to changing CO2 and changing
climate. The Earth cycles through these ice-age/greenhouse states
quite regularly about every 500 million years.
A supercontinent forms, breaks up, reforms etc. for about
the last 3.5 billion years, due to having a convective mantle that is driving
plate tectonics . During a supercontinent break-up phase a lot of
CO2 is coming out of the solid earth at rifts , into the atmos.
Lots of volcanos and so a high CO2 warm GE climate. When the
continents are coming together , forming mountains, less
volcanos . Mountain building is a sink of CO2, the clay minerals
that are formed in mountain rivers remove CO2 from the atmos.
On top of this grand half-billion-year cycle, there is cycling of the
climate on shorter time scales , due to how the Earth orbits around
the Sun. This was recognised by Milutin Milankovitch, trying
to understand why we had cold climates relatively recently.
In 1920 he proposed that glaciations were driven by orbital
changes of the Earth. The Earth is influenced by the other planets,
every 41,000 years the tilt of the Earth changes from 24 degrees
to 22 degrees, a change in the elliptic nature of the orbit, and the
way the Earth spins , the precession like a spinning top .
It has little effect on the amount of sunlight reaching the Earth
but the distribution of that sunlight through the year,
and where the maximum insolation is, changes as the orbit changes.
Examples of cold orbit and a warm orbit . A cold orbit is
when there is a small tilt , the north hemisphere is colder .
A warm orbit the northern hemisphere is tilted towards the Sun
in the summer. These orbits affect the local temp in the northern
hemisphere. In a cold orbit phase you get some ice-growth one
summer, that ice stays, that increases the albedo because it is
reflective, more sunlight is reflected, and then on a global
scale, via a bunch of feedback effects, that also causes atmos CO2
to come down. That all leads to more cooling, more ice
growth, more CO2 stored in the ocean, more cooling and so on.
When we have a warm orbit, the ice retreats, decreases the
albedo, less CO2 stored in oceans, more warming ...
These orbital cycles are quite a small influence on the
earth's radiative budget, but through the bunch of feedbacks they
can cause dramatic climate change.
A map of what the Earth looked like 21,000 years ago
, the UK icesheet stretching down to Bristol. All N America is
covered in the Laurentide icesheet. So much ice was locked up
in the northern hemisphere that sea-levels were 130m lower.
The temp in Antarctica over the last 350,000 years: cold, warm, cold;
cold climates quickly go to warm. Those cycles are driven by the orbital
parameters. Plot of atmos CO2 that goes with that climate change.
Warm climate = high CO2 280ppm, cold climate about 200 ppm.
0 to 60my of Earth history. The oxygen isotope composition
of some bugs that live on the sea-floor. This shows climate evolves
over time, cold and warm temps. 50mya we were very warm , 12 to 14
degrees, lots of warm ocean . Through time, things have cooled down .
That is the transition from a greenhouse to an icehouse .
Antarctica in the warm state had not much ice. About 34mya we had
rapid growth of ice on Antarctica. Another time interval, the Pliocene,
3mya. Looking at the range of climates from cold to
warm, including a bit warmer than today, and then the super-warm
climate of the Eocene.
The climate predicted for 2100 looks a bit like 3 million years ago,
not as warm as the Eocene 50mya. So looking over 50my
we are sort of bracketing a possible future.
Its not that easy to reconstruct climate of the past. Today we can
go out and measure CO2 content even with satellites.
We measure the land and sea temps . How to go back in time
and measure the same parameters. We have to use fossils
, foraminifera, a single-celled protist that lives in the surface part of the
ocean. About 0.3mm across, they make their shells from calcium
carbonate, like chalk and a lot of rocks. A lot of rock is
made from the dead shells of these organisms.
We measure the chemical composition of the shell and reconstruct
various climate parameters.
A pic of the UK with a blue patch , a bloom of plankton
seen from space: thousands of billions of individuals forming
enough of a colour change of the ocean to be seen from
space. When they die, they sink through the water column as
marine snow, and accumulate in vast quantities on the ocean floor
at about 1cm per 1000 years, an ooze of this dead calcium
carbonate. We take a research vessel with a big drill rig on
it , put a core into the deep sea . It can operate in many 1000s of
metres of water and drill a core many 100s of metres
drilling back through the time of sediment layers.
We can work out the age of when it was deposited , recover the
shells , take them to the lab to do analysis on them.
We can work out their chemical composition, and things like the
magnesium content of the forams tell us the temp of the
water in which they grew.
There is a correlation between the Mg content of the forams and the temp,
so then we can work out ocean temp. Most of my time is spent
working out atmos CO2 content from the past, something that is
really only done at Soton; not many other labs in the world can do it.
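A sketch of the Mg/Ca step, using the commonly used exponential calibration form Mg/Ca = B exp(A T); the constants below are illustrative ballpark values, not the ones used in the lab:

import math

A, B = 0.09, 0.38          # per deg C, mmol/mol (illustrative values)

def temperature_from_mgca(mgca_mmol_mol):
    """Invert Mg/Ca = B * exp(A * T) to get temperature in deg C."""
    return math.log(mgca_mmol_mol / B) / A

for mgca in (1.5, 3.0, 5.0):
    print(f"Mg/Ca = {mgca} mmol/mol -> ~{temperature_from_mgca(mgca):.1f} deg C")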
We want to know the temp of the earth that had double CO2.
There is an ice-core record from Antarctica. As ice accumulates there,
it traps the ancient atmos in the ice. We can take a core , get the
gas bubbles in it and work out atmos CO2.
One of the ice-cores for 0 to 800,000 years, a wiggling trace
from glacial to interglacial cycles. But ice cores only go back
800,000 years; we want to go back 50 my, to look at those
really warm climates as those are more like our future.
My lab is used for boron isotopes: take the foraminifera,
measure the boron isotopic composition, and that tells us the
pH of the ocean in the past, the acidity of the ocean, and hence CO2.
Some boron isotope data, compared with CO2 data from ice-cores:
not perfect, but it does a pretty good job as an indirect measure of CO2.
The main advantage is we can go back 50my, back to the super-warm climates.
A record for 50mya to 30mya, when the earth was 9 to 14 deg C
warmer than today. For the interval around 3mya, CO2 goes from
about 400ppm to 300ppm, and climate goes from about 3 deg C warmer
to temps similar to today.
The CO2 in 2016 was 404 ppm , last seen on the Earth
3mya and 2100 CO2 would be around about 1000 ppm
last seen about 45mya according to our data.
When we combine that temp info with the CO2, we can
test what the climate models suggest.
For the 50mya to 30mya period there is a bit of uncertainty in our
estimates, but the maximum probabilities are around what the
climate models suggest. At 3mya we are again in the same climate model range.
And for the ice-core record of 800,000 years we're in the
range of the IPCC models. That means temp changes we see
in the geological record are behaving the same way , the Earth's
behaviour , is behaving as the climate models would suggest.
The models are predicting a certain warm future , we apply that
understanding to the past. The temps we observe are entirely consistent
with the sensitivity and the CO2 change we reconstruct.
When Ted Cruz says that the models are profoundly wrong,
that is clearly not true. And according to our assessment of the
geological past, the climate evolution is likely to follow
the mean of one of these lines, depending on the choice
of our emissions. We will probably move along the centre of one
of these bands, maybe on the upper end.
There is no doubt the Earth is warming due to the increased magnitude of the GE,
caused by CO2. It is an old science, we've known about it for
over 120 years. The predictions from those early scientists
are continually being borne out by new studies.
This hundred years of understanding is encapsulated in the climate
models and those models are doing a pretty good job
of predicting our future. The geological past is a good
independent test of how these climate models perform.
A few quotes.
Sherwood Rowland, Nobel prize for his work on ozone depletion -
George Santayana - Those who do not remember the past are condemned
to repeat it.
At a recent RS meeting , one of my colleagues shouted down a
climate-denier questioner: "If you don't believe in the GE,
try sleeping on the Moon."
You tell us the increase of CO2 we push into the atmos,
increases the GE. Presumably the atmos is a kind of insulation layer,
the reradiated heat off the Earth stays with us, therefore we get
hotter, is that right?
Why therefore if we put more CO2 up there , increasing what I'd call
insulation , why is that not suppressing the radiation
coming from the Sun?
Because the Sun's radiation is short-wave radiation, it goes straight
through. The wavelengths here are 0.25 micron to 2.5 micron, and that
is all at one end. Looking at the absorption bands of CO2,
they are in the 2 to 4 micron range; water vapour is 1 to 10 microns.
Could you return to your slide where the models are along the bottom
and the amount of CO2 along the top. Can you put some kind of scale on the
upper one? The top ones represented different human behaviours.
Can you give us some idea how those translate into the real world, eg which one
of those would be the Tokyo Agreement, and how likely you think they are?
"Business as Usual" that is burning all the conventional fossil fuels ,
not just doing nothing but the economies grow and other countries
industralise . Then by 2200 pretty well burnt all the available conventional fuels, I think that is 6,000 peta-gram of C. It could go higher , perhaps 11,000 Pg. We've actually come of business as usual i nthe last couple of years.
The rise in CO2 has not grown , for all of industrial time the ampunt of
emitted CO2 has increased. Due to the switch to renewables , about 10 to 20%
of UK energy. The biggest thing is China is not burning as much coal.
The Paris protocol is perhaps hitting about 3 degrees by 2100.
For successive IPCC reports, from the first to the most recent, we
were always on business as usual, and now we're not, so we can
be pleased with that. Built into the Paris agreement is usage of
technology that does not presently exist. Later this century there will need to be net
removal of CO2 from the atmos; we can't do that at the moment.
We've moved off the red plot. When I started doing this stuff, we
were always on the red one.
With economic cycles , proportionally how much effect does that have?
The 2008 crash did dent the rise in CO2 but had recovered in a year or 2.
It was noticeable in the plots?
Yes. In terms of growth of CO2 , WW2 was also evident. Those sorts of
big changes are evident but not stopping the overall effects.
When you were talking of feedback loops they seemed to be mainly
positive feedback loops, which do have a habit of running away
with themselves. So something must be stabilising that to counter them.
What are the negative feedback loops that give a stabilising effect?
The main climate stabiliser is silicate weathering. The turning of
rocks into soil. That creates clay minerals, that mineral won't contain
all the ions that were contained in the original rock. A lot of those
ions move to the ocean , where they stimulate activity , locked up
in those shells basically . Those shells then arrive at the deep sea
and then they go into the mantle. Then they come out of volcanos.
The rate that rocks turn to soil is temp dependent. So that is the
natural way in which the climate regulates itself. If it gets warmer you get
more soils formed, more weathering, more oceanic ion deposition,
more CO2 drawn out of the atmos and put into the mantle.
When CO2 is low and low temps then the opposite happens.
The natural rate of CC, or CO2 change at least, is about 20ppm
per million years, and we are doing 100ppm in a century. We are doing
1 or 2 ppm per year, which is vastly faster than nature.
The natural negative feedback isn't there.
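Quick arithmetic on the rates just quoted; taken at face value they give a factor of order 100,000 rather than a literal million, but the point stands:

# Natural CO2 change: ~20 ppm per million years; current change: ~1-2 ppm/year.
natural_rate = 20 / 1_000_000      # ppm per year
current_rate = 2.0                 # ppm per year

print(f"natural : {natural_rate:.5f} ppm/yr")
print(f"current : {current_rate:.1f} ppm/yr")
print(f"ratio   : {current_rate / natural_rate:,.0f}x faster")   # ~100,000x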
I believe in Iceland they were trying C capture by pumping
CO2 into ? and it was solidifying . Is anything like that going
to have ??
That's the sort of tech that doesn't currently exist in a commercially
viable sense. That's exactly what we need to decouple economic
growth from CO2: put the CO2 in the ground,
carry on burning it, but capture it and put it into rock.
That was Matter from Ocean and Earth Sciences of Soton who led that
study. They put CO2 into basalt in Iceland and within 2
years they'd removed 90% of what they put down there.
I'm not sure whether economically viable yet .
The economically viable version of that is C capture and
utilisation , making sodium carbonate for making glass
so stable to lock up CO2 for 1000s of years?
Yes, in India. There is a whole bunch of these different technologies
that could perhaps save the day.
When its acidic , animals die?
Ocean acidification is known as the second CO2 problem.
The processes are better understood, the impacts on organisms is less
understood. CO2 is an acidic gas, it dissolves in the oceans.
The oceans have acidified by about 0.1 pH units, which doesn't sound much,
but it's a log-scale, so about a 30% increase in the concentration of H ions in the
ocean. It is affecting things like coral reefs and shellfish shells in various
parts of the coastal ecosystem. They are more threatened by the temp, I
think, especially corals. It's the temp rise that's killing them, not the acidification.
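The arithmetic behind that roughly 30% figure, since pH is a log scale:

# A 0.1 pH drop means the hydrogen-ion concentration rises by a factor of 10**0.1.
factor = 10 ** 0.1
print(f"{factor:.2f}x  ->  about a {100 * (factor - 1):.0f}% increase in [H+]")
# ~1.26x, i.e. roughly the ~30% increase quoted above.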
What are you most worried about?
The ice-sheets , for me, is the biggest threat. They respond so sluggishly
that the big continental ice-sheets of Greenland and Antarctica, they have
about 65m of sea-level locked up in them. They don't react quickly;
we know from the geological record they have not responded in the past
quicker than 2m per hundred years. It's like a freight train: once you
start them melting, even if we have some miracle technology
that brings the situation back to a pre-industrial state, they are still responding to
the climate of 100 years before.
Those are the sorts of long term commitments that will happen whether we
like it or not.
??? coal, ??? not absorbing, turning it into electricity rather than
heating up ?
Probably too small. The heat released in burning a fossil fuel to make a CO2
molecule is about 100,000 times less than the energy that molecule goes on
to capture in the atmos.
I think a blackish roof compared to a red one will not make much difference.
On your last slide you said there was no doubt that the temp
is going up and humans are causing it. Presumably some of those people
call themselves scientists and presumably some of those are sincere
in what they believe and doubts. So how would you summarise for the
position of someone like that?
People used to believe the Earth was flat and they were sincere in that.
Before my interest in climate science, my PhD was on geochronology,
how old rocks are. There was a similar viewpoint that the Earth was only
6,000 years old, but geochronology said it was 4.567 billion years old.
I guess it's a similar situation. They may well believe that the
climate has not warmed, but I'd argue that their view is not science.
As a scientist I'd like to know whether or not what went on 50mya
is at all a good proxy for what happened in the last 100 years.
So is it a good proxy for burning coal in China , whatever it was
that happened 50mya and are you addressing that question?
If I was using the climate of the past to say, this is what the climate will
be in 100 years, so looking back 50mya it was 12 degrees in China,
therefore that is what it will be like in 100 years' time, then that would be wrong.
50mya CO2 levels were caused by natural processes , similar levels
resulting from burning coal, what about all the steps in between?
Thats why we look at ECS, the models run until they are at
equilibrium, which is a much closer state to the geological past
than the transient we are in now. When you look at the amount of
warming that happens in a climate model , on a 100 year timescale
, it's about 2/3. The warming we see now, as a result of the forcings,
is about 2/3 of the full response. In the geological past
we are looking at the full response, so we compare the models' full
response to the past, and then we look back at how the models handled
the transient, the shorter term response. So it is not quite the same:
we are evaluating how well the models simulate the ECS,
which is best represented by the geological past, because there is no
transient in the past that is a direct analogue.
If Trump and Cruz are climate change sceptics, do you know of
any political statement that, in their uncertainty, they have put some funding
into doing some more research, so they have clarity?
Anybody who is not sure where the truth is, in a hypothetical
position, you would then say put some money in there and get some clarity.
With the messages they are coming out with , is the opposite really.
They're sure that nothing is happening. I've heard they want to close the climate
science section of NASA, put 3 billion dollars that is normally spent
on studying Earth climate, into the planet exploring bit of NASA.
Is there a falsification route they could possibly go down, given their doubts?
They've tried, particularly in the USA. They've had a so-called sceptic
group, Berkeley Earth, revisit the record. I think a physicist from Berkeley, California,
who had a lot of private funding, set out to show the temp records were
all cooked. That they were bogus, 150 years' worth of temp
records, and they came up with a record that was identical.
In the post-truth age it doesn't matter , falsified or not , people
will still vote for Trump. ?
In 2009 he was all in favour of the Copenhagen climate accord.
So he's changed his view since then, for whatever reason.
One of the skeptic things seems to be, climate scientists change their minds.
Go back to the popular science programmes of 40 to 50 ya, they were
talking about , when the next ice age was due, way slower than what we're doing,
but a comparatively short-term effect compared to ???.
Whats changed from a climate science perspective from saying
the next thing up is ice-age , to the next thing up is extreme warming.?
I used to be a geo-chronologist and the reason I got into climate
science . I was sitting with a colleague of mine ,doing my
PhD and I was saying, in about 1990, the Earth chucks out loads of CO2 from
volcanoes, it was all natural cycle. There is a lot of noise in the climate
system, and my colleague said, no. Now we can be pretty sure that
we're outside the envelope of natural variability. Whereas back
in the 1950s... it is the same mistake that all these climate
sceptics make, a pause in global warming; no, you have to look
at the bigger picture. Only 2% of the climate system's heat is in
the atmos, 98% in the oceans, so apparent trends are just different amounts
the oceans are storing. We can't measure the oceans well and we don't have
records going that far back. When you look at thermometer records on the Earth
surface, that is only 2% of the heat of the climate system, so
of course it will show ups and downs. So it's necessary to look at a scale
broader than those changes in atmos heat storage. Jim Hansen,
in 1988, stood in front of Congress and said we are now 95% sure we
are outside the envelope. There was purely the observational record to base
that on, and then there is climate science that says there should be a relationship.
You make an hypothesis, test it, and thats what we're doing.
The guys predicting global cooling , probably gave up by then. You have
to act on the evidence as it is, and I think we've built a pretty good case.
Is there global monitoring of ocean temps now?
Yes, its called Argo programme, very expensive , floats that go
up and down the oceans, about 13,000 of them. But they only go down to
2,000m which is the majority of the ocean, the mean depth of the
oceans is about 3,400m so we are missing some, but it is quite well
mixed at that lower part. That is now, but pre-1990s we did not have
much idea of that; it then depended on what temps ships had taken.
It's a multimillion dollar international programme that scientists
at the NOC take a strong part in. The UK has some floats but the USA by far
has the most. So it could be catastrophic if Trump pulls the
funding plug. These big infrastructure programmes, the US always lead on.
So water vapour and CO2 in the atmos. A lot of gases are being
made and utilised ,for instance in air conditioning systems that are
up to 1000 times more influential than CO2. Increasing wealth ,
leading to more AC use, are we seeing that making an effect
on the combined contribution. The volume of these will never be
anywhere near the CO2 proportion, but as their effect is so much greater?
So like CFCs and other manmade greenhouse gases. I don't
know. The main issue with CFCs is they react with ozone.
The ozone hole is healing .
I know for a fact from someone in the automotive industry and government,
they are developing AC gases that are only 100 times worse than CO2
but they are very explosive. ?
I guess the advantage is that it's small fry compared to atmos CO2.
I think its 1000 Petagrams of C in the atmos, the other gases would be
minor in comparison.
On tundra permafrost melt and methane release, is that going to be a problem?
It's a well known positive feedback, receiving a lot of attention.
Methane is about 26 times more potent a GHG than CO2. That permafrost
melt is not built into climate models. If there was a big disconnect between
the models and the geological record, then we might first suspect
that process, but there is not. Also observations, like how little
methane was released by the Deepwater Horizon spill in Gulf of
Mexico. Most of that methane was digested by microbes in the water column
and there is venting methane off the shelf of Norway , shallow water 80 to 100m
and you can see the bubble plumes but flying a drone over the area, there
is no extra methane observed. Its just not escaping from the water. So
we think methane hydrates and permafrost methane will not be a big
role. There is a long term study that looks at methane in a town
in Canada. In summer the wind blows in one direction and they get a methane
reading, but over the years, with the current warming, there has been
no change in the methane content. It's probably turning to C and not
getting out; it is quite reactive and may just be digested and not
released to the atmos in significant quantities.
Are you using boron isotope analysis for age determination or some other purpose?
That gives us the ocean pH; it tells us the pH at which those shells grew.
Then get the age from the stratification?
Much as anthropogenic CO2 is dissolving in surface water , making the
oceans acidic , the ocean pH tells us the CO2 content of the atmos.
The more atmos CO2 , the more acidic the oceans are.
There is a whole cottage industry on the dating of sediment cores.
Very simply, we know roughly how fast they accumulate through time.
There will be regular reversals of the Earth magnetic field , the last one
about 800,000ya N becomes S and S becomes N.
That imparts magnetism on the sediments that we can measure, and that gives
us tie-in points. So if 5m down in a core the magnetism has flipped the
other way, that level is 800,000 years old. Then some cores may have ash
layers in them , volcanic ash, and we can date those events.
From having a big archive of cores, some of which are very
well dated through these different methods, you can use the climate
cycles to "tune" your records. If you have a climate cycle in one
core, compare it to a well dated core and get the age that way. Also
by stratigraphy, appearance and disappearance of certain organisms.
We don't use boron for dating purposes as its stable. We do use
C isotopes for dating the top 20,000 years.
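A sketch of the depth-to-age step using linear interpolation between tie points, with the reversal example above (core top = today, reversal found at 5m depth dated to about 800,000 years; numbers illustrative):

tie_points = [(0.0, 0), (5.0, 800_000)]        # (depth in m, age in years)

def age_at_depth(depth_m):
    """Linear interpolation between the two tie points."""
    (d0, a0), (d1, a1) = tie_points
    return a0 + (depth_m - d0) * (a1 - a0) / (d1 - d0)

print(f"{age_at_depth(2.5):,.0f} years at 2.5 m")     # ~400,000 years
# Implied accumulation rate: 5 m / 800,000 yr ~ 0.6 cm per 1,000 years,
# in line with the ~1 cm per 1,000 years mentioned earlier.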
What is it we should be telling Trump ?
I think a sensible thing to do would be invest in green energy.
Look at the price of solar panels and how much they've dropped.
The oil industry has a lot of subsidies and a lot of infrastructure
that other energy sources have to compete with. Oil is delivered by
road , that infrastructure is already there, its subsidised in many
such ways, not just in exploration terms. Its built into our
very fabric and we need to help other technologies .
I followed the Romsey MP as part of a Royal Society pairing scheme
and I was in Parliament when the govt cut the subsidy for solar
panels from 95 % to 2 or 3% or whatever and it was devastating for
the industry. And devastating for green energy in general as why
would any industry come to a country that does that; builds you
up and then cuts you down.
You could be really rich ,investing in green energy, look at China.
The west only thinks long term , the developing countries copy the
economic history of the west, such as everyone driving a car?
A big chunk of the Paris agreement was how much money should the
developed world pay the developing world to avoid making the same
mistakes that we did. That is something that Trump wants to
get out of.
Monday 13 Feb 2017, Dr Marc Molinari, Solent Uni : Non-destructive testing of railway wheel sets
16 people, 1.5 hours
This came from a study part-funded by the RSSB, the Rail Safety and Standards Board,
funded by the rail industry and the govt, to ensure we can enjoy safe rail travel.
A map of the Swansea area and a place called Oystermouth and the
Mumbles. In 1804 there was a huge need for transporting coal,
iron ore , limestone from the sources onto canals and Swansea dealt
with all that and shipped northwards and eastwards.
In the Mumbles they did not have a road to Swansea but they had all
those materials. The first railway was established to transport that.
A carriage on wheels on rails pulled by horse, the oldest railway.
Q: no it wasn't; Tyneforth? had railways about 100
years before that, horsedrawn on wooden rails. You can still see
the Causey Arch railway bridge, the oldest railway viaduct in
the world, for transporting coal from the mines down to the Tyne.
The Oystermouth Tramrail? Company built this one, length about 5.5
miles. A few years after being built, the company asked for
permission to transport passengers as well, because there was no road.
The govt approved it. 48 years later the rails were changed from
1290mm to a 1.4m wide gauge. Ultimately 1.4m became the standard
for most of the other railways. In 1877 steam power replaced horses.
Just before 1904 they tried to use a battery powered , accumulator
car. Batteries back then , very rudimentary, jars with liquid
and metal. Very unsuccessful, the trams would not move .
100 years later we now have battery powered cars. 1928
electrification came about . Eventually a road was built,
revenue went down .
Things developed nationally, the rail network of 1963.
Then Beeching, the loss of the small goods transport branch
lines. If a railway was not used it was removed, and it removed
some that were used too. The density of the network changed a lot.
The current Network Rail network high density in SE and the
Liverpool area and Manchester , Glasgow and Edinburgh.
Today there is 16,000km of rail track. Also lots of
private tracks, tourist trains etc. Looking after the
main network takes a lot of time testing it: trains that
check the rail quality, driving along the tracks every day.
Also additional transit systems like London Underground and tram
networks. In 1994 the UK was connected to Europe by the Channel Tunnel.
Britain has one of the densest networks in the world; just looking at
Europe, Britain has 20% of all Europe's rail journeys.
That is about 65 billion passenger-km per year,
a massive figure.
Railways today, a comfortable smooth ride, lots of us use it. It is
often overcrowded in terms of people and timetables. Few slots
to put additional trains on the network. Its fairly safe , few
accidents considering the billions of miles travelled , probably the
safest form of transport.
So maintenance, and looking at the wheels. A maintenance shed
with up to 12 trains at the same time being serviced.
It includes regular servicing, need emptying out of toilets,
oil checks. Contacts between rail and wheel , the wheels , brakes
, the under carriage for missing parts etc. A lot of activity .
Considering just the wheels. Every carriage has 2 bogies,
each bogie has 2 wheel sets, 4 wheels, so 8 wheels per
carriage. A low estimate of weight is 8 tons per carriage for modern
lightweight ones, Siemens 700 series. So 1 ton per wheel, but the contact area is about
the size of a 2p coin. The whole weight of a carriage is on a contact
area about the size of a DVD. 8 tons is on the low side, much more for
goods trains. A commuter carriage overloaded with 100 people
may double that weight. The materials of carriages are very light
nowadays. There are heavy things like air conditioning in there.
Q: Take a classic 47 series diesel locomotive, that was 114 tons
on 2 bogies, so the contact stress on a loco was very high.
You could make diamonds at that pressure.
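Rough contact-pressure arithmetic behind those figures (coin size and loadings approximate; the Class 47 wheel count is an assumption):

import math

def contact_pressure_mpa(load_tonnes, patch_diameter_mm):
    """Average pressure if the wheel load sat on a circular patch of this diameter."""
    load_n = load_tonnes * 1000 * 9.81
    area_m2 = math.pi * (patch_diameter_mm / 2000) ** 2
    return load_n / area_m2 / 1e6              # MPa

print(f"{contact_pressure_mpa(1, 26):.0f} MPa per wheel (light carriage)")
print(f"{contact_pressure_mpa(114 / 12, 26):.0f} MPa per wheel (Class 47, assuming 12 wheels)")
# Real wheel-rail contact patches are smaller than a 2p coin, so true peak
# stresses are higher still, of order 1,000 MPa.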
These contact points are one of the safety-critical areas, because if
something goes wrong there, it could derail the train.
With ongoing impact on public perception of rail travel
and the bottom line of finance of rail operation.
The wheels are made of steel it is the geometry that needs to
be looked at, is any of the profile lost. It is a special profile, they
are constructed to. The profile changes with use , wear and tear.
Ultimately they wear to the point the geometry goes below
the safe limit or it needs to be reprofiled.
There could be surface defects and also subsurface defects inside the
metal: delamination of the steel inside, cracks developing from
the inside to the outside. The axle also needs inspection.
Typically the wheels are pressed, at high pressure and hot,
onto the axle. One wheel is about 200 kg, 2 wheels 400 kg, plus the
axle, about 600 kg in total. With brake discs on it can easily be 1 ton.
It's not just one type of wheel: some have brake pads inside the
wheel like a car, and also ones with tread brakes that act on the
outside. Tread brakes used to be more common, brake blocks pulled
against the wheel rim. Siemens trains are going back to tread brakes
because they clean the wheel while it's turning and braking, giving
a nice shiny surface.
Audience: mention of the 1:20 profile on the rim, steering round corners and
the suppression of hunting. In the old days there was no planned stiffness in the
bogie, so you had a wheelset that ran in horn guides. As it went
round a corner there was nothing to control it until it hit the
horn box, and then it was infinitely stiff.
When it wears you go to a steeper profile, so when you go round a
curve it wants to steer more: it oversteers, corrects, oversteers, giving a knocking noise.
That requires keeping the rim profile to 1 in 20, which means lathe-turning
the wheels quite frequently, every 100,000 miles or so, to keep the stability of the
vehicle. Later on they brought in planned stiffness; many components
are now rubberised, so you can go further down the steeper profile.
The profile nowadays is called a worn profile, P8; P1 was the original
1 in 20. P8 means you don't turn off so much when reprofiling the
wheel. With lots of rubber, and vertical, lateral and yaw dampers,
you manage the frequencies that a wheelset could pick up
going around corners.
A lot of science goes into all that: the tread profile and the angle
of the flange. Get the wrong angle on a flange and the wheel climbs
and derails. You can get roll-up on a flange, a toe-radius build-up,
which can then pick up on points and derail there.
That's what we found when we did our research: tight margins and
limits on those factors.
NDT means you don't damage something. If you want to inspect
internal material, you could break it and then know what was inside;
NDT means a method that does not require breaking anything.
You can determine internal crystal structure without any damage.
An early example was the wheel-tapper, an engineer with a long-
handled hammer who tapped the wheel, listened to it and, based on the
ringing sound he could hear, judged healthy wheel or a crack somewhere.
They were very skilled people with very good hearing.
Frequency of inspection: Southern and GTR have mileage
intervals, but there are also time intervals. After 60,000 to
80,000 miles they inspect the bogie and train body. Every 32,000 to 36,000
miles there is a wheelset examination, measurement and gauging.
Compared to cars that is a longer interval. In comparison, French TGV
trains get a daily automated railside inspection of the underside
and pantographs. Every 5 to 6 days or 4,500 km there is a ?
inspection. Every 18 days the traction motors and bogies are maintained
at the depot. A huge turnover is required, building into the
overall running costs of trains. Channel Tunnel trains also get weekly
inspections, or about every 5,000 km.
Manual measurements are common. For the geometry of the rim there is
a slide-rule arrangement that tells you the flange height and
thickness; if they get thin, the train could derail at
points. The Swallow gauge?, for looking at the toe radius
on the flange. A magnetic one clamps on for measuring the tread height
at the centre. Once these go below a certain value, with tight tolerances in mm,
the wheel must be reprofiled on a massive lathe. The train drives into place
and the rail is replaced by the lathe, which automatically removes metal; the norm
is 1.5 cm of steel removed, both sides at the same time. If one side is damaged then
both sides have to be turned to match. The other pair of wheels can be
different dimensions, but not co-axial wheels.
The lathe rotates the wheels: the carriage goes onto the lathe, the track
drops away to allow the wheels to be rotated and machined down.
It takes about 45 minutes to an hour. This metal removal can be repeated 4 or 5 times; wheels come with about 8 cm that can be removed, and at the
end there must be a minimum of 1.5 cm remaining. Below that
and the whole set of 2 wheels and axle is melted down for new wheels.
There is a hole in the wheel disc: inject oil at very high pressure
and the wheel comes off the axle; replacement wheels are pressed
onto the axle. GTR don't do that, they send them off for recycling.
On the same axle both wheels must be the same diameter or they
would always be going round corners on straight rails.
Q: Is the train slower after taking a cm or 2 off the diameters?
The wheels turn a bit faster and the motors can cope with that.
Also the bogie dips down a bit but the carriage remains horizontal.
Q: Is it a different motor driving each axle?
Not all of them, it varies. On the Class 165 all wheels are driven
using a hydraulic transmission, so all wheel diameters must be very
much the same. On HST trains all 8 coaches are undriven; each power
car has 8 driven wheels.
Where you have a driven set, you need friction between
the driven wheel and the rail. So what if there is ice on it, or it is really smooth, say a brand new wheel on brand new rail: there is wheel spin. For that situation
there is a sand dispenser on the driven wheels: a tube near the wheel, the
driver presses a button and sand is squirted out.
On the other hand this also damages the wheels and track, so it is always a
balance, just enough to get the train started.
Similarly with leaves on the line. Leaves sometimes block the wheels
and get into the tread brakes. If the wheel does not turn for
some reason it slides, and once it slides it gets a flat spot from rubbing steel on steel.
There are a lot of condition monitoring systems in use. How do you
measure the profile of a wheel? You can do it manually in a depot,
but can you do it while it's in service? A number of companies
have come up with systems that can measure trains while running past,
using laser projection and a camera, up to about 17 mph currently.
So a log is taken that that train, at that time, has an issue and should
be removed for further action.
There are hand-held devices that let engineers measure the profile.
Another system sits next to the rails with 8 cameras. These days you can have
accelerometers in hand-held devices, as in mobile phones,
so you know the position and attitude of the device when recording the
laser scan lines while moving the device.
The Olympus system works in a depot with the train still. The system clamps onto the rail and lifts up the wheel hydraulically. The wheel is turned by the device
and the inside of the wheel is measured by ultrasound, after
squirting on water as a coupling material. It measures the reflection
and absorption. Other systems sit in the rail itself: a matter of mounting
sensors in the rail to measure the wheel via the firm contact point
between rail and wheel. An electromagnetic field measures the surface
and, for a thin layer, also the interior structure. This has been done experimentally,
but the question is whether the right material can be found for the sensor-holding
rail replacement section, to hold the whole weight of the train
on a small contact surface without getting damaged over multiple uses.
What are we looking for? Rolling contact fatigue: using steel on
steel and rolling one over the other, you get slight misshaping of the steel.
That results in small cracks across the tread, less than 1 mm in
width and up to about 2 cm long. If that is detected then the train must come out of service and be reprofiled.
Wheel flats often happen with leaves on the line. To detect these they use a
wheel impact load detector: a piece of rail the train goes over, and if the
wheel does not turn smoothly there is a clatter, and the detector picks up
those impacts. If it gets really bad you end up with a
red-hot glowing piece of wheel.
Another defect is hollow tread, hollowing of the 1 in 20 slope of the tread;
a difference of 2 mm from the true profile and the train has to be taken
out of service.
There are cracks from failed material or from heating, very fine cracks.
Fine surface cracks develop into bigger cracks; if detected,
the train must not even be moved, it must be skated into a depot.
Flaking happens a lot, caused by corrosion and also by sand use;
it must be reprofiled if detected.
Flange defects: toe-radius build-up, where metal is pushed up
against the flange and builds up. This often goes along with thinning
of the flange. With thinning, at points the wheel does not slot in
easily, a dangerous situation.
Q: When you say pushed, is that almost a liquid steel state?
Over time, working the metal by pushing hard enough, it deforms and flows plastically.
Ultrasound: sending an acoustic wave into the material.
Magnetic particle inspection: a liquid with ferromagnetic
particles suspended in it. Apply it to the surface and apply a magnetic field or electric
current; if there is a small crack, the magnetic field is not
continuous in the material, the ferro-material accumulates
in the crack and you see a black line, a contrast between the no-crack
and crack areas. It's messy and has to be cleaned up afterwards.
Ultrasound is used with a gel for coupling the sound into
the material. It can be used without contact gel using an
electromagnetic acoustic transducer.
Eddy currents: electromagnetic currents induced in the surface.
Radio-frequency impedance: smooth material gives no
reaction to an applied field, but little cracks can act as
little antennas giving secondary fields at different
frequencies. Then there is the interpretation of what these measurements
mean in terms of damage or in terms of geometry.
Trying to reconstruct how and where these anomalies are coming from.
Do it properly and you can image 3D properties within the material,
much like baby-scanner images.
Standard ultrasound uses a coupling gel, which is messy. An electromagnetic
acoustic transducer uses a magnet, static or electromagnet,
and a coil underneath. Pulsing the coil induces eddy currents
in the material, and these then experience a force, the
Lorentz force, causing an ultrasonic wave inside the material
as with standard ultrasound. It does not require direct contact or gel.
The disadvantage is that the signal-to-noise ratio is difficult to
handle, due to the small signals to be detected.
We are looking at ways of automating condition monitoring
and measurement systems. Engineers working in this industry, having built
up many years of experience, are getting rarer with retirement, and
not enough engineers are coming through to make up that loss.
At the university, whenever we have graduates coming out they
immediately get jobs. There are about 80,000 graduates
needed annually that are missing in this and other similar industries,
just for the UK. Some say we are up to 130,000 engineers short.
If you are an engineering student these days, you can pick where
you want to go after your studies.
Cost saving is always a factor, turning trains round quickly.
You want to make maintenance intervals as long as possible without
losing quality control on the wheels. This checking process is time-consuming, with
a number of people going around individual wheels.
If that could be automated, at 4 to 6 trains a day, the
annual savings would be about 75,000 GBP.
Consistent measuring accuracy is a factor: doing this manually,
as shown in a number of reports, the reading repeatability is very
low, with different measurements on different days for the same
person, and different people measuring differently.
An automatic checking process should be more objective,
taking out personal judgement on where a gauge is fitted etc.
And if you can capture all that data, the more data you have
to analyse, the better you can detect defects in future.
There are loads of different ways of perhaps automating this.
Rolling along, a 90 cm wheel needs a length of about 3 m for one full turn.
In a depot the speed is at most 5 mph, a little over 2 m/s,
so roughly a second to record the detail of the whole circumference.
We CAD-analysed different sensor systems and attachment
arrangements. Many depots have inspection pits under the
track; with an inspection pit it's easier to attach or install
something automated that rises up and does the
inspection. We've done 3D acoustic analysis of how waves
travel through the material; reconstructing on the inside what
we can see from the outside, extracting the interior picture,
is quite a challenging process.
A curious early experiment with a bike wheel: capturing a rotating
bike wheel at speed and stitching the pictures together into one long image,
80,400 pixels by 1,920 pixels.
With captured data it's always possible to return to it.
Our latest staff member, Baxter, is a robot of a type used in industry,
in an open-source robotic environment and OS. This one has
6 axes on each arm, with grabbers and sensors, and can go
There is a huge amount of software out there, all written in Python.
To me what was amazing with this project was to see the scale
of engineering that goes with railway systems, the things you don't
see as a passenger, and the quality of work in just maintaining wheels.
Is it just the UK where goods trains and passenger trains are
entirely separate, or is this universal? Why not clip a goods wagon or two on the
back of a passenger train?
Passenger trains have to observe a very strict time schedule; goods delayed
for an hour is no great problem.
Freight trains are heavier and on some lines have speed restrictions.
I'm surprised we are still using 200-year-old technology, same gauge,
steel wheels on steel rails. Is any new wheel technology around?
Standards have changed. Over the last few years the grade of steel
has changed and the wheel profile has changed, P8 now, previously
P1 and P9. The grading of the steel at manufacture is very
closely defined now. All wheels, now, are
ultrasonically tested after manufacture before going into service.
I was thinking rubber-tyred wheels perhaps?
There was a big accident in southern Germany due to failure of
a rubber-damped wheel: a base steel wheel, rubber on the outside and then over that
a steel tyre, used for damping the vibrations from travelling fast.
One of those rubber sections perished or something and the steel rim
came off. Steel wheels are crude but reliable; you can take material
off them and still have a solid steel wheel.
There are also steel wheels with a steel tyre on them, so the tyre can
be reprofiled, or replaced when worn or cut too low.
There is only a small number of wheelset types allowed; it's mainly
down to the grain size of the steel.
I believe for scheduled maintenance of helicopters there is
permanent recording of noise in service, for any long-term changes
in vibration and noise. Is there an equivalent for monitoring passenger
coaches? Perhaps there is for the engines and traction systems?
There is recent technology developed at Chilworth Science Park by Perpetuum,
a vibration monitor. It harvests vibration for energy:
it sits on the axle and vibrates while the train goes along,
like the watches that are self-powered by arm movement.
That energy is stored and at the same time it monitors the
frequency of vibration as the wheels go along. If you get a continuous
additional frequency, rather than a temporary one from
going over sand or something, it will detect that and
inform the train information system, which then records it.
That data from the train can then go live, via the mobile
phone system or wifi, to the maintenance engineers
for assessment of whether to leave the train in service or take it out.
On Crossrail, all those eventual trains have something
like 8 or 10,000 monitors on them, continuously
monitored for all sorts of things, like aero engines are
these days. Then there are loads of data to process through.
If there is a call for doing this, you get cost savings.
For aero engines these days you don't buy an aero engine,
you buy the power: you only lease the engine while it has power
in it, and if it breaks down you stop paying. The manufacturer wants
to make sure that engine keeps running all the time.
Same with Crossrail trains: if they break down, you stop
renting until they run again.
It makes sense to have sensing systems on the train rather than in the track.
The track is maintained by Network Rail and the trains by the
train operating companies, but they are owned by rolling-stock
companies. So there are 3 different companies involved and a lot
of discussion going on now about who does what.
The yellow trains that go round measure the tracks; the whole UK
track is monitored by them continuously, recording the state
of the tracks. Some such trains can reprofile a bit of line,
or if necessary a piece is cut out, a new rail thermite-welded in place
and the tops polished.
Any advantage in replacing the chassis with carbon fibre, for weight?
The Siemens 700 series are very light: a lot
of aluminium but also a lot of plastic, which could be carbon fibre.
But it's expensive over large areas of the fibre.
Is there an addiction to an old style of engineering?
Steel is better in a fire situation. And there are thousands and thousands of miles of it.
But everything is steel; the footbridges look strong enough to
run a train over them?
We do have new materials. Perhaps other countries can
create new versions of traditional structures more easily.
Go to Japan and bridges are made of bamboo and other
different materials, designs that look good and last.
If something is established we know it works; for using new
materials there are often extra costs for changing manufacturing
methods. Vestas, the IoW makers of very large,
strong plastic wind-turbine structures, probably could say we could
make such a bridge, but they'd have to change their processing.
Sometimes people don't want to change. E.g. rockets stayed much the
same and then along came SpaceX, with a more efficient
engine and better costs as it is reusable. Everyone stands back and
says why didn't we think of that. Trains seem to be stuck
in a rut?
Tesla similarly: home batteries for solar cells to store energy.
The sand business, is that used a lot or continuously?
It's just used to get moving, blowing sand under the wheels. The driver, I think,
gets a flashing light if they lose traction, and then they blow
sand out. Once you are rolling you don't have the friction problem.
For cars there are all sorts of fancy traction controls if a wheel
starts to spin?
Used on trains? Not that I'm aware of. There is a lot of regenerative
braking these days: putting the brakes on effectively
puts dynamos in the system and generates electricity.
That goes back to the third rail or pantograph, or is stored on board?
On board, I think. They have huge batteries, about the
size of this pool table, in case they lose power, at crossings or
going through stations where there is not necessarily a third rail.
Pantograph contact is not that continuous either.
Monday 13 Mar 2017, Professor James Anderson, Soton Uni [third return visit ] : The Mathematics of Fractals
33 people, 1.5hr
There is the old saying that one should not drink and derive.
I'll try to get across what we mean as mathematicians by a fractal (F).
I'm not going too deep into the maths; I'll work through 2 basic definitions.
A fractal is a mathematical set that demonstrates a repeating pattern at every
step and every scale. In a loose sense: I have some thing, and if I take a small
piece, focus down on that small piece and, given infinite resolution,
blow it up, the result should look like what I started with.
The simplest thing like that is a line. Take a straight line on a piece of
paper, take a tiny piece of the line, expand it, and it is still something like a line;
repeat and it still looks like a line. That's fine, but we don't want to think of a line
as a F object: a line is too simple an object to think of as a F.
So we have to be careful with definitions like 'looks similar on
any scale'. Now, to engage your imaginations, the Sierpinski triangle. Start with a big orange triangle and cut out a middle piece: mark the midpoint
of each of the 3 sides, draw a triangle between those 3 midpoints and cut it out.
That leaves 3 big orange triangles. For each of those I do exactly
the same. Every time I see an orange triangle I take the
midpoints of its 3 sides, join them together to get a little triangle,
and colour it white, the same as cutting it out. Just keep going. What can
make fractals a bit headache-inducing at times is what happens at the
'we just keep going' part. The result is an orange, regular, spider-web-like thing.
That is an example of a F: blow up any piece of it and it looks exactly like
the original. Fs look the same on any scale; no matter how tiny
a piece you take, on blowing it up you see much what the original looked like.
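In code, the cutting-out rule he describes could be sketched like this (a minimal illustration; the starting coordinates are arbitrary):

    # Start with one triangle; at each step replace every triangle by the 3
    # corner triangles formed by the midpoints of its sides (the middle
    # triangle is "cut out").
    def midpoint(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    def sierpinski(triangles, steps):
        for _ in range(steps):
            next_level = []
            for a, b, c in triangles:
                ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
                next_level += [(a, ab, ca), (ab, b, bc), (ca, bc, c)]
            triangles = next_level
        return triangles

    start = [((0, 0), (1, 0), (0.5, 0.75))]
    print(len(sierpinski(start, 5)))   # 3^5 = 243 little orange triangles remain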
This is a very regular sort of construction. A simple one is to take a
piece of line and remove the middle third of it; I'm left with 2 pieces of
line. From each of those remove the middle third and I have 4 pieces of line
that are much shorter. Keep doing that, over and over again,
and I get what looks like dust scattered on the line, known as a
Cantor set, the middle-thirds Cantor set, as I'm
removing the middle third of each piece. Georg Cantor was a great mathematician of the late 19th century.
He came up with things that drove him insane and made him
an outcast in the mathematical community, until we realised he was doing
everything that we fundamentally wanted to do.
He wanted to get a handle on some of these things, and that is how we get a
handle on them.
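The middle-thirds construction is easy to sketch as code too (illustrative only, representing the set at each stage as a list of intervals):

    # Remove the open middle third of every interval, over and over.
    def cantor_step(intervals):
        out = []
        for a, b in intervals:
            third = (b - a) / 3
            out += [(a, a + third), (b - third, b)]
        return out

    intervals = [(0.0, 1.0)]
    for _ in range(5):
        intervals = cantor_step(intervals)
    # 32 pieces of "dust", total length (2/3)^5
    print(len(intervals), sum(b - a for a, b in intervals))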
Another example of doing the same thing repeatedly at smaller and
smaller scales, never stopping. Start with a triangle, not caring about
the inside, just caring about the boundary edge of the triangle,
and repeat something over and over again to get something that's
fractal. Instead of removing the middle third of each side, I replace it with 2
sides of a small equilateral triangle, replacing the flat middle of a line
with something pointy. I now have 4 pieces of line,
each shorter than the original line, and every time I have a line
I can do the same thing. Just keep doing the same construction.
At every step I get something that looks more and more
complicated, fairly quickly. If I can do this infinitely many times,
then take a small piece of it and blow it up, I will see exactly the same thing.
What we call the Koch snowflake is what we end up with after doing this many, many
times. In a real sense it's impossible to draw.
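One Koch step, replacing each segment's middle third with two sides of an equilateral bump, might be sketched like this (a minimal illustration; the coordinates are arbitrary):

    # Each segment of a polyline becomes 4 segments, with the middle third
    # pushed out into a 60-degree spike.
    import math

    def koch_step(points):
        out = [points[0]]
        for (x1, y1), (x2, y2) in zip(points, points[1:]):
            dx, dy = (x2 - x1) / 3, (y2 - y1) / 3
            a = (x1 + dx, y1 + dy)              # 1/3 of the way along
            b = (x1 + 2 * dx, y1 + 2 * dy)      # 2/3 of the way along
            cos60, sin60 = 0.5, math.sqrt(3) / 2
            apex = (a[0] + dx * cos60 - dy * sin60,     # middle third rotated
                    a[1] + dx * sin60 + dy * cos60)     # by 60 degrees about a
            out += [a, apex, b, (x2, y2)]
        return out

    curve = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(4):
        curve = koch_step(curve)
    print(len(curve) - 1)   # 4^4 = 256 segments after 4 steps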
This is where maths separates from the real world. When we do something
infinitely many times we get something in the limit, which we may never,
with any fidelity, be able to draw in the actual universe. The actual universe
is fundamentally lumpy, it's quantised, it's not a continuous
thing. We mathematicians would love everything to be continuous.
We can do that same construction all over the place.
So take the surface of a globe, not the whole globe, I
don't care about the inside, just the surface. I remove a bunch
of big discs, round bits, then remove smaller discs from what remains,
and keep removing smaller and smaller discs. It's harder to
see the regularity compared to my earlier examples, harder to
understand the rule that we're using to remove things; it's not
as simple as just removing a middle triangle. Here we don't have the
seeming regularity as with triangles. The field in which I do my
research is a field where we generate fractal objects.
We try to get a handle on this general way of doing things.
The basic idea is things that look the same on every scale. One thing mathematicians have to
do in our work is define what we mean by things in a fairly
precise way. How do we define 'the same on every scale', or 'similar
on every scale' to allow a bit of fuzz? There is some formalism, some structure, to what we mean by same on every scale. I won't tell you what it is,
as it's kind of complicated; I just want you to believe me when I
say there is a way of being very formal about it, in a very precise
mathematical sort of way. 'Iterated function systems' is the technical phrase.
We generate things that are properly fractal objects and we get
some nifty pictures. In this image, the boundary of everything I
can see is just a circle. You're generating an object
that is very much a fractal, but every boundary is just a circle.
Is that jagged enough to be a fractal thing? Because it's nowhere near
as jagged as the Koch snowflake, which when it gets done is an incredibly jagged thing.
Appearance depends on what you want 'fractal' to mean.
For me there is a very precise definition, not just that everything looks
the same on every scale. The classical Mandelbrot set is a
much less regular object than we've seen so far. So the first thing we
can ask is, does that thing satisfy the definition of looking the same
on every scale? Take small pieces of the MS and
blow them up: it does not look exactly the same, but very
much like the whole thing. How you build a MS is an interesting
juxtaposition of complicated indices?. The colours refer to
how fast points are moving away.
Go to YouTube and you can see where someone has taken a point
and just zooms in. Zoom in at a constant speed and you see things that look
almost like the MS appearing, no matter how deep you go.
So the same basic shape keeps repeating, and you can find it on the
smallest scale that you want. It is an unusually shaped object, but
you can find copies of it on very small scales and work that back to the
definition of things looking the same on every scale. You need a
loose definition of sameness to make that work. I don't actually like the
definition of things looking the same on every scale.
Go back to The Fractal Geometry of Nature by Benoit Mandelbrot.
The set did appear in a paper by Brooks and Matelski a few years earlier,
but they only had a crude line printer and so you could not get
an accurate picture of all the complexities; they had the mathematical underpinnings
there, but Mandelbrot was the better expositor.
Whole positive numbers and 0, ignoring negatives for the time being,
are the numbers with which we count apples etc. A line is a 1-dimensional
thing. A naive way of thinking of dimension is as degrees of freedom:
how many different directions can we move? On a line it's back and forth one
way. On a flat table top I can move in 2 directions: left, right, forward, back.
If I start at one point I can get to any other point purely in those 4 terms.
For a room I can pick a point, then go forward or back (your forward and back is different with respect to you), left or right (again your left and right are different),
and up and down, which is the same for you; but it is a pub and it is early.
For a room I need 3 directions. We can think of time as a 4th dimension
and colour as a 5th dimension; there are all sorts of notions of dimension that
we have. What does it mean for a thing to have a dimension that
is not actually a whole number, a different sort of dimension?
Mandelbrot's book came out in about 1982 and that's where
we come across non-whole-number dimensions. He starts
with a question: what is the length of the coastline of Britain?
It depends on how you measure. Take a crude map, lay a
piece of string along the coast, measure the length of the string,
account for the scale of the map and you get a number.
If I walked along the beach trailing a piece of string behind me, do
I go round every small rock, do tide pools count, high water or
low water? I get a curve that looks very jagged, and as I refine the
scale on which I'm operating, the length of the coast goes up.
The finer the scale, the longer the coast, as I start working around individual
grains of sand, even working round things that are too small to
be seen but still require going around. Mandelbrot said that sometimes,
when I'm trying to measure a thing, using a whole-number dimension
isn't going to work. What is the zero-dimensional size of something? I could
count the number of points: 5 apples in my kitchen, I count 5.
I could take the length of something, 1D, using a length of string, perhaps used repeatedly.
Take something flat like a
pool table: I've got area, and I know how to figure out the area of a
square, so I can figure out how many squares fill my object,
even if sometimes I only need parts of squares. For 3D I have volume:
I can start with a cube and ask how many I need to fill up the
space, and sometimes I'll need parts of cubes if close to an edge.
But what does half a dimension look like, or log 4 / log 3?
Go back to the Koch snowflake, and an equation.
Step 0: the number in front of the colon is the step we are
on. My starting point is an equilateral triangle. I make the assumption (it does not matter)
that the length of each side is 1. Adding up the 3 sides I just get 3.
Step 1: on each side of the triangle I've taken away 1/3 of the length and added
2/3, so I now have 4 pieces each of length 1/3, done 3 times as there are 3 sides.
We now iterate, do it again. At the next step I'd have 16 pieces
on each side, each piece breaking into 4. As I start with pieces of length 1/3,
each new piece has length 1/9, replacing the middle with 2 others. 1 piece
becomes 4 pieces, but each is 1/3 the length of the original piece, so I
get 16 pieces each of length 1/9, and that for all 3 sides of the
original triangle. We just keep going, again and again.
Then at step n, for each of the 3 sides I have 4^n pieces,
whatever n happens to be, n being the number of steps.
Each of those has length 1/(3^n), because I keep breaking things into
thirds. So what is the length of the Koch snowflake, using our
usual measure of length, a ruler or piece of string?
I started with something of length 3; after n steps I have something of length
3 x (4/3)^n. The problem is that (4/3)^n, as n gets bigger,
gets bigger, and it gets bigger quickly. The Koch snowflake
is not built until I've gone through infinitely many steps, and the length
in the end is infinite. So if I used a normal measure of length to
figure out the length of the Koch snowflake, I'd have something
with infinite length that I could still draw on a piece of paper.
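A quick numeric check of that growth (nothing more than the formula above evaluated for a few values of n):

    # Length of the boundary after n steps: 3 * (4/3)^n, which blows up.
    for n in (0, 1, 5, 10, 50):
        print(n, 3 * (4 / 3) ** n)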
My normal notion of length is not the way to measure the size of the
Koch snowflake. It looks as though it should be a 1D thing, as I build
it starting with a line, but as I build it, cutting and replacing
with more spiky pieces, in the end I get something for which
length does not really make sense. So 1D is not how I should measure
this object; I should not use the normal notion of
length. But I can't really use area either, because essentially all I've done
is build a curve: I'm ignoring what's inside, just looking at
the edge, so there should not be anything 2D about it.
So thinking about the size of this thing, it's not 1 and it's not 2,
so it has to be something in between. That's where we start to look at dimension
not being an integer: what is the appropriate scale to measure size?
That is where fractal dimension, dimension that is not an integer, comes from.
There are lots of ways that we as mathematicians have come up with to measure dimension.
There is topological dimension, Besicovitch? dimension, Hausdorff dimension, named
after Felix Hausdorff, Minkowski dimension, similarity dimension and lots of others.
For a reasonable object like a line they are all the same: I would get 1.
For a nice object they are all the same. You could define a 'nice' object
as one where all these different notions of dimension are the same.
That's what mathematicians like to do all the time, flip things on their heads.
Q: Is 'nice' a proper mathematical term?
It is now. I just said it to the internet, so it has to be true, right?
The most commonly used notion of dimension is Hausdorff dimension.
It's hard to calculate the Hausdorff dimension of a thing because you have
to do a lot of stuff. I pick a number D for dimension, a guess as to
what the dimension might be; think of it as a variable, with no assigned
value yet. I cover the object that I'm dealing with, the Koch
snowflake or the Mandelbrot set or whatever, with discs,
round things with a centre and a radius. I might need infinitely
many of them. For each way of covering my object with round things
I calculate a number: I take all the radii of the discs, raise each to
the power D and add them up. I'm burying something here: how do we
know that adding up infinitely many things gives us a finite
number? I won't worry about that; sometimes it does, sometimes
it doesn't, and if it doesn't then D is a bad choice. I cover my object
with all these discs, so I can no longer see the thing any more for
all the discs, and I calculate this number. So covering my object with plates,
I get a number. Now I take a different way of covering it with plates,
perhaps smaller or bigger plates, or even microscopic plates.
For every way (and the 'every' makes it hard) I cover it with plates, I
get a number. Then I take the smallest possible number out of all
the possible ways. There will be ways of covering with a finite
number of plates, and there will be ways of covering with infinitely
many plates. If you pick a nice sort of object, which not all of
them are, every way of covering with plates will contain a finite
collection of plates that still covers, but that is not true all the time.
For exceptionally nice objects (again, we'll call that a mathematical
term) you get a finite sum. It doesn't work for a line but it does for the
Koch snowflake and the Mandelbrot set (MS) and the Sierpinski curve; it
works with things that you can actually draw.
When I look at this quantity and see how it changes as I change D,
weird things happen. When D is small the quantity is infinite; when D
is big, the result is zero. There is a single point in between where it
jumps, and where it jumps is the thing we call the Hausdorff dimension (HD).
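For reference, one standard way of writing down what he is describing (the limit over ever-smaller plates is the detail being buried here) is:

    H^D_\delta(X) = \inf\Big\{ \sum_i r_i^D \;:\; X \subseteq \bigcup_i B(x_i, r_i),\ r_i \le \delta \Big\},
    \qquad H^D(X) = \lim_{\delta \to 0} H^D_\delta(X),
    \qquad \dim_H(X) = \inf\{ D \ge 0 : H^D(X) = 0 \} = \sup\{ D \ge 0 : H^D(X) = \infty \}.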
I'd be comfortable teaching this area to second or third year undergrads.
I've buried a vast amount of material here, what I as a mathematician would find
interesting, but over the years I've realised not everyone finds it as
interesting as I do. There is a way, a formula, for calculating the Hausdorff dimension.
For the Koch snowflake the HD is log 4 / log 3, which is somewhere
between 1 and 2. For the MS, just its edge, we think its HD is 2,
but we don't actually know; as far as I'm aware it's still unsolved.
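The log 4 / log 3 figure is the similarity dimension mentioned in the list above: for a strictly self-similar set made of N copies of itself, each scaled down by a factor r, it is log N / log(1/r). A minimal sketch:

    # Similarity dimension of a strictly self-similar set: log(N) / log(1/r),
    # where the set is made of N copies of itself scaled by factor r.
    import math

    def similarity_dimension(copies, scale):
        return math.log(copies) / math.log(1 / scale)

    print(similarity_dimension(4, 1 / 3))   # Koch curve          ~1.26
    print(similarity_dimension(3, 1 / 2))   # Sierpinski triangle ~1.58
    print(similarity_dimension(2, 1 / 3))   # Cantor set          ~0.63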
Here is a structure that looks a bit like wrought-iron work or a Paisley
design, the sort of object I work with on a daily or weekly basis.
It's a roughly 1.3-dimensional thing in HD. It's built out of a curve, and it has the
property that if I blow up any piece that looks flat at this scale, I would see
the same thing.
How to build a MS. Firstly, it involves complex numbers.
We have our ordinary numbers; we know how to add them, multiply
them, the distributive laws make sense, the things we do with brackets,
all that stuff. Complex numbers are just a bigger set of numbers, expanding
my horizon. Now I'm doing what we did with numbers, but with points in the
plane: instead of numbers, I'm adding or multiplying points in the
plane. How can I tell where I am in the plane, a 2D thing? I pick a point
I declare to be my origin, then A is my left/right-ness and B is my up/down-ness.
That tells me where I am in the plane once I set down my coordinates.
There is a way, I won't tell you how, of doing what is called
complex arithmetic, here with just points in the plane.
I take a point C in the plane, then I do a complicated process. It starts
by taking the function
f(Z), which sends Z to Z*Z + C,
with C fixed for the moment. I start with 0 and get 0^2 + C = C.
I take that C and stick it back in, so f(C) is C^2 + C.
Take that and plug it back in, so I get (C^2 + C)^2 + C,
and just keep going. One of 2 things will happen: either I will stay not
too far from 0, or I will shoot off to infinity.
If I stay close to 0, I colour C black; if I shoot off, I colour C
white. I do this for every single point in the plane, and this is the MS.
The MS is what you get when I do this operation and stay close to 0.
The rule to generate the MS is very simple: shoot off, or stay
close to home. If I stay close to home I'm in the MS;
if I shoot off, I'm not. The interesting thing is that this is a very
simple rule but it gives an incredibly complicated result, because
the jaggedness of the boundary of the MS is saying I can have
2 points, however close together, which behave differently.
No matter how close a point is, it does not tell me what will
happen to that close-by point. That's what makes things
fractal; I don't have that sort of control.
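The rule he describes, as a minimal sketch in Python (the iteration cap and escape radius are practical stand-ins for "stays bounded forever"):

    # For a point c in the plane (a complex number), iterate z -> z*z + c
    # starting from 0; if the orbit stays bounded, c is coloured black
    # (it is in the set), otherwise white.
    def in_mandelbrot(c, max_iter=200, escape_radius=2.0):
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > escape_radius:
                return False        # shot off towards infinity
        return True                 # stayed close to home, as far as we can tell

    print(in_mandelbrot(complex(-1, 0)))    # True: -1 is in the set
    print(in_mandelbrot(complex(1, 1)))     # False: escapes quickly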
Going to YouTube where they do the zooming, it's that same
calculation at greater and greater resolution, more and more
digits. But the fact that you keep seeing the same picture, the MS at
however small a scale, means that this very simple rule is giving
you incredibly complicated output. This sort of thing
bedevils us all the time. There are lots of things where
what we start off with may or may not be complicated,
and we have to decide yes or no at the end. What we get at unis a lot
is degree classifications: a bunch of things they've done in the second year,
a bunch in the third year, and we have to decide the classification
for their degree. However well you try and set the rule, you'll
always find yourself in a situation where students have incredibly
close results and will get a different classification.
It's not a problem with the rules you're setting; it's the nature of the
fact you're trying to take a lot of complicated things and divide them
into 2 buckets or 3 buckets.
I had a project, that regrettably I never finished, with someone in the law
department: how the law deals with things like murder or manslaughter, people killing
people, and how fractal such things can be in some sense.
How can you make sense of the complicated interactions that we have,
where this thing or that thing is the outcome, guilty or innocent,
and what the boundary is between one side and the other? Often
fractal properties will develop.
One of the things I've learnt from fractals is that you get complicated situations
arising from simple rules, where it doesn't matter in the end how you
try to set up the rule so that 2 things that are close together
get the same answer: you will never be able to do that.
There will always be a situation, no matter how you set the
rule, where you'll be able to find close situations that get very
different answers. That is part of the nature
of fractalness: very close things having very different
outcomes or behaviours.
Is lightning fractal?
Many phenomena have fractal aspects. Look at time series of
stock market prices: very jagged curves, those are fractal-like things.
People have tried to ask, given that they are fractal, does that
give us a way of handling them and predicting the future of
stock markets? The answer is no, but people have lost a lot
of money trying. If you look at how lightning
is formed, it is electricity going, splitting, going, splitting;
it's about as fractal in the real world as you could possibly get.
If a lightning bolt hits the right sort of ground, it will fuse the
minerals in the ground and carry on the same root-like
structure as in the air. The resulting mineral is called fulgurite: a hollow
root/branch structure underground, maybe extending 10 metres or more.
Boole's maths ended up being quite useful. Can fractal analysis
be useful in any discipline?
Someone called Oded Schramm? did a lot of work on
fractal-type things. He was working for Microsoft, trying
to understand randomness. There are things we'd love to do
by generating random numbers. Usually we use a pseudo-random
number generator, using properties of integer arithmetic on a very
long scale to generate numbers that have the properties of being random, but run it
long enough and you end up in cycles.
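This is not from the talk, just an illustration of the cycling point: a toy linear congruential generator with a deliberately tiny modulus repeats very quickly.

    # A toy linear congruential generator: x -> (a*x + c) mod m.
    # With m = 16 the output cycles after at most 16 values.
    def lcg(seed, a=5, c=3, m=16):
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    gen = lcg(seed=1)
    print([next(gen) for _ in range(20)])   # the same 16-value pattern starts repeating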
He did a lot of analysis on fractal objects, on random walks, trying
to model the stock market. You have to understand fractals to make
sense of these things, to understand what random actually means.
If you could properly create randomness, there are huge applications
for doing random things: ways of evaluating things by collecting random
points and seeing what happens over those points.
It's not as strong a connection as in other areas of maths.
So it's just pure research?
At the moment I think so. I have to say, as a pure mathematician,
it's unlikely anything I've done research-wise will turn out to
be useful. But I'm not saying it won't ever happen; it might
take a while. We're laying the groundwork for others to
use. At the time of differential geometry, Bernhard Riemann in the mid-1800s
was just doing it as a thing. It is the basis for Einstein's
theory of relativity, which Riemann would never have guessed.
Tomorrow someone may use fractal things to do something.
When someone jumps out of a plane with a parachute over the sea, they apparently cannot
tell how high they are, as the waves look the same from all
heights? Does that mean that sea waves are fractal?
I suspect they do have fractal aspects. Big waves have little waves on top
of them and smaller waves on top of those. I guess the ocean surface
at any frozen moment of time is a little bit more than 2D.
Do you think it might be possible to predict what the sea is doing,
using fractal maths?
If you're trying to picture what is happening at any moment,
then probably not. As a sort of time average, on a larger scale,
it's more predictable generally. Trying to follow an individual
particle through things, you can't figure out what it's doing,
but look at all of them at once and you can make reasonable predictions.
Go down that track and you will start finding submarines at depth?
I was reading over the weekend about the possibility of using
quantum theory to detect submarines, by looking at incredibly
small variations of gravity caused by the fact the sub is not made
of water. Both fish and subs are neutrally buoyant, but fish contain more water in comparison to a sub.
So the MS might have a dimension of 2, does that mean it's
not a fractal?
It depends on how you define fractal. This wiggly object image is just
the boundary of the MS. Boundaries of things in the plane you
expect to have dimension 1; if it has dimension 2, then we'd make
an exception for it.
I was wondering about its application to biology, neuron
growth or arteries? They start at some point, decide
what's around them and make a decision on that?
Possibly, I've never thought about it.
I suspect someone has considered that, but I've not
encountered it in the literature. I can't read as widely as I'd
like to, as time is finite.
Is time a fractal?
You get into Einstein's theory of relativity. You touch a hot
stove and a second feels like a minute, etc. I don't know.
Can it be used on the expansion of galaxies in the universe?
One of the basic questions, as I understand it, in physics
that they've not yet resolved, what Einstein called the Theory
of Everything, is how the maths we use to understand the very small,
quantum mechanics, and the maths we use to understand the very large, general
and special relativity, fit together: what happens in the middle,
can they be brought together? I don't know if galaxy
expansion uses this sort of thing. Some colleagues model what we think
is going on in neutron stars in terms of magnetohydrodynamics,
where magnetism and fluid flow come together. It's entirely
possible they are getting fractal effects there, because they are trying to model
what it sounds like when 2 black holes run into each other,
to predict the wave signals we would see from gravity-wave detection.
I'm interested in the fractals that are evidently self-similar on varying scales
and the ones that are only almost so, like the MS, where parts become a
bit squished and are not quite the same. The MS looks more interesting
because it is fundamentally differing?
For me it goes back to how they are constructed. Things like the Sierpinski
curve are constructed regularly: you're imposing a regularity at the
beginning that gives you the object. For the 1D equivalent of the Sierpinski
curve you'd take a line, remove the middle third, and from the remaining bits
remove the middle third; you get the Cantor set, the set of dust.
The fact I'm removing thirds is irrelevant. I could remove a random
section out of each interval at every stage; I'd still get a fractal
object, but I'd have no idea what its actual structure looked like,
if I took 1/10 out of here, 7/10 out of there, doing random things as I went down.
You get a different fine structure just by having a non-regular
construction. For the regular objects we can calculate things about them:
we know the HD of the Koch snowflake is log 4 / log 3, and that comes from the structure being so
regular. With the snowflake I could say, at every point I see a line I just take a
random section out of the middle of the line; I'd still get a fractal object, but I'd
not understand it nearly as well. The MS is the sort where we have a regular rule
but don't understand it nearly as well. For me the beauty of the MS is that it comes
from an initial rule that is simple, but we don't know what it is doing at
each individual point, as opposed to the very regular initially constructed objects.
So it goes from regular structures that are very calculable, to structures where we have a vague
idea of what's going on, to knowing something weird is going on but we
can't get our hands on it.
Is it possible to use the re-entrant equations to do rendering?
If you take any polynomial you can do the same sort of thing: you generate the
Julia set by using the same basic idea. Take a point and follow it under the
iteration; either it goes off to the distance (white) or it stays bounded (black).
Some of the shapes that have been in the backgrounds of pictures here are
Julia sets of particular polynomials. For each polynomial you'll
get a picture. The MS is just a very particular case of it.
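The Julia-set version of the same test, as a minimal sketch (here c is fixed once for the whole picture and it is the starting point that varies; the choice c = -1 is just an example that happens to lie inside the Mandelbrot set):

    # Filled Julia set test: fix c, vary the starting point z0, and iterate
    # z -> z*z + c; bounded orbits are coloured black, escaping ones white.
    def in_julia(z0, c=complex(-1, 0), max_iter=200, escape_radius=2.0):
        z = z0
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > escape_radius:
                return False   # escapes: white
        return True            # stays bounded: black

    print(in_julia(0j))                     # True for c = -1
    print(in_julia(complex(1.5, 1.5)))      # False: escapes quickly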
The fact you are doing it in 2D is just because that is what we can draw.
If I take any dimensional space and a function on that space,
start iterating and following points, I can build something. The first
demonstration of mathematical chaos was due to Edward Lorenz, using a weather
model. There is a fractal object sitting in there as well. At one stage he had to
reload, the next day, what he'd been working on previously and rerun
it, and he got a very different picture. He realised the very small differences you get
by truncating things at 10, 20 or 30 decimal places were having a massive
effect on the outcome of the system. So Lorenz and his butterfly
were the first demonstration of deterministic chaos in such systems.
It's just that 2 is the dimension that we see. Drawing in 3D is hard.
You've drawn some analogies between fractals and everyday life, randomness
and Darwinian evolution; they seem more metaphors than mathematically rigorous.
What are the practical uses of this maths, some practical examples?
Nope. Not being flippant, but I can't give any.
Prime number theory is used every day, Amazon purchases etc?
Number theory, and graph theory which is what I teach at the moment, have very practical
applications: network theory, how you route things through systems of nodes/roads? etc.
Fractal analysis is something whose primary use is giving us a language
to describe things, rather than a way of attacking problems.
It's not developed to the likes of number theory, where we can go from
understanding numbers to building unbreakable codes. I think the closest
we get is trying to understand notions of randomness, still rather
esoteric from a practical point of view.
There are times I'd like to view myself more as an artist than a mathematician.
Why should we care about fractals?
We don't know what someone in a few years will be able to do
with some of these ideas. Part of what we do in maths is say, here is a
practical question. One of the guys in the applied group of the department had a
grad student who was looking at the maths of air-bubble formation
in crumpets: you want all the holes in a crumpet equal sized.
That was actually a whole PhD thesis of a question. Not everything we do is immediately
practical. We're exploring what the universe of maths is telling us.
After us, others will latch on to particular bits of our maths
to answer their questions. Not every bit of maths we do will find a home;
some will, but not necessarily in our lifetime.
We just don't know what the useful maths of 20 years from now will be.
Some people are doing very practical things, and some are doing the
exploration of the universe of the possible, so that when people need
tools there are tools available to them, to do the great thing they are
trying to do. Without this, there would be people in the future
asking questions and no one to give them an answer.
Can you do this backwards: for something that looks like a fractal, can
you work back to the underlying maths?
Yes, and sometimes you get some very interesting things through it.
So look at neuron formation: what are the processes underlying that,
can we get a handle on it, can we use it to understand things?
The early people to look into iterated function systems found that, without
specifically wanting to build a fern, you can give it some rules and it builds
a fern, and a different set of rules builds something else.
But understanding what it will build from an initial set
of rules, there has been some basic work on that. It gets very
complicated, very quickly, and is not always predictive.
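This is not the speaker's example, but a generic iterated-function-system sketch using random iteration (the so-called chaos game): the three maps below each shrink the plane by half towards a corner of a triangle, and their attractor is the Sierpinski triangle from earlier; different rules build different objects.

    # Chaos-game rendering of an iterated function system.
    import random

    corners = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.75)]

    def attractor_points(n=10000):
        x, y = 0.3, 0.3                          # arbitrary starting point
        pts = []
        for _ in range(n):
            cx, cy = random.choice(corners)
            x, y = (x + cx) / 2, (y + cy) / 2    # move halfway towards a corner
            pts.append((x, y))
        return pts

    print(len(attractor_points()))   # plot these points to see the triangle emerge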
With ferns, are they truly fractal or are they fractal only up to
a certain point?
From a mathematical point of view, nothing in the physical world is
truly fractal, because of the constraints of the physical universe.
The universe is quantised, it's not continuous. When you talk about scale,
you can get to such a small scale that you cannot replicate
things at a larger scale. Most people would say you don't need to go that far:
look at lots of varying scales, see the same sorts of structures,
and be able to apply the analysis that people have developed for
handling fractals, not necessarily going to every
scale, but many scales.
Is there a connection or a parallel with series theory?
Things such as the Koch snowflake, taking the infinite limit, have close
parallels with series. There are basic notions that underlie series
and fractals and these constructions, where there is a commonality
in how we're doing things. There are underlying mechanisms that we use with
reckless and wanton abandon.
For the maths-derived fractal sets like the MS, I believe the coloured
versions are just assigning colours to the number of iterations. Is there a
more aesthetic process than the lumpy colour gradations, giving a nicer image?
I think that sort of image is pretty beautiful myself. I think that lumpiness is just due
to how they set up the grid sizes. I think you can do that sort of
thing, but it comes back to the basic question: I have a continuum of
possibilities and I'm putting it into a couple of buckets, and the
differences where I go from one bucket to another are going to be
fairly stark. I think you will get that lumpiness of colour
regardless of how you do it, because you are trying to assign
colours to things. You might get interesting boundaries between one colour and
another, but if I'm using 5 colours I've an infinite number of things
being put into 5 colours, so am bound to get some lumpiness.
Is the coastline of the UK a fractal image, can you derive
an iterative function for the coastline?
I think there are estimates that the coastline is 1.2 to 1.3 dimensional,
but it gets back to this fundamental point of one scale or many scales.
It's lumpy, but not as lumpy as the MS.
For the Cantor set, taking out the middle third, and at the next iteration you
take the middle third out as black bits and fill in the middle third of the
white bits, you end up with ever-decreasing dashes. There is clearly a
difference between these 2 approaches; and again for the triangle,
if instead of punching out an empty triangle in the middle you put a filled
triangle in there, it ends up looking much more regular than the
fractal set?
The Sierpinski set is very regular looking. If you're
doing a slight variation of that construction you'll probably
get something that is different, but still very regular, in the
same sort of way that the Sierpinski curve is regular.
You can do all sorts of variations on each of these constructions:
give yourself a finite set of possibilities, pick one at
random, do that thing at each stage. Whenever you have that
finiteness of a set of possibilities, you have a possibility
of getting control in the end. When you have an infinite set
of possibilities, that control becomes more difficult and the
calculations become much harder.
Is there a way of looking at your original function, producing an
image from that, but predicting the degree of curviness or holey-ness?
No. If you look at just quadratic polynomials, for some of them
the Julia set will be a nice connected piece, and for others it
won't be. You can get a huge range of different sorts of shapes
coming out of that.
There's no overall determinism for that?
There are people who try; I don't think they've succeeded yet.
It's complicated. It's not clear what information the degree is
actually containing that gives you control of the resulting
object. It's one of those questions we've not completely resolved,
so veering into the chaos sort of direction.
Your dinner-plate Hausdorff dimension determination: if you took the Koch
snowflake, presumably you will need an infinite number
of circles to cover the triangles?
No, using stuff I haven't told you anything about. When you have a set
like the Koch snowflake, it has a property known as compactness. With compactness, if I
cover it with an infinite number of plates, there is a finite set
of those plates that still covers it. Compactness is a hard and
slippery notion. Think of the difference between a Koch snowflake that
is contained within a paper-sized thing, and a line which just keeps
stretching out: compactness is trying to capture the fact that it is
contained within a sufficiently large piece of paper. Those sorts of
objects have this property that however you cover them, a
finite set of those plates will suffice to cover, and you can throw
away almost all of the rest. It doesn't matter how you do it: as long
as you cover it initially with your plates, you can find a finite
set. It might be an incredibly large finite number
of plates, not 5, maybe 57 trillion, but there is a
finite set. The plates have to overlap to cover the object.
The Mandelbrot formula, how do you come up with that?
It's about the simplest thing that's interesting.
If I did f(z) = z + c and did the same thing, you don't get
anything interesting, because everything disappears off.
z^2 + c is the simplest formula that, when you do the process, gives us
something interesting. If I used a different, more complicated formula, I'd have
something different to the MS, but the same sort of behaviour:
a set where things stayed in a neighbourhood, a set that went off
to infinity, and a complicated boundary in between.
The MS was the earliest image produced because it was the
simplest thing they could try. The fractal is really the boundary:
look in closer there and I'd keep seeing what looks like the
whole boundary. The points on the boundary stay bounded; they might
go out a bit and come back, but they always keep coming back. Move away
a tiny bit, in a particular direction, and they disappear.
A feature of fractalness is that however small a change you make,
if you make it in the right direction you get a completely different
behaviour. So it's not the case that things starting nearby each other
end up close to each other; that's completely broken.
Do you think it's just an artefact of human perception that we
find these attractive?
Yes. We draw them in nifty ways and use nice colours; it looks
like there is a light behind, shining out of the boundary.
Part of it is they just look strange, not in a frightening
way, but again that's a subjective judgement.
It's nifty because it's where I've chosen to work, so I have a deep
personal bias towards these things.
Would you play with them if they were ugly? You've never found
any ugly ones?
What about the zeroes of the Riemann zeta function?
That's a whole other talk. It's beautiful but it's hard and scary,
but there is a beauty to it. It's not a fractal thing.
Are the numbers that you're plugging into that iterative
formula integers or real numbers?
Neither, they're complex numbers, points in the plane. I can think of them
as points in the plane that I know how to add or multiply. For each point in the
plane I can do the iterative process, and I have a yes/no answer to the
question, do I stay close to the place I started from?
I'm colouring the MS by the answer to that question.
For practical purposes a complex number is made up of 2 real numbers,
but are they just integers?
They are fractional numbers.
So is the pattern that you get dependent on the precision of the arithmetic?
I'm assuming infinite precision: I can do these calculations to infinitely
many decimal places for every point in the plane. In the practical
universe this is false, but I do this sort of stuff all the time.
I believe they looked into using something like this for compressing images?
I just don't know about that.
Have you put in a function, worked it through, got an image and gone WOW!,
never seen an image like that before?
I'd not seen that image before (still on the screen) but I still think
that is a wow. Once or twice I've had the "I wasn't expecting that" moment.
Towards the end, you had one I'd not seen before, very curved?
How was that one built (the "wrought-iron"-y / Paisley one)?
There is a way of building things like this, which is: take a bunch
of circles, where the circles don't overlap but they can touch, be tangential.
So a string of beads, big or small beads, then start reflecting
in the beads. There is a way of defining reflection in a circle.
Reflect in a line and you just flip one thing over to the other side. I can define
reflection in a circle: things stay on the same line out from the
centre but get flipped. Just keep doing that, and you get shapes
similar to this. That one goes back more than 100 years, interestingly.
Way before computers, there were smart guys who figured out
how to do things, prior to computers. Hand drawing such reflections
is quite an efficient way of drawing something like this.
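A sketch in Python of that reflection-in-a-circle (inversion) step; the three touching circles, the starting point and the number of reflections below are invented purely for illustration, not taken from the image shown:

import random

# Reflection (inversion) in a circle of centre p and radius r:
# a point stays on the same ray out from the centre, but its distance d
# from the centre is flipped to r*r/d, so inside and outside swap over.
def invert(z: complex, centre: complex, radius: float) -> complex:
    w = z - centre
    return centre + (radius * radius) * w / (abs(w) ** 2)

# A small "string of beads": three mutually tangent circles, (centre, radius).
beads = [(-1 + 0j, 1.0), (1 + 0j, 1.0), (3 ** 0.5 * 1j, 1.0)]

# Keep reflecting a starting point in the beads (never the same bead twice in
# a row, since reflecting twice just undoes itself); the cloud of image points
# piles up on an intricate limit shape of the kind shown in the talk.
points, z, last = [], 0.5 + 0.5j, None
for _ in range(10000):
    bead = random.choice([b for b in beads if b is not last])
    centre, radius = bead
    z = invert(z, centre, radius)
    points.append(z)
    last = bead
print(points[:5])  # normally these would be plotted rather than printed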
Does this sort of leaf-like geometric patterning lie behind Paisley wallpaper designs?
I love Paisley. I've thought about selling such images but I don't think
my colour sense will lead me into fashion.
Monday 10 Apr 2017, Prof Anneke Lucassen: Cancer Research UK and the 100,000 Genomes Project
20 people, 1.5hr
The incidence of cancer C, in this country, is going up.
2014 is the last year of good statistics: just over 350,000 new cases
of all types of C diagnosed. The risk is still higher in men than women.
The incidence has gone up by 12% since the early 1990s, and we don't
quite know why. Probably a combination of some environmental factors
and being better at detecting Cs which might have gone away by themselves.
We're also not dying of other things first. Go back 100 years, lots of
us would have died from other diseases before we got old enough
to develop C.
C survival is improving overall; the total average across all Cs
is that 50% of people will survive 10 or more years in the UK, and that has
doubled over the last 40 years, due to treatments and earlier catching
of Cs. There is huge variation in survival between different C types.
Certain skin Cs have a very good survival rate, and brain tumours have
a very poor survival rate. They are completely different diseases
and talking about them as one doesn't make sense.
C is a disease of cells. Any cell that grows uncontrollably
can become cancerous. Skin cancers; leukaemias, where blood cells
overgrow and become cancerous. Gut cells can become cancerous and develop
into a bowel tumour. Nerve cells can become cancerous to develop a
brain tumour or a glioma, for example. Cell division is very
important in C. We need cells to divide to grow from baby
to adult. We need cells to divide to heal when we are cut,
and to replace general wear and tear in our bodies.
Every time a cell divides it has to copy itself, copy its genetic material,
and there is a chance that something goes wrong in that copying process.
Whilst I've been talking we've all made about 1/2 million new red
blood cells, to give an idea of the scale, and 12 million new gut cells,
all happening routinely in our bodies. It is routine and controlled, a system
of traffic lights around our cell division, saying go or stop,
finely balanced. When that balance is interrupted and the stop
signal is interfered with, for a variety of different reasons,
that's what goes wrong in C. Then there is the uncontrolled growth
of cells, which then compete with the other cells around them, squash surrounding
tissues or spread to other parts of the body.
It's not all genetics that causes our cells to divide out of control,
but it plays a part.
The influences that can make cells divide out of control do so because of
faults that have accumulated in the DNA. Environmental
influences are important; hormones can play an important part,
e.g. oestrogens and breast Cs have a clear link. Taking the contraceptive
pill or HRT has an influence on the accumulation of faults
in our DNA. There are lots of natural self-regulations. If you copy your
cells, by dividing, then things can go wrong just by chance.
Our immune system is more important than we originally thought in the
development of C, in particular with certain virus infections.
From damage to the DNA, C can arise. I will focus on inheritance,
picking up on the bits that are important and those that are not.
In nearly all our body cells, look into the centre with a microscope and
you find the nucleus; inside that are the chromosomes, which are bundles of
genes together with bits between the genes; the chromosomes are made up
of tightly wound DNA. The DNA is joined together by the DNA letters,
joining the 2 strings together. That is what we talk about as a sequence of
DNA, 3 billion of those letters per cell, composed of 4 different
letters. Those sequences of code determine the messages sent to
our body. If the messages go wrong, that's when problems can arise.
The exome is the 20,000 different genes, the sections of that DNA.
The genome is all our genetic material in one cell, all together, the genes
and the bits between.
The word genome derives from the words gene and chromosome.
Just 1 letter change in all that sequence can be enough to cause
really dramatic changes to our bodies, but it all depends on where
that letter change occurs. All of us have several different
mutations within our genetic code. If those occur in points of the code
that don't do much, then there are no consequences. Some of those changes
can occur right now as I'm speaking, a mutation in one cell
then copied to the daughter cell. Some of those mutations are inherited from our parents. We inherit 2 copies in our cells, one from each parent.
Often if you have a mutation in one copy, that might disadvantage
you but alone is not enough to cause a problem, because the other copy
needs to be knocked out. The other copy can be sort of rescuing
the bad copy, or the bad copy can over-ride the normal one.
For different diseases there are differences there. For C it is often
the case that you might inherit one copy that puts you at a disadvantage,
but it's only when the other copy is knocked out, by chance
or radiation exposure or something like that, that the C arises.
All C is genetic but not all C is inherited. Any C arises as the
result of genetic faults in the DNA, but most of those faults
are not inherited. The difference between inherited forms of C
and chance or sporadic forms of C is that if you have inherited
a C-predisposing gene, you start off life at a disadvantage.
In order for the C to arise you need more than 1 mutation
or bit of damage to the DNA. A sequence of lots of
different steps is required before the C starts. If you have inherited one
of them, you start off disadvantaged. That's why in the inherited forms of
C we tend to see C at a much younger age than in the sporadic forms of
C, because they started with a disadvantage and needed fewer
steps to accumulate before the C arises.
When we talk of inherited Cs, that's not new. Aldred Warthin described
a family between 1895 and 1915 who had very young onset Cs.
This involved bowel and womb Cs; he described it as an unusual
combination of Cs, which we know today as Lynch syndrome or
hereditary non-polyposis colorectal C, and we know the genes that
you can inherit that cause it. But really we've known about this
for over 100 years. There are other examples of familial Cs that we've known
about for a long time, from family histories, where there must
be an inherited component, but only in the last few decades
have we found out what that component is.
For breast C there is an old headline, "Her mother died of it, her aunt has it, she has it,
and her 3 daughters", accompanied by the fact that once the gene
was discovered, the test for that woman spared her from
the risk of the surgery that she was going to go for because of her
terrible family history. She had not inherited the gene that was
in her family. Then there is the Angelina Jolie effect: she had a BRCA1 gene mutation
inherited from her mother. Her mother had ovarian C at a young
age and a wider family history of breast C, and after she had a genetic test
which showed what the cause was in her family, Angelina went
on to have a predictive test for BRCA1, which showed she had inherited
the same one, and she went on to have a risk-reducing mastectomy
and risk-reducing removal of her ovaries.
The demand for BRCA1 testing, and the similar BRCA2 gene,
went up dramatically after her story.
We receive lots of referrals to our genetics service: please test this
person for these 2 genes. A good thing in the sense that she raised the
profile of people who previously were not getting appropriate
testing. But, what many people don't realise, these 2 genes only
explain 5% of all breast and all ovarian Cs. The majority are
explained by other causes. It's not even straightforward to do
that test to find if you are in the 5% category, because the 2 genes
are both very big and the inherited bit can be different in each
family. So the lab has to trawl through more than 10,000 letters
of genetic code in each gene and look to see if there are any changes
in that gene that have been inherited that might explain
a family history. We all have those 2 genes and we all have some
variation in those genes, and the lab has to try and decipher what is just
normal variation and what is causing the high incidence of
breast and ovarian Cs. The more we test, the more we realise that we find
variation but it does not mean much. So we have to be very careful
about saying someone is BRCA1 or 2 positive, because it may be a spurious
red-herring finding. I spend a lot of my time telling women
intending to be tested that it is not as simple as they think.
1 in 3 of us will develop a C at some point in our lives, and across the
board, for all Cs, 95% of those will not be due to a single
inherited factor. So in 95% of cases there may well be an inherited
component, but that component is very complex, consisting of
lots of different factors interacting in ways we don't yet fully
understand. Part of that interaction will also be protection.
One gene protects a bit here; that one increases your
chances or protects in another environment. We just don't know enough
yet to put all that together into 1 algorithm that says, with your particular
genetic combination and your particular environmental exposure in
your lifetime, this is your risk of C type x, y or z.
But the headlines make it sound as if we are at that point.
The press are more responsible these days but they often make it sound
like: we found a new gene, go to your doctor, get tested for that gene
and you will know whether or not you will get C.
It's not unusual for someone to come to a clinic waving a paper
with a headline like that: can I have a test for these newly found genes please?
From a research point of view, finding a new breast C gene is
helpful as it gives insights into the mechanisms of the disease, but it
often fails to translate into a useful test, unless it is a very high risk
gene. If the newly found gene increases your risk over the next 40
years by 1%, that's not a clinically useful test to have.
Similarly for bowel C, for example. It's that bit that is not always
conveyed by the media reports.
Then there is the Kylie Minogue effect. She had breast C 10 years before the
AJ effect. She also had a gene test, but her test was looking at
expression of a particular gene on her C, so she could
receive a targeted treatment specific to that gene change.
Her gene change was not inherited, it was the result of the uncontrollable
growth of her breast C. That was HER2 gene expression, which meant
she could be treated with herceptin, as that blocks the growth factor
on the cells and shrinks the C cells more than normal cells.
That's what we are aiming for, targeted treatments.
The testing is easy but the interpretation can
still be difficult. The tech is there to sequence our code, just like that,
but the problem lies in the interpretation of the results.
There is a realistic promise there, but the practice tends not to
deliver as the headlines would imply.
James Watson, co-discoverer of the DNA structure: "we used to think our destiny
was in the stars, now we know it's in our genes". Now we can sequence our
DNA, we will know what our future holds. It is more complicated
than that; we do not have a crystal ball as part of this process.
We might do better to remember a quote from John F Kennedy, 30 years
earlier: "the greater our knowledge increases, the more our
ignorance unfolds". In the genomic age, that is very true.
We know more and more, we test more and more, massively more data,
but what that often does is expose what we don't know better
than we could before.
In the last 10 years alone there has been a 10,000-fold increase in the speed, and decrease
in the cost, of genomic sequencing. In 2001 it cost 3 billion dollars
and took several years to sequence 1 entire genome. In 2017 you can
do that for 1000 dollars, still going down, and do several
in a day. A phenomenal scale of change. People assume that if you can do it
faster, you get answers quicker. But you gather a whole load of data
and lack the interpretation. To interpret this, you need to do lots
and lots of clinical investigations, including other
family members etc., and the overall costs can really rack up.
An analogy is comparing fishing and trawling.
We are no longer fishing for genes that we suspect are causing
something, from a family history or an appearance.
Say we have someone who has something like the appearance of
Down's Syndrome: we know what bit of the genetic code to home in on.
If you start off not knowing where the gene may be, trawling the
entire genetic code can now be a more cost-effective process than fishing for
your single fish. But you get all sorts of fish that you don't know how to
cook, maybe poisonous, old boots, unexploded bombs, all sorts
of stuff, analogous to trawling.
In the USA they are a bit more free and easy with their testing
compared to the NHS here. People pay extra money for a broader
gene test, but they don't get any answers. They find risks at most,
when they were expecting answers. There have been many headlines in the US
expressing surprise from the people who pile into expensive
testing and get no answers.
The iceberg is also quite a good representation. The bit that sticks up
above the water is the people with a strong family history of C,
or a specific set of signs or symptoms. They are more likely to have
the strong genes that give strong predictions. That family in 1895
was sticking out of the water. The vast majority is below the surface,
much less tangible, you don't know where it is: the weak genes
and environmental factors that interact in a very complex way,
and give poor predictions in the clinic.
We are tackling some of this through the 100,000 Genomes Project, GP.
It's looking at the lower part of the iceberg, or looking inside the
trawl net. We are focusing in on a certain group of NHS patients
that are coming through the doors anyway, who aren't getting
answers from current NHS genetic tests. For those people we will
look through their entire genome, 3 billion letters of it,
and see if we can find anything there that explains their particular condition.
It is divided into 2 groups: rare diseases and Cs. The 2 are very
different. For the C patients, we sequence the genome they've inherited,
which is in every cell of their body, and compare that to the genome of their
particular C. The comparison will hopefully give us clues
where to target as well as how it may have arisen.
In the rare diseases, there are a lot of individually rare diseases,
but put them all together and they are relatively common: 1 in 17
people have a rare disease. If we've exhausted the normal
testing, then comparing the (often) child's DNA
with the parents' genomes might give us important clues.
The whole project was announced in 2012 and took a while to get
going. A lot of investment; the plan was 100,000 genomes in
70,000 patients. In C studies there are 2 genomes from 1 patient.
There are 13 different genome centres around the UK and several
industry partners, deliberately brought on
board to try and encourage the development of a genomics industry.
The Chief Medical Officer established 3 advisory groups to the
GP: an ethics group, a science group and a data group; importantly
they interact. I'm on the ethics group, so I have some interesting insights into
the ethical discussions about this venture and testing.
There are 4-fold aims. To create an ethical and transparent programme
based on consent. This was an offer to patients; they could only
take part if they were fully informed about the implications.
To bring benefits to patients and bring a genomic service to the NHS,
and be the first in the world to do so. There are a lot of genomic ventures
around the world as part of research, but within the NHS we'll be
developing this as a diagnostic tool. The hope was to stimulate
scientific discovery and medical insights by doing that,
and to stimulate UK industry and investment.
Scotland is now on board as well, and Wales.
Amy has a rare disease. She will give a blood sample, which is
representative of her inherited DNA; it could also be a cheek swab.
Then, if possible, the genome from both her parents to compare it with,
to rule out normal variation. If we found something in Amy that looked
suspicious, like a missing bit or an extra bit, then we check both
parents, and if we find one of them also has it, then the significance goes
down. Whereas if it's new in Amy, that is much more important.
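A toy illustration of that trio comparison, in Python; the variant lists below are invented and real pipelines work on millions of positions with far more subtle rules, so this only shows the shape of the logic:

# Toy version of the trio filter described above: a variant seen in the child
# ("Amy") loses significance if either parent carries it, because it is then
# likely to be normal family variation; a variant new in the child (de novo)
# is the more interesting finding.
amy    = {"chr7:117559590 G>A", "chr2:47403210 del", "chr15:48787320 C>T"}   # invented
mother = {"chr7:117559590 G>A"}                                              # invented
father = {"chr15:48787320 C>T"}                                              # invented

inherited = amy & (mother | father)    # present in a parent: significance goes down
de_novo   = amy - (mother | father)    # new in Amy: much more likely to matter

print("probably normal family variation:", sorted(inherited))
print("new in Amy, worth a closer look: ", sorted(de_novo))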
The more we analyse our genomes, the more we realise that the
variation is much more widespread than we initially thought.
There is a study in the USA looking at healthy octogenarians,
analysing their code, and they're finding all sorts of mutations,
bits that would predict nasty diseases, and yet they are healthy.
Our ability to predict from changes in the code is not nearly as good
as we originally thought it was.
For the C patients, we take DNA from the normal cells, unless it is a blood C,
then compare to their tumour.
There are 2 routes into the GP for C. The familial Cs go into the rare disease
branch, like the 1895 family. If you have a C then you go into the
C arm, with a different type of investigation.
With more knowledge about the Cs, the blunderbuss treatments of the
past can be refined a bit and made more targeted.
You kill off the C cells but kill off a lot of other cells as well,
which is why your hair falls out and you feel miserable. If we can target the C
cells only, then that is far preferable.
The GP project will collect medical details of the individuals
along with the genetic data. That means we cannot
anonymise this genetic information, it is identifiable.
So the data control is really important.
The sum total for the UK is now into the 20,000s, it's going
well. Locally something like 2000, roughly. We've relatively few
results at the moment. This is to be expected. There are 3 different types of
results that may come out of this. The main findings are why you've
gone into the project in the first place, then a bunch of additional
findings that are nothing to do with going into the project in the
first place, a sort of "let's offer you an MOT while looking
at your genetic code", to see if there is anything else wrong.
That was controversial, as to whether it should be disclosed
automatically, or whether people should be given the choice, or
whether there is a choice about unknown unknowns.
Then there are some additional findings along the lines of: if "Amy"'s parents
were intending to have more children, then both would be checked to
see if they were carriers of a particular condition, e.g. cystic fibrosis,
to see if the risks to future children were increased. The controversial
bit about that was that the results would only be given if both members
of a couple are carriers. If just 1 is a carrier then the future risk
is not increased and that result would not be disclosed.
This project is not a pure research or pure clinical
venture, it's a mixture. The rules and regulations of the two
are very different, causing no end of confusion in hybridising the 2.
The aim to get direct clinical benefits to patients is clearly a
clinical aim, fundamental to the NHS. But the aim to make
new discoveries and understandings about diseases is purely a
research aim, not what the NHS is set up to do.
To develop a genomic medicine service for the NHS is a clinical
capacity-building aim, and to support companies and researchers to
develop new medicines, therapies and diagnostics is very much an
industry and research aim. So there are a lot of questions about how
someone can consent to all of these, in 1 go, in a meaningful way,
when you've simply come in for a diagnosis. Is it really ethical to
offer someone a complete genome test that might help diagnosis
when they can only take part if they agree to all of these?
An all or nothing project, sign up for all of it or none of it.
So it is a novel hybrid of research, clinical, service development
and industry capacity building. Exciting, but I think it also
has its problems. We are trying to target drugs to deal with
particular Cs. Say patient "A" has an ovarian C and a
particular DNA variation, and drug A is developed to deal with that
situation, not for anything else. Not a blunderbuss treatment, but focused
on that mutation. Patient "B" has a different mutation that leads to
the development of drug B. Patient "C" might have a totally
different type of C, or a different location, but it is the result of the
same mutation. So looking at the mutation rather than the clinical
picture can be helpful in knowing what drugs to target with.
Or the same brain tumour in 3 children might have a different mutation
profile in each child, while 3 different tumours, in different places, in 3 different
children might have the same mutation profile.
We are looking for particular markers that may say something about that
particular C, markers for particular drug resistance and markers
of particular side effects; patients can then be stratified into different
types and each gets the corresponding tablets for their particular C.
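A tiny sketch in Python of that mutation-led (rather than organ-led) matching; the mutation names, tumour sites and drug labels are all invented placeholders, not real therapies:

# Toy stratification table: drugs keyed by mutation, not by where the tumour is.
targeted_drugs = {"mutation A": "drug A", "mutation B": "drug B"}

patients = [
    {"name": "A", "tumour": "ovary", "mutation": "mutation A"},
    {"name": "B", "tumour": "ovary", "mutation": "mutation B"},
    {"name": "C", "tumour": "bowel", "mutation": "mutation A"},  # different organ, same target
]

for p in patients:
    drug = targeted_drugs.get(p["mutation"], "no targeted option yet")
    print(f'patient {p["name"]} ({p["tumour"]}): {drug}')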
The use of genetic data and medical records is a topic of great
debate at the moment. The scandal around the care.data issue,
where the government had to backtrack pretty swiftly about sharing
medical info, is relevant to this new venture, which wants to
gather data from the population and link that to medical records.
It is a rock and a hard place situation, because without that massive
sharing process we will never know the answers.
But with that massive sharing there are risks of privacy breaches,
and how do we allow people a meaningful choice but at the same time
get everyone buying in? People started opting out in the care.data
situation, and then the data resource is not going to be there
to be useful to future generations.
Big data is crucial to the understanding of the bit of the iceberg
below the water. It's great with a very strong genetic change that
causes a very clear clinical picture, or a strong family history,
but for the more subtle interweaving of different factors we've got to
collect data on a large scale. It may be that national data
from just the NHS is not enough to get statistically significant
results, and we have to go international, and then crossing those boundaries
exposes a load more problems. So how can data sharing be developed
while retaining the trust and confidence of public and participants?
That is a moral, regulatory and technological challenge, with
no easy answer.
In my group in Soton, we're looking at the people recruited to the
GP, asking them some of those questions, through questionnaires and more
detailed interviews, to see what people think. An early finding
is that what the health professionals and the researchers expect patients
to say isn't necessarily what they say.
Picture of a man walking his dog alongside some water and the dog is in the water.
So should I tell him?
A nice analogy for the genetic code situation. He might know,
and be absolutely comfortable with the fact his dog is having a swim, and knows
the dog is there. Or the dog might be struggling for its life.
The issue about analysing someone's genetic code, finding
something out about them, maybe about relationships to other
people, raises the same sorts of questions. When you are a holder
of such info, do you tell people, or is it something they
don't need to know, something they don't want to know, or do they want to
know everything?
All sorts of ethical and privacy questions arise: moral issues,
insurance issues and potential minefields. I run a group called
the Clinical Ethics and Law Unit at Soton. We do research focused on the
ethical issues raised by genetic and genomic testing, and all sorts of interesting
issues about how info is shared within families.
Perhaps you might like to say something about epigenetics, and the way
scientists have been humbled after they said a lot of junk DNA
does nothing, and now they find it does do something?
And the ethics of telling people: I had a 23andMe test and they have a part
where you can look at whether something is serious or not. I wanted to look at it,
as you can always adjust your lifestyle with the foreknowledge?
It can be better to know and it can be worse to know.
If there is something you can do about it, the argument is much
stronger: a treatment, an intervention, a lifestyle adjustment
that may change that. There are bits of your genetic code that might tell
you you are at risk of something that you can do absolutely nothing
about. It may never eventuate anyway. 23andMe does Alzheimer's
gene testing and at the moment there is no treatment for that.
It might give you the opportunity to say yes or no about finding
out. But when a number of members of a family do that test,
then you have to think about other people finding out.
Were you only testing people who came to the hospital,
or from the general public? I put my name down for it
and never heard anything about the GP.
It's not the general public, it's people with particular conditions.
Does it give a bias, that way?
The aim is not to look at the whole population; let's look at the
low-hanging fruit, if you like. If we look at the whole population,
we will find a lot of genetic variation, interesting, but here we're
trying to find new diagnoses.
Epigenetics and junk DNA?
Epigenetics is things that affect the expression of your
genes without changing your code. So something
binding to your code alters the regulation of a gene that
is farther down. It might be something sticking to your code
that silences a gene or makes it over-active.
Epigenetics is often propagated across the generations,
such that if you inherit a particular sequence from your mum,
it behaves differently than if you inherited it from your
father. The exact sequence might be the same but, because of different
things binding to it, which we still cannot test in a whole genome test,
it will behave differently. There is a rich and emerging study of that.
Originally the GP was to collect what were called other-omic samples,
but in practice it has been too difficult to do. It's still an aim,
but not happening routinely at the moment.
Junk DNA was a term used 20 or 30 years ago: genes send the
messages, when genes go wrong, the message goes wrong, nice
and clear cut. The bits in the middle don't do anything. Actually we
now know that the bits in the middle are often important, again
in regulating things if something is bound to them. You might get a
promoter of a gene or a silencer of a gene, thousands of letters away
from the gene itself. Only now are we finding out what it does and how.
There must be bits of DNA in me that are silent, never do anything,
but in someone else will do something. Junk DNA does exist,
just much less clearly delineated than we originally thought.
That's where the JFK quote comes in nicely.
The very basics: I'm assuming that C starts from 1 errant cell,
but can I also assume that happens quite often but never develops to
the 2-cell or 4-cell stage, so epigenetics can come into play in that
early stage?
By definition it's not a C then, it is not growing uncontrollably.
The pre-Cs may go away by themselves. For example a very common
ductal carcinoma in situ in a woman's breast will, we think,
often regress by itself. But now that we are better at screening
for things, it's a rare surgeon who would leave that untreated,
because it might go on to be a full-blown C and spread to other parts
of the body. You've got protective factors, control mechanisms,
that may allow things to go wrong for a little bit and then
kick in, and retain control again. The immune system is very
important there. The more we learn about it, the more we
realise that some of those stop checks and signals are your own body
recognising that the cells have changed so much
that it looks like it's infected, and so needs attacking. A good control
mechanism that needs getting on top of.
So young kids or teenagers, they all could potentially have a C
any day, a number of times a year, but it never develops?
Yes. If you've inherited a mutation
to be continued