Tuesday 17 July 2018. Jonathan Ridley, Head of Engineering, Maritime Science
and Engineering, Solent University: Hydrofoil design and use
(second talk; previous talk on yacht design)
An example of one of our one-time students: Jason Kerr, who graduated in 1994.
The pinnacle of yacht design is the Admiral's Cup, and the last one had three teams
with some of our graduates working for them. By the third year our students are doing
3D CAD work, designing vessels from scratch: structural design and theory,
power, systems, aerodynamics, everything.
A hydrofoil we built in 2014, called Solent Whisper, is where
our interest in small-craft hydrofoils (H) took off.
A history of hydrofoils.
Our hero is Sir George Cayley, a scientist who made astounding
contributions to science and engineering, though we've almost forgotten about him.
He re-invented the wheel (the tension-spoked wheel); with little knowledge
of ship stability he invented self-righting lifeboats; he invented
tracked vehicles, calling them the "universal railway".
Before internal combustion engines came along he invented
an internal combustion engine that ran on gunpowder; he invented
automatic railway crossings, and also the seatbelt. He started to look at
aerodynamics. In the late 1700s and early 1800s anyone who wanted to
fly imitated birds and flapping wings, an approach fundamentally flawed for humans.
Cayley observed birds and started to derive a theory of flight.
In 1809/1810 he published his scientific paper on aerial navigation.
His gunpowder engine would have worked better with a gaseous fuel.
It was the same with powered flight: there was no engine light enough and
powerful enough, but if we had one, this is what we could do.
He was the first to look at an aerofoil shape, and at how lift, drag, thrust
and weight would have to be balanced together to make it work.
He didn't have a wind tunnel, so he created a long rotating arm with a foil
on one end, a counterbalance on the other end and a motor to
rotate it. Measuring the rotating force and the lift: simple experiments,
but he learnt a lot. He learnt about the control of foils moving
through the air and how to control lift. He came up with heavier-than-air
principles and understood and derived the centre of lift and the centre of
pressure: how forces move and where they move. He looked at the
camber effects of different shapes and different lift and how to control it, and in
1848 he had something we'd recognise today as an aircraft, with a set of
wings, a tailplane and a rudder for control: the world's first glider.
He got a local boy to fly it, as he was fairly light, and launched him off
a hill. In 1853 he had a second successful glider, using his
coachman this time to fly it. One longish flight, and after landing the coachman
would not repeat it.
The functional knowledge of foils is essential for aircraft and essential for Hs.
All goes quiet until Enrico Forlanini starts to look at Hs.
In 1911 he achieved 42.5 mph on water with just 60 hp of engine.
He did a lot of background work and published a lot about Hs
from an experimental point of view. From 1889 to 1901 John Thornycroft
experimented on the Thames (?). In 1905 William Beecham (?) did a deep scientific
analysis of Hs. In 1919 Alexander Graham Bell had a go: his HD-4
hydrofoil, with 700 hp, achieved 71 mph on water. That would
be pretty impressive even today. The fastest object on the planet in 1919
was the Sopwith Dragon aircraft, which would do 150 mph.
Development from then on comes from the aircraft industry.
They had seaplanes that needed to take off from water; water is sticky
stuff and it is difficult to go fast enough, so put some hydrofoils under such
planes, get some lift, and you can reach more air speed. Not particularly
successful: going faster creates more drag, and your aircraft
becomes less efficient. In 1954 came the next big step in hydrofoils.
Frank and Stella Hanning-Lee: Frank was a naval officer who
survived WW2 and wanted to set the world water-speed
record. While in the USA he married his wife Stella,
the pair keen on breaking the speed record. They built a hydrofoil they called
the White Hawk. All the Hanning-Lees had was a drawing of what they
thought their hydrofoil should look like. They needed someone with
more technical knowledge and enquired of Imperial College as to
whether they had a student who could do the stress calculations
and design the vessel. They were lent one Ken Norris, who went to their
house in Chelsea and, with no proper plans, had to start from scratch.
It looks a bit like Bluebird, because Ken Norris went on to
design Bluebird. His brother Lew Norris was also an experienced engineer,
who worked for John Cobb (?), trying for the world water-speed
record at a similar time. Ken Norris designed the K7 Bluebird that proved fatal
for Campbell, at over 300 mph. Ken Norris also designed the car, the CN7,
which stands for Campbell-Norris. He then went on to work on Thrust2 and then ???.
Somehow, with his connections to the Navy, Frank gets hold of a
Whittle (?) turbo-jet of 1943 vintage. They go as fast as they can down Lake
Windermere, a few runs, getting faster and faster, without any
timed runs, just to see how well the vessel works. They get up to a
certain speed, then a bit faster, and it nose-dives and sinks.
A jet engine running at full speed, suddenly immersed in water,
is a no-no. They managed to get Rolls-Royce to lend them
two Derwent jet engines and the technicians to run them.
They return to the Lake District, see how fast they can go, and the
same thing happens: at a certain speed, it nose-dives. This being the
1950s, it is a tandem cockpit and both of them are in there. Frank did the running and
testing, but it was always his intention that when they broke the world speed record,
Stella would be driving.
As it was not a formal record attempt, it was not recorded. A pitot-static tube
measuring their speed didn't work. Frank reckoned they were doing over 100 mph,
but given the physics and the difficulty of judging speed while on water, that
speed was unlikely. They spend winter at Windermere, working on the
boat, waiting for a break in the weather. The break never comes, and the
Americans get interested in really fast Hs. They go over to the States,
the US Navy gets interested, and the technical points are discussed.
On the technical front Frank and Stella were happy and the US Navy was happy,
but contracts had to be arranged. After several years of lawyering,
Stella does a few demo runs and the US Navy decides
the vessel is outdated and is no longer interested.
As a postscript to this vessel: Frank brought it back to the UK,
bringing it through Southampton Docks. HM Revenue and Customs
turned up, saying Frank owed them money; they treated it as an import
and impounded it. Nobody knows where that boat ended up;
it did not leave Southampton. Perhaps it still lies in the corner of an old
warehouse somewhere in the docks.
They had 4,000 pounds-force of engine, delivering 17.8 kN
of thrust. Go back to Alexander Graham Bell: 700 hp got 71 mph.
This time, with all that thrust, they got 70.6 mph. All that development
and time, and just short of the previous record.
Then a few commercial Hs: the Boeing Jetfoil, which still operates
in Hong Kong harbour and elsewhere. Fairly successful, with roughly the
same passenger payload as a 737 and much the same cost; still
operated today. There was a lot of military investigation into Hs
for performance and speed. They did lots of trials but did not get
very far. One problem for the military is the generation of huge
amounts of spray behind them; the spray is very cold, so they are
not stealthy, on top of the noise. Also, hit any debris in the water
and Hs are relatively delicate.
They run between Rhodes, Greece and Turkey, and on the larger
rivers of Eastern Europe and Russia. So why have Hs not become bigger and faster?
Data points of commercial and successful Hs, plotted as power-to-weight
ratio against top speed: whether you have a little power or a lot,
you can get to 70 mph and then you basically hit a
brick wall. With Hs there is a top speed that cannot be exceeded.
In the 1950s and 1960s the Hs themselves were made of aluminium,
some (?) with steel, moving very quickly through the water. Hit something
submerged and they are easily damaged, and it's back to the
shipyard for a very expensive refit. The quote is that Hs are very nice
until you slice a dolphin in half. It all went quiet
in the early 1970s. Then the sailing world got interested, with
the International Moths (?), the closest sailing gets to a blood sport.
They are lethal. The rules for Moths were that they have to be within a certain
set of dimensions, and since the 1970s carbon fibre has just about
come into the cost range of the people into these sorts of craft.
We start to see their development, going faster and
faster. Moth sailors realised quickly that if you wanted to really
hurt yourself, then get above the water and you can fall
off at greater speed than ever. From 1974 there were a lot of
individual trials and experiments, empirical work rather than university
research: going round Portland harbour as fast as possible.
Even today, the Moth class is a pinnacle of sailing in terms of
small-craft performance, weight, speed and technology.
From the 1980s and 1990s to 2000, the America's Cup, which by 2003 was using
the IACC (?) class, a very graceful monohull, but from
an audience perspective, two of them racing far off is pretty boring.
They developed new rules and looked at multihulls: catamarans,
trimarans. The team to beat recently has been Team New Zealand,
the world leaders in this sort of tech. For the 2013 America's
Cup, everyone agreed on big cats. The Kiwis launched their
boat and the spies were out for the trials. It was sailing, but extremely slowly.
Second day out, and everyone else was really pleased at how slow it was.
Daggerboards: as you are sailing, the wind is trying to push you
sideways as well as forwards, so to stop that you have a vertical
board underwater, the daggerboard. The Kiwis found a little
loophole in the rules: the rules did not say what you could or could not
do with daggerboards, just that they were allowed.
They bent the tips of the daggerboards in, and on day 3 of the
trials in Auckland harbour the boat shot off into the distance.
Mass panic, and a huge amount of plagiarism as everyone tried to
quickly amend their daggerboards. This was the start of really
big boats going faster: far better hydrodynamics and far better materials
allowing such large structures to be built. Unfortunately they were
inherently dangerous; in one of the races a UK sailor was killed
on one of these vessels. The organisers decided for the next cup
to throttle things back a bit. The next race was still cats going as fast as
possible, but a bit slower; the speed record was just over
40 mph, fast enough. For the next cup coming up, AC36 in 2021,
there is a whole new type of H: the foils cant, rotating into or out of the water
as required, and the vessel can sail along balanced on two foils.
Mathematically it's possible; it will be interesting to see it in practice.
A designer involved with this says they look as though they should
crawl up a beach, lay an egg and return to the sea.
They all work by lifting the vessel bodily out of the water.
Another area is where we use Hs not to lift the vessel but to control
the flow of water around it. In the 2016 Vendée Globe race,
the non-stop single-handed round-the-world race, a roughly 60-day circumnavigation,
the boats have a H on each side, extended once the vessel is at sea. They produce a
lifting force to roll the vessel upright, allowing it to sail much faster.
Video of Schetana (?), designed by one of our graduates who
graduated in 1993, an Open 60 (?) class. Horrible weather conditions, but the H
keeps the vessel upright; note how fast it is going, with the Hs in extended mode.
Doing about 25 kn in those sorts of waves, quite fast enough.
When it was racing in the Southern Ocean just south of NZ it hit some
debris in the water, severely damaged the hull and limped back into NZ,
all on board safe and well, but that was the end of its round-the-world race.
We are starting to see Hs fitted to very low-speed vessels. You don't necessarily
want to lift the whole vessel out of the water, but a fairly small foil
just lifting the stern slightly can present a better underwater shape
and can reduce drag. Not a huge saving, perhaps 1 to 3%, but for a
vessel in almost continuous operation that saving can be
significant. I'm involved with a PhD study of Hs on pilot
boats for the Port of Southampton, which burn 1 million litres
of diesel a year.
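For scale, even that modest 1 to 3% saving adds up quickly at this rate of consumption; a quick back-of-envelope check in Python:

```python
# Back-of-envelope: what a 1-3% drag/fuel saving means for pilot boats
# burning 1 million litres of diesel a year (figure from the talk).
annual_diesel_litres = 1_000_000

for saving in (0.01, 0.03):
    saved = annual_diesel_litres * saving
    print(f"{saving:.0%} saving -> {saved:,.0f} litres/year")
```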
Another project, in our towing tank: a higher-speed vessel
with a bolted-on 3D-printed H, testing the effect.
Assume our interest is to lift a vessel out of the water and go as fast
as possible. An America's Cup boat sails along with the sails providing the
driving force forwards, to go faster and faster. Working against that is
resistance, the water against the hull. As long as our driving force is
bigger than the resistance, we accelerate. As the speed builds, the resistance
gets bigger, and once the forces balance we hold a steady speed. To go quickly we
must reduce the resistance as much as possible.
The components of resistance, simplified.
Air resistance pushing against the vessel is complex, as it is tied in with the
sails; ignore that for the moment.
Viscous resistance: water is sticky stuff. Consider a stationary
vessel with water flowing past it like a stream. Looking at individual
molecules of water, on the surface of the object there is friction
with the surface and the molecules slow down. The next layer of molecules
above that has a bit of friction with the molecules below, but less, and it gets
less with each layer away from the object; farther away, the molecules can move
faster and faster. This is the boundary layer. Newton tells us that
force equals rate of change of momentum, and if we are slowing these molecules
down, that change in momentum must create a force: drag.
The drag force can be significant.
Viscous resistance depends on the water density, the vessel velocity,
and the friction coefficient, which itself depends on the density, the vessel
length and the velocity (through the dynamic viscosity of the water) and on the
shape of the object, the form factor. Most importantly, it is directly proportional
to the wetted surface area, the amount of the vessel under water.
If I can halve the wetted surface area, I instantly halve
the frictional resistance. For an America's Cup vessel at maximum draught,
with 40 cm of hull underwater, there is 30 sq m of wetted surface area;
if I halve the draught, that wetted area drops to 15 sq m.
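The proportionality to wetted area can be sketched in Python using the standard ITTC-1957 correlation line for the friction coefficient; the 10 m/s speed and 15 m waterline length below are assumed figures, with only the 30 and 15 sq m wetted areas taken from the example above:

```python
import math

def ittc57_cf(reynolds: float) -> float:
    """ITTC-1957 model-ship correlation line for the friction coefficient."""
    return 0.075 / (math.log10(reynolds) - 2.0) ** 2

def frictional_resistance(v: float, length: float, wetted_area: float,
                          rho: float = 1025.0, nu: float = 1.19e-6) -> float:
    """R_f = 0.5 * rho * v^2 * S * C_f, in newtons (seawater values assumed)."""
    reynolds = v * length / nu          # Reynolds number on waterline length
    return 0.5 * rho * v ** 2 * wetted_area * ittc57_cf(reynolds)

v, length = 10.0, 15.0                  # assumed: 10 m/s, 15 m waterline
r_full = frictional_resistance(v, length, 30.0)   # full draught, 30 m^2
r_half = frictional_resistance(v, length, 15.0)   # half draught, 15 m^2
print(r_full, r_half, r_half / r_full)  # ratio is exactly 0.5
```

Because the resistance is linear in the wetted area, halving the area halves the frictional resistance regardless of the assumed speed and length.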
Then wave-making resistance. As the vessel moves through the water,
it creates waves: it creates a pressure distribution around the vessel
under water, but also a disturbance at the surface, the
Kelvin wave pattern. Waves from the bow, diagonal waves coming off,
and waves coming off the stern. These wave patterns interact with
each other and create drag.
Wave-making drag depends on the water density, the vessel speed and the wetted
surface area, but also on something called the wave-making coefficient, which
is really complicated. It is difficult to calculate; that is why, to solve this
problem, we still build model boats and tow them down a tank of water.
It is more accurate, and more fun, than trying to do the maths for it.
WMR depends on speed and on the shape of the underwater volume and
wetted area. We do have equations that allow us to calculate
WMR, the Reynolds-averaged Navier-Stokes (RANS) equations: non-linear
partial differential equations to be solved simultaneously.
Imagine something like 25 million separate variables to get this to work
well. If we ganged together the world's supercomputers and asked
them to solve it for us, we would die of old age before the solution arrived.
A graph of an AC catamaran with no foils on:
as it goes faster and faster, the WMR goes up. If I can lift the
vessel out of the water, the amount of it pushing the water and waves out of the
way reduces, the wave-making coefficient goes down, the WMD goes down,
and in theory the vessel can accelerate (ignoring air resistance).
The VR is relatively small, the WMR is bigger, and if we add them up
we get the total drag. There is a dip in this curve, caused by the
waves in the WMR interacting with each other at different
speeds. We accelerate our vessel, going faster and faster, until
there is so much resistance that we reach a fixed maximum speed.
For the H version of the AC yacht, the foils start to generate lift.
At about 15 mph (7 m/s) the lift they generate is sufficient to
unstick the vessel and start to lift it. The problem with Hs
is that they increase the wetted surface area, counter to what we are trying to
achieve. At slower speeds we get an increase in the VD, but at about 7 m/s
the lift starts to overcome that, with less hull in the water.
As we accelerate further, the VD does not increase too much.
The same goes for the WMD; it is not so much of an issue, as it is not so
dependent on the wetted surface area. With the start of lift, we
reduce the underwater shape and start to control the drag.
Add the two plots together: when the H craft is up to about
15 m/s, or 30 knots, we are generating about half the drag of the
vessel without Hs. Hence, instead of being limited to 20 knots, we can
do 40 knots.
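A toy Python model of this behaviour; every number here (areas, coefficients, the takeoff schedule) is invented purely to illustrate the shape of the argument, and none of it is measured AC data:

```python
RHO = 1025.0  # seawater density, kg/m^3

def wetted_area(v: float, foiling: bool, s_hull: float = 30.0,
                s_foils: float = 4.0, v_takeoff: float = 7.0) -> float:
    """Foils add area at low speed; past ~7 m/s the hull progressively lifts out."""
    if not foiling:
        return s_hull
    if v < v_takeoff:
        return s_hull + s_foils                   # foils only add wetted area here
    frac = min((v - v_takeoff) / v_takeoff, 1.0)  # fully foilborne by 14 m/s
    return s_hull * (1.0 - frac) + s_foils

def total_drag(v: float, foiling: bool,
               cv: float = 0.002, cw: float = 0.004) -> float:
    """Viscous + wave-making drag, both taken as proportional to wetted area."""
    return 0.5 * RHO * v ** 2 * wetted_area(v, foiling) * (cv + cw)

for v in (5.0, 10.0, 15.0):
    print(v, round(total_drag(v, False)), round(total_drag(v, True)))
```

Below takeoff speed the foiler actually has more drag (extra wetted area); well above it, far less: the crossover described above.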
A bit of H theory.
A theory often bandied about is the "intelligent fluid" theory.
H or aerofoil, it does not matter: fluid hits the front of it and
splits, some going around the top, some around the bottom.
We apply a bit of logic to Bernoulli: along the bottom there is a shorter path,
so the flow must be going slower, and if you slow a fluid down, Bernoulli says the
pressure goes up. Along the top there is a longer path, so the fluid must go faster,
and then, via the carburettor or Bernoulli, the pressure must drop.
So high pressure underneath and lower pressure on top must push my foil up.
This is the classic theory, taught as an explanation of how a foil works.
The reality is different. What goes further must travel faster? There
is nothing to say it must travel faster. Everyone says the top molecule must go
faster than the bottom one so that they can meet up again at the same point
behind the foil; despite much research, no one has conclusively proven that
water molecules mate for life. With a big foil in a small wind tunnel,
because of blockage effects, this can happen. In reality it doesn't work that way.
So start with a flat plate; at slow speed, flat plates make
remarkably good aerofoils, the paper-aircraft scenario.
Take the flow in from the left, hitting our flat plate inclined at
some angle of attack, so that it will try to create some lift. We will assume
there is no viscosity in our fluid; the molecules flow perfectly
over one another, an ideal fluid.
Plotted here are streamlines, like isobars on a weather chart:
they tell us the direction of flow and the pressure and speed.
The closer our streamlines are together, the faster the fluid flows,
like isobars and wind on a weather chart. For our foil, the top
fluid flows around the top, the bottom fluid around the bottom, and
in between there is a point where our molecules hit the foil
and have to decide whether to go up or down. They sit at that point,
the stagnation point. At the trailing edge there is again a stagnation point.
There is a symmetry here in the shape of the streamlines, mirrored
about the centre plane. The symmetry tells us that if we do the maths of this
and try to calculate the lift and drag, then in terms of a vertical
force, the force lifting the foil up and the force pushing the foil
down are identical, cancelling out: no lift and no drag.
This is referred to as d'Alembert's paradox.
Now put some viscosity into our fluid, a real fluid, and look at the
trailing edge. The water comes down around the trailing edge
and tries to go around the corner and back up to meet the
stagnation streamline. With no viscosity that would be fine,
but viscosity gives us a different scenario: a viscous fluid does
not like going round corners. It tries to run around the bottom of the
corner, but it runs out of energy due to the viscosity.
So instead it rolls around, disappears up itself and rolls
downstream: the starting vortex. As soon as we move a foil from static,
a starting vortex appears at the trailing edge and disappears downstream.
For a good demo of this, when you have a bath,
put some talcum powder on the water, get a credit card, carefully place it
in the water at just 5 or 10 degrees angle of attack and move it very slowly.
You will see the starting vortex. The same happens with canoe paddles.
Vortices are incredibly powerful, tornadoes for example. This vortex is very
small but very powerful. It acts like a gear wheel: it starts to pull the
rest of the fluid round in the opposite rotation behind it, the circulation.
Our starting vortex, going round very quickly with a very small diameter,
creates a much bigger circulation of flow around the back of the
foil. Again, take your credit card, move it through the water and carefully
lift it out: you will see a little vortex, the starting vortex, going
downstream, and you will also see the talcum powder rotating round, showing
the circulation pattern. A simple experiment you can do at home.
Our vortex runs off downstream, but the circulation stays; it hits the
back of the foil, gives the flow a bit of impetus to change, and
pulls the stagnation streamline right back to the trailing edge. When it
does that, we have lost d'Alembert's paradox, lost the symmetry:
we start to get accelerated flow along the top and higher
pressure along the bottom, and we start to get the lift force.
We can now put a bit of camber, or curvature, into this foil
to control the lift, and we can put some thickness into the foil
for strength, so it doesn't snap off.
There is a calculation for a flat plate with a small angle of attack,
streamlines coloured by velocity. But it gets a bit more complicated.
This is a 2D foil with no end to it, infinitely long. With a 3D foil
we get problems. Look at such a foil from behind and down
from on top: at the bottom of the foil is high pressure, at the top of the
foil low pressure, and the high pressure rolls around the tips
of the foil into the low pressure and rotates downstream
as a pair of tip vortices. The planform of that foil really
controls the tip vortices, and the vortices themselves create extra drag,
induced drag.
When R J Mitchell was designing the Spitfire there was a piece of, ironically,
German theory, the Betz (?) minimum-energy hypothesis, which said that if
you want to minimise tip drag, then you need a lift distribution
across the foil that is elliptical. So you need a foil with an elliptical
shape, hence the Spitfire wing, trying to control the tip vortices.
A H similar to what we used on Solent Whisper. If I want to control
the tip vortices and the drag, to go fast and lift the foil and
vessel out of the water, I need to look at the planform:
how long the foil is compared with its fore-and-aft dimension. Span
divided by chord gives the aspect ratio. A low aspect
ratio is short and fat; a high aspect ratio is long and thin, like a glider
wing. More equations, but basically our lift coefficient, which measures the
lift of a foil, is reduced by a term that goes with the inverse of the aspect
ratio: the bigger the aspect ratio, the longer and thinner the foil,
the more lift you can get. The induced drag coefficient is inversely
proportional to the aspect ratio, so the bigger the aspect ratio, the smaller
the drag. All good news: if we want lots of lift and not a lot of drag,
use a long, thin foil. So there is a fast jet at one end of the graph and a glider
at the other end. With Hs there is no single neat mathematical
solution. With a short, fat foil, the ends of the foil don't
bend up much and don't deform; there is little stress within the foil
and it is easy to build. We have a low rate (?) of lift, quite a lot of
drag, and effectively small pressure changes around the foil.
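The two aspect-ratio statements can be sketched with the standard lifting-line results; the span-efficiency factor e = 0.9 and the lift coefficient of 0.8 used below are assumed values:

```python
import math

def lift_curve_slope(ar: float, a0: float = 2.0 * math.pi, e: float = 0.9) -> float:
    """Finite-span lift-curve slope: a0 / (1 + a0 / (pi * e * AR)).
    The higher the aspect ratio, the closer to the 2D value a0."""
    return a0 / (1.0 + a0 / (math.pi * e * ar))

def induced_drag_coeff(cl: float, ar: float, e: float = 0.9) -> float:
    """C_Di = C_L^2 / (pi * e * AR): inversely proportional to aspect ratio."""
    return cl ** 2 / (math.pi * e * ar)

for ar in (2.0, 6.0, 20.0):   # short/fat plate -> glider-like foil
    print(ar, round(lift_curve_slope(ar), 3),
          round(induced_drag_coeff(0.8, ar), 4))
```

More lift per degree of attack and less induced drag as the foil gets longer and thinner, exactly the trade described above.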
But for a glider-type aspect ratio there is very high tip
deflection. If I double the length of the foil, keeping the same loading and
the same shape, the long foil's tips will move up and down 16 times
more than the smaller foil's. That creates structural damage.
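The factor of 16 is just beam theory: a uniformly loaded cantilever's tip deflection grows with the fourth power of its span, so doubling the span at the same loading and section gives 2^4 = 16 times the movement. A quick check (the E and I values are arbitrary placeholders):

```python
def tip_deflection(span: float, load_per_m: float,
                   E: float = 70e9, I: float = 1.0e-6) -> float:
    """Tip deflection of a uniformly loaded cantilever: w * L^4 / (8 * E * I)."""
    return load_per_m * span ** 4 / (8.0 * E * I)

d_short = tip_deflection(1.0, 1000.0)
d_long = tip_deflection(2.0, 1000.0)   # double the span, same loading/section
print(d_long / d_short)                # -> 16.0
```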
But we also get large pressure changes, which give us lots of
lift. This is where, going back to White Hawk, you get to a certain
speed, about 70 mph, and hit a brick wall.
As we go faster and faster, we start to affect the water around us.
A substance has phases: solid, liquid and gas. Water,
outside the Arctic, is liquid, and if we play around with the temperature
or pressure we can convert it to gas. Boiling a kettle increases the temperature
and we get water vapour. If we combine temperature with pressure we get some
different effects: boil a kettle on Everest and it will boil at around 70 degrees.
Hs create lift as they move along, and on the top surface the pressure
drops low enough that we can turn seawater into water vapour.
A simple lab experiment is a flask of water with a partial
vacuum above it (not dissimilar to Donald Trump); create a complete
vacuum and the water boils at room temperature.
In slow motion, we can see the inception point is not from heat at the
bottom, but nucleation around dust within the body of the fluid.
We effectively build a bubble around our H if we go too fast, and a water H
works far better in water than in a bubble of gas. As we get to the
magic 70 mph, we start to get cavitation.
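A rough sketch of where that brick wall comes from: cavitation begins roughly when the magnitude of the foil's minimum pressure coefficient equals the cavitation number. The Cp values, depth and fluid properties below are assumed, illustrative numbers:

```python
import math

def cavitation_inception_speed(cp_min: float, depth: float = 1.0,
                               rho: float = 1025.0, p_atm: float = 101_325.0,
                               p_vap: float = 2_300.0, g: float = 9.81) -> float:
    """Speed at which the foil's suction peak reaches water's vapour pressure.

    Inception when -cp_min = sigma, where
    sigma = (p_atm + rho*g*depth - p_vap) / (0.5 * rho * v^2).
    """
    p_margin = p_atm + rho * g * depth - p_vap
    return math.sqrt(p_margin / (0.5 * rho * abs(cp_min)))

# A peaky, aircraft-style suction distribution vs a flatter one:
for cp_min in (-0.9, -0.3):
    v = cavitation_inception_speed(cp_min)
    print(cp_min, round(v, 1), "m/s =", round(v * 2.237, 1), "mph")
```

With these assumed figures, the flatter pressure distribution cavitates at well over 50 mph while the peaky one cavitates in the mid-30s, which is why the foil's pressure distribution matters so much.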
We can control that to some extent by the shape of the foil.
Compare a typical shape for an aircraft wing with a typical shape for a H.
For the wing, plotting the air pressure on the top of the wing,
we would find the peak of the low pressure, when flying straight and level,
is close to the nose of the foil: a big peak, then it drops off,
the pressure recovery, going back to the foil's trailing edge.
Hs, this one an ekla-H (?), are particularly designed so there is not the
big pressure peak at the front: a nice gentle increase and then a much
flatter line dropping off towards the back.
So we have similar areas of lift, but the suction peak is lower, so the speed where
cavitation starts is much higher. It is referred to in hydrodynamics as a rooftop
section.
Then we get another problem. Underwater video
from our towing tank: a yacht, stationary and then accelerating,
with a keel at an angle designed to create lift as a partial H, and at a certain
point it just taps the surface of the water. There is low pressure on the top of
our foil, not going fast enough for cavitation, but that low pressure
on the foil draws down the open air above it.
The low pressure sucks in air from above, a pressure drawdown:
ventilation. The carbon-fibre strut is bouncing around due to the
amount of force. If our foil is too close to the surface, we get
ventilation; we end up in a big bubble and we lose lift.
Solent Whisper being tested: you can see a tip vortex appear
when it gets a bit too close to the surface, with water vapour starting to form;
the white line is the water vapour under water.
The foil gets too close to the surface, ventilates, and the
whole foil becomes covered in a white cloud: loss of lift and a
nose-dive. Hobby-horsing along the surface, nose-dive, recover.
This is what happened to White Hawk: going so fast that
cavitation starts, it lifts too close to the surface, loses lift
and drops down, and with a turbine behind you while doing 70 mph,
that is pretty hairy.
So getting H design right, to control lift and control cavitation,
is really tricky. We need to control the foil. We don't want it too
close to the surface, but sufficiently underwater that it won't ventilate,
so some sort of control is required. Gravity is trying to pull our vessel
down. In the water normally, not foiling, in displacement mode,
we have the buoyancy force (Archimedes). As the vessel accelerates and lifts
out of the water, our buoyancy force disappears; gravity is still there,
there is a little buoyancy from the foils, and we have lift.
As we go faster and faster, we create more and more lift, the vessel
wants to lift out of the water, and we would get to the point where the foils
ventilate or cavitate. So we need lots of foil to lift us,
and then to suppress the lift so we can balance the vertical forces
in equilibrium and sail at a constant height over the water surface.
There are two ways of doing this. One is the ladder foil; it is what Alexander
Graham Bell used. Ken Norris described them as Christmas trees under water.
At slow speeds, low in the water, all the foils are submerged and
creating lift. As you accelerate, the top foils come out of the water,
and as air is 1/1000th the density of water, the lift of the top foil
drops by a factor of 1000, i.e. no lift. So down to 3 foils, then
2 foils, and you try to tune the foils to match the weight of the vessel.
Tricky for Mr Bell: going along and burning fuel, he was getting lighter,
so it is easier to do with sail as the power. This is the simplest and most
straightforward approach, but you carry a lot of dead weight for it, plus
low-speed drag. So, for commercial vessels, go for V-form foils.
There are some additional benefits to these, but mainly, as the
foil creates lift, more of it comes out of the water and less lift
comes from the parts of the foil left in the water.
The problem there is oscillation, from ventilation near the surface being drawn
down, and we need a bigger foil to offset the loss from ventilation: a vicious
circle. They work to a certain extent but are not hugely efficient.
The modern way, as used by AC vessels, is to use L-foils or T-foils.
On these we can change the angle of attack. The AC yachts will
cant the entire foil forward, level or aft for a positive, neutral
or negative attack angle, so they can trim for the best balance.
It is a person doing this; computers are not allowed. A human
trimmer has to fly the vessel, playing with the angle of attack
to get the vessel to run at a particular height above the water.
This is difficult, and the early runs required the knowledge and skill
of pilots to teach the relevant skills to these human trimmers.
For Whisper, a simple mechanical solution, like an aircraft:
we build a flap on the trailing edge of our foil. It is difficult to
do in terms of structure because the flaps are small, made out of carbon fibre
with the hinges out of Kevlar. Behind the foil hangs what is called a wand,
which has a float on the end and bounces on the surface of the water.
We can tune the wand to the foil: if our boat is too low in the
water, the wand is pushed up by the water surface, which works a mechanism
through the board and drops the flap down, and that gives lift. At the perfect
height, we make sure our mechanism is such that the flap is level
and there is no additional lift. If we go too high out of the water, the wand
drops down, the flap rises (?) at the back, we dump some lift and the vessel
drops back to the original height. A simple mechanical solution, but it works
well. In Whisper we could change the gearing in the system to tune for
different ride heights, from 20 cm to 47 cm above the water.
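In control terms the wand is a purely mechanical proportional controller on ride height. A hypothetical sketch of the logic (the gain, target height and flap limits are invented numbers, not Whisper's actual gearing):

```python
def flap_angle(ride_height_m: float, target_m: float = 0.30,
               gain_deg_per_m: float = 60.0,
               max_deflection_deg: float = 15.0) -> float:
    """Wand-style proportional control on ride height.

    Too low   -> positive flap angle (flap down, more lift).
    On target -> flap level, no extra lift.
    Too high  -> negative flap angle (flap up, dumps lift).
    """
    deflection = gain_deg_per_m * (target_m - ride_height_m)
    return max(-max_deflection_deg, min(max_deflection_deg, deflection))

print(flap_angle(0.20))  # too low: flap drops, adds lift
print(flap_angle(0.30))  # on target: flap level
print(flap_angle(0.45))  # too high: flap rises, dumps lift
```

Changing the gearing, as on Whisper, corresponds to changing the gain and target values here.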
Unfortunately, though simple mechanically, it is expensive to
manufacture. In terms of materials for building Whisper,
the cost is about 20,000 GBP, but of that, the one foil is about
5,000 GBP; if you hit the bottom while out sailing, that is quite an
insurance claim. At the tip, the end of the foil is bent
down. This is a sharklet (?), an attempt to control the tip
vortices by changing the lift distribution just at the very end.
Go too thin and it will just break off.
One main foil supports the vessel, but if you have only
one foil in the middle, you will tip forward or backward,
so you need to support the vessel at more than one point; so the rudders
carry another foil, and we can change the angle of attack on those to tune.
The design and materials of Hs, which must be strong despite being
a thin foil and must not flex, are tricky: a real challenge in designing Hs
and getting them to work properly.
Thanks for choosing me rather than going to Tim Peake's astronaut talk, also on this evening, in Winchester.
? ? multiple foils, problem with tuning?
It is difficult to get it to balance up. It requires a certain
amount of lift from the front foil and a certain amount from
the back foil, and you need to balance the two. The control we used was
like a motorbike: a twist grip on the rudder to change the rear foil's
angle of attack and balance up. The fastest that boat got to was over 30 knots.
With the tendency towards climate change, there is a big urge to reduce the
power consumed by large ships; is there a large-ship application for Hs?
This is another area where Hs have an upper limit,
of about 600 tons. As you scale your vessel up, the
weight of the vessel goes up with your scale cubed: double the scale and the
weight goes up 2x2x2, eight times. The lift generated from the Hs, as you
scale them up, scales with the surface area, so with the scale squared.
So scale up enough and not having enough lift is
possible. The power for a motorised H is quite large, and in
terms of passenger-carrying capability, dependent on the
deck space, you can get far more seats on a wide catamaran
than on a relatively narrow H vessel. The commercial efficiency driver for
passenger transit is to go for a catamaran rather than an H.
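The square-cube argument above can be checked in a couple of lines. Only the scaling exponents come from the talk; the baseline weight and lift figures are made up for illustration:

```python
def scaled(weight_t, lift_t, scale):
    """Weight grows with scale cubed, foil lift only with scale squared."""
    return weight_t * scale**3, lift_t * scale**2

# hypothetical 10 t hydrofoil whose foils can generate 12 t of lift
weight_t, lift_t = 10.0, 12.0
for s in (1, 2, 4):
    w, l = scaled(weight_t, lift_t, s)
    print(f"scale x{s}: weight {w:.0f} t, lift {l:.0f} t, lift/weight {l/w:.2f}")
```

The lift-to-weight ratio falls as 1/scale, so beyond some size the foils can no longer carry the hull, which is consistent with the roughly 600-ton ceiling mentioned in the talk.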
For smaller vessels, there is use of Hs to reduce power, in controlling the
flow around the vessel. For small commercial vessels, we still need to
get the drag down more; then that would be possible.
Last week was the announcement of the world's first diesel-electric hybrid pilot
vessel; the next development of that will include Hs for efficiency purposes.
If and when you hit something: firstly, do they have shear-pins?
And secondly, I went to the IoW yesterday on a Red Jet. It was spring tides
and low tide. I'm aware that spring tides lift all sorts of nasty
stuff that otherwise sits beached on the land. I have seen a big bit
of ex-quayside baulk of timber weighted down by large ironwork,
so neutrally buoyant and only just piercing the surface.
Yesterday the vessel turned at Town Quay and started to
pick up speed and there was a great clunk you could feel through the hull
and seat. He slowed down to a full stop, he did not go backwards, and
nothing came over the public address. I was trying to imagine what was
going on. Perhaps he hit something on the bottom, as it was low tide. Would
he have had underwater cameras to see if he'd snagged a chain or something,
and could he see if it dropped off if he reversed?
Undoubtedly what happened was something got sucked into the water intake;
the depth there at low tide is 12m, so plenty of water under them.
The prime candidate would be a plastic bag, sucked into the intake;
that would shake the vessel. This would have been cavitation:
changing the flow into the water-jet unit causes the impeller
to cavitate, which is very violent, and the whole vessel shakes.
Cavitate for too long and it's perfectly possible to eat holes in the
blades of the impeller. A fairly common event, due to the huge amount of
debris in the water, natural and otherwise. Neutrally buoyant
debris, typically carrier bags, is the prime candidate.
The shear-pin business, designed in for the worst case?
You would design a fail-safe scenario. For racing vessels and high-speed
sailing vessels, you can make the risk as low as reasonably practical
but can't negate the risk. The first Whisper prototype
was sold to people who accidentally hit Sweden with it. That removed the
foils and did a fair amount of damage. You need it to fail at a certain
point, but not rip the hull to bits as well. Rip off the foil and that
should disappear cleanly away rather than risk hull integrity. The Red
Funnel Shearwater Hs, when they were running pure Hs, did lose a front
foil once; the hull nose-dived and came up again. It scared everyone, but no ...
Rather than the catamarans, which
had foils to help the drive control, Red Funnel about 20 years ago had
full Hs that would lift the vessel completely out of the water.
They were built in Italy and are still actually operating in Ireland.
Similar to hitting debris: once you are up on the foils and moving
quickly, if there is a lot of other traffic around you,
navigation becomes difficult. The current Red Jets, if they want to
stop quickly, can just drop the buckets at the back of the
water jets and will settle fairly level, fairly quickly, in a few
boat lengths. With full Hs you have more distance to carry,
and in crowded waters that is risky.
I was thinking there may be some sort of gyroscope action and that a
H craft could not turn on a sixpence?
It depends how much you want to scare the passengers: a relatively
good turning circle, but it will bank very steeply.
That asymmetric system for the 2021 series: I would have thought the
forces on a simple blade keel were bad enough, but an off-centre
H arrangement looks like pretty horrendous forces involved?
The whole thing just does not look right.
For that asymmetric structure I can't imagine what the internal bracing
must be like?
It's probably milled titanium with a carbon skin around it for the hull.
They will be interesting. There is a move afoot, I believe,
that says let's work together on some of these really complicated
components and then we'll see who is the fastest when it
comes to driving them.
Have you pics of those Americas Cup 21 series sailing?
No generic name for them, not asymmetric foil or anything other than
AC 21. Some next-generation H sailing boats.
They look terrific fun but I'd not go anywhere near them personally.
Flat out in the Southern Ocean they'd be doing 30 knots.
Is there a sweet spot for the depth below the surface, for avoidance of
bumping or whatever?
Debris is at all heights in the water column, but if your H is too
deep under water, there is a lot of supporting strut to
hold it, additional wetted surface area and extra drag. If too
close, it will ventilate or cavitate, and so extra drag. So a
lot of time is spent trying to find the sweet spot: 1m under water,
2m under water. All quite difficult to test at model scales;
it can become an expensive process. For Whisper it was cheaper to
build the full-scale boat and suck it and see.
There are no servo-systems built into that control system?
The loads going through the foil are so great that to create an
electro-mechanical system to try and move the H up and down
is probably something possible in theory only. The weight of the
kit would probably negate any benefit.
Tuesday 21 August 2018 Prof Mark Cragg, Soton Uni - Antibody
immunotherapy: overcoming cancer by engaging the immune system
Antibody immunotherapy (AB I) and how we use it for anticancer treatments.
We're based at the SGH, just moved to the new cancer immunology site.
Some general introduction to ABs; monoclonal (MC) ABs, which is what we
use in the clinic; ABs that are now treating patients successfully;
focusing on one AB I've spent 20 years working on, Rituximab (R),
and how it works. It is a paradigm for how lots of different types of ABs
work. The target of that AB is a molecule called CD20, so discussing other
CD20 MC ABs and some of the other ABs now being used in the clinic.
An AB is a Y-shaped molecule. It is dimeric: 2 identical
domains to the sides with a line of symmetry. The bits at the top, the
variable domains, are the bits that do the binding. ABs are essentially
recognition molecules; the bit at the bottom is the Fc domain, which
does the interaction with the I system. A crystal structure of what
an AB looks like in 3D at the atomic level, then ribbon colouring to
show where different parts of that AB are. The Fab domains are
identical on either side, the Fc domain at the bottom.
The bits in the middle are sugars, the carbohydrate part of the
molecule that helps keep it in the right orientation and structure.
The important parts of the molecule, as far as AB function goes,
are to do with recognition. The loops are hypervariable loops
and they are different between different ABs. One area is
particularly hypervariable, each AB having a unique sequence
particularly in that one region. Within your body, the diversity
of ABs is enormous. So we can recognise billions of
different types of molecules, because the sequences in these
regions are different. When I say they bind to specific
targets, they try to recognise specific things. It's all part of
what the IS does: differentiate between self and non-self.
We are trying to recognise the difference between cancerous cells
and normal cells. ABs have the problem that many molecules look
similar or are quite different, and we need to be able to
distinguish between them. ABs will bind to one very specific molecule
and ignore all other molecules. This is important when distinguishing
between self and non-self: pathogens, bacteria, viruses, all those,
compared to normal human body cells. One of the primary things ABs
are involved in is fighting infection. There are all the time lots
of circulating ABs in your blood. They are recognising those different
molecules, particularly molecules on viruses and bacteria.
The way they are generated in the body - we have 2 phases.
A primary response: when first exposed to a pathogen,
we generate ABs. It sees the pathogen and removes it from the
body. The beauty of the IS is that it has the capacity
for memory. If you encounter the same pathogen again,
you are immunised, having the memory of encountering that
same pathogen. When encountered the second time you get a much bigger
response, and a much more rapid response. The magnitude of response the
second time is 100 times bigger. So we can fight off infection:
instead of feeling ill before the system gets going, this way the
response gets going before we can feel ill.
What people have been trying to do for the last 100 years is to
understand their possible utility for treating different
types of diseases, particularly for anti-C treatments.
Paul Ehrlich had the idea of magic bullets, postulating that
ABs existed before anyone had any physical evidence for that.
Way before we had structures, before we could clone things
or any of the modern biochemical tricks we have these days,
he postulated the body could recognise things.
Then nothing for 60 years. Milstein and Köhler in the 1970s
found a way for ABs to be generated in the lab.
They demonstrated ABs and could isolate them and grow them in the lab.
In the lab they made MC ABs. The different ABs are recognising
different parts of a virus: hundreds of things on the cell
surface different from humans, and we generate ABs to each of those
different bits. We can generate different ABs to different parts of a
specific molecule. So lots of ABs are involved, of different
specificity - a polyclonal response. Lots of different ABs mounted
against one particular pathogen. Köhler and Milstein
generated MC ABs by taking a single immune cell, a B cell or a
plasma cell, a normal cell for making ABs, and they physically
fused it, with the chemical PEG, so 2 cells fusing, and the cell
they fused it to was an immortal cell, a myeloma cell:
a cell capable of producing lots of protein. So a single B cell,
which can only make a single specificity of AB, with something that
was immortal and would continue to produce that particular AB.
They won the Nobel Prize for this, 30 years ago. Now we use them for a
lot of detection kits: pregnancy tests, detecting stem cells, cancer
cells in the blood, and for diseases, the early detection of
cardiovascular disease, deep-vein thrombosis. This is the specificity of
ABs, of great use for detection, binding to something uniquely: a massive
repertoire of their ability to recognise something. But these days we are
starting to use them to treat diseases: to treat neurological disorders,
auto-immune diseases, allergies, and treating C.
We are trying to recognise something different about a C cell:
an aberrant protein, something expressed differently on a
tumour cell, not expressed on the normal cell.
Then get an AB to recognise it and then in some way interact with the
IS to destroy it. So why do we need new therapies? What's wrong with
things like chemotherapy?
A plot representing the evolution of chemotherapy over a few decades,
for treating lymphoma: B-cell cancers, blood cell cancers.
Each line is a more intense/more aggressive chemotherapy, so
capacity to kill cells with increasing intensity. But this survival curve
shows it makes no difference: chemo can only take you so far.
For the patients surviving out to 5 years, chemo has done a good
job, but all the others have not survived. Getting more intense chemo
will not work. So the use of MC ABs to use the IS to fight cancer or
disease. So the AB, using the variable domains, will find something
specific on a tumour cell, and engage the IS. The IS has multiple
ways it can delete a cell that's been tagged by an AB. A protein component
in the blood, C1q, a member of the complement system, part of a
proteolytic cascade; and ABs can also interact with receptors on immune
cells. The receptors that bind the Fc are called Fc receptors.
Fc means "fragment crystallisable", one of the first parts
they could crystallise and so get the atomic structure, very early on.
It was particularly straightforward to do that.
So we'll bind an AB to a C cell and get the IS to delete it.
Since 1997 this route has worked, for effective therapies.
A curve since the first years of getting ABs through into the
clinic, of approved therapies: in 1997 there were 6.
The original ABs were mouse ABs, from mouse B cells. The mice had been
immunised with a human protein, the mice made an AB to that
human protein, then the cells were immortalised to make the MC AB.
The first ones to go into humans were also mouse ABs. Putting a mouse
AB in a human will probably generate an immune response
to get rid of the AB. This happened, so subsequent generations of ABs
became firstly chimeric: the Fc part of the AB that was mouse
was changed by genetic engineering to become human. So
much less of a problem, as less of the AB was derived from mouse.
More recently, as we've got more sophisticated with molecular biology
tech, we've either humanised or generated fully human ABs.
Humanised is where we take the mouse sequence, look at the
human sequence noting which bits are different between mouse and human,
and convert them with molecular biology. For human ABs we generate
them originally from human B cells using cloning techniques,
or we do a phage display library, where we take all the possible binding
regions and do some selection in vitro, not in an organism.
The original ABs were generated by immunisation of mice; now we can take
human B cells, isolate all the different AB molecules and then use a
panning technology to identify things that bind to the things we are
interested in. So fully human ABs. Since 1997 there has been essentially
an exponential increase in the number of ABs that gain approval. I looked
at the table yesterday and we are at 70 or 80 ABs approved, plus 100s in
phase 1 trials, on the route to come through phase 2 and phase 3.
This will continue for at least the next decade, with all the ones
just starting through at the moment.
Canonical means just normal ABs in terms of their structure
and function, the same as wild-type, normal ABs.
Non-canonical is where we're a bit cleverer: we've identified that a
particular function of an AB is either useful or not useful and then
augmented or removed it.
C1q is one way the ABs can target a cell: the C1q binds, then a
cascade enables the various things that happen with the complement
cascade. One is we get immune activation: we get release of
anaphylatoxins, where you get redness and swelling, inflammation.
C3a, C4a/C5a bring in the immune cells to the point where these
things are released. You can get coating by part of this cascade,
C3b, which targets the cells where this activity is stimulated.
This gets coated on the cell surface, and there are various cells
that have receptors for C3b, which again allows
recognition and destruction of a target cell. Then a membrane
attack complex, a multi-protein complex forming, which
physically punches holes in the plasma membrane of a target
cell. The complement cascade.
A second thing we can take advantage of: all these receptors,
especially when talking of targeting C cells, are there for a reason.
They're not there so we can conveniently tag them with ABs; they are
generally doing something. So if a tumour cell has upregulated a
protein, it's probably there for a reason. That means when we then bind
it with an AB, we potentially perturb the signalling that comes from that
particular molecule. Then lastly the interaction between the Fc and the
Fc receptors on various immune cells, particularly importantly
cell types such as macrophages and natural killer cells.
Anyone studying biochemistry has to learn about the complement
cascade: 20 protein complexes with lots of things leading to lots of
other things. It starts with the C1q protein, which does the
recognition. It recognises when Fc parts of the ABs are close
together, so you need a certain orientation of these Fcs for C1q to
bind. That starts the cascade: a conformational change happens when
they interact and they start the proteolytic cascade of cleaving.
C4 and C2 come in and are cleaved to form a C3 convertase; essentially a
whole process of proteolytic cleavage, each step releasing the next
fragment that goes on to start the next step of the cascade. It ends up
with the MAC, which has polymeric amounts of the last component, called
C9, one molecule of C8, and C5b, the coating on the cell surface. It is
essentially punching holes in the plasma membrane, which then allows
those cells to be destroyed.
In terms of signalling, ABs can do various things. They can physically
transmit a signal through the receptor that causes growth inhibition
or death of that cell. It seems counter-intuitive as to what a tumour
cell would want to do by upregulating this on its cell surface, but
unless there is any selective pressure for that receptor to be
detrimental, then why would it downregulate it? If it just happens to be
a particular cell that evolved and has a particular receptor which we can
target for destruction, we can take advantage of that with our AB.
That's certainly the case with anti-idiotype ABs. ABs can also block
a positive signal coming from a receptor at the cell surface. If a tumour
has upregulated a protein to help it grow/proliferate/survive,
then using an AB we can block that signal. It either blocks the receptor
interacting with other receptors on the tumour cell surface or it
can block the interaction between, say, a growth factor and a growth
factor receptor. This is what happens with the drug Herceptin, which is
an AB that binds to a receptor called HER2/neu, which is over-expressed
in a proportion of breast-C patients. One of the successful
treatments for secondary breast-C after initial treatment.
ABs can also block host / tumour-cell interactions, a particular thing
of interest when we consider metastasis. When a C metastasises
to a different site, it then has to generate its own vasculature.
It has to get blood vessels to get nutrients into it, and it upregulates
the protein VEGF. We can then generate ABs that block VEGF,
so we can stop that process, or make it happen less efficiently.
This is the action of the drug Avastin. These are AB drugs that you
may not have known about. The bit I'm interested in, for the last 10
years: Fc receptors that engage cells of the IS, not just the
complement cascade. An interesting but complicated family of
receptors, with parallel systems in mouse and in humans.
Humans have to be more complicated than mice, but the principle is
straightforward. We have Fc receptors that are either activating
or inhibitory. Activating ones stimulate cells of the IS and
inhibitory ones inhibit cells of the IS. They do that with
different signalling molecules on the inside of the cells.
The only real difference between the mouse and the human system is that
we have more members in the human family. This happened
during evolution: duplication of the whole gene locus,
and we got twice as many. They are highly conserved when you look
at their sequences and structures.
For understanding how these work, I'll go through some
gene-knockout studies we've done in mice. One is the gamma-chain knockout.
The gamma chain is associated with all the mouse activatory receptors,
meaning that in a gamma-chain knockout, none of these receptors are
expressed or signal. We can then ask: are these gamma-chain receptors
important for the function that we want to study? We still have the
inhibitory receptor left, because that doesn't need the gamma chain
for signalling or expression.
What happens when you engage an Fc receptor? Natural killer cells
only express one activatory Fc receptor, called 3A, or CD16 -
just the one of them. When it binds, via the Fc domain, an AB that has
tagged a cell, you get a big signalling cascade and activation of that
natural killer cell. The NKC can bind to a target cell and kill it, by
releasing cytolytic granules in close proximity to the target cell.
That process is called AB-dependent cellular cytotoxicity, one way that
Fc receptors activate an NKC to kill a target.
B cells are the opposite of NK cells, only expressing an inhibitory
receptor, with no expression of activatory receptors. The purpose of that
in normal biology is that it prevents excessive B-cell proliferation.
When B cells recognise something and go on to make ABs, if
we've already generated enough ABs, we need a receptor to stop that
process. This process is regulated by the inhibitory Fc receptor.
Then there are a load of other cells in the IS that express both
activatory and inhibitory receptors: cells called dendritic cells,
monocytes, macrophages, different cells with discrete functions in the IS.
Expressing both activatory and inhibitory is like a rheostat model:
the more activatory receptor a cell has, the more likely you will
activate that cell type; the more inhibitory that cell expresses, the
more likely you are to inhibit that particular immune cell.
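The rheostat model above amounts to comparing how much activatory versus inhibitory receptor a cell expresses. A toy sketch of that balance (the function name, thresholds and numbers are hypothetical illustrations, not from the talk):

```python
def cell_response(activatory, inhibitory):
    """Rheostat model: the net balance of Fc receptor expression decides
    whether engaging the receptors activates or inhibits the cell."""
    if inhibitory == 0:
        # e.g. NK cells express only the activatory receptor
        return "activate" if activatory > 0 else "no response"
    return "activate" if activatory / inhibitory > 1 else "inhibit"

print(cell_response(activatory=8, inhibitory=2))  # macrophage skewed activatory
print(cell_response(activatory=0, inhibitory=5))  # B cell: inhibitory only
```

The next paragraph's point is that stimulation (complement factors, cytokines) shifts these two numbers, and with them the outcome.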
We can change the expression of the activatory and the inhibitory
Fc receptors on these different cell types by giving them different types
of stimulation. Complement factors C3a and C5a, the anaphylatoxins
that are released when complement gets activated, can also
activate cells so they have more activatory Fc receptors.
Cytokines, (?) receptors, things to do with inflammation
and recognition of foreign pathogens all upregulate these activatory
Fc receptors. There is a correlation with genetics: certain polymorphisms,
different mutations seen in different individuals, correlate with
more or less function, more or less expression of
Fc receptors. Perhaps 50% of the audience have one polymorphism
for CD16, the high-affinity allele, so you bind ABs better than the
other half of the audience. One of you here probably has a
particular polymorphism for one of the inhibitory receptor
functions. We have 2 alleles of each of those Fc receptors.
Given we have 6 Fc receptors and multiple polymorphisms,
the genetics can get complicated quickly. But it certainly has an
impact. The last thing is AB isotype. ABs are formed of
immunoglobulins, with 4 different classes of AB. When we generate
the initial immune response to a pathogen, we don't always
generate the same type of AB. It could be IgG1, 2, 3 or 4, and they're
there for different reasons. If we are trying to recognise a carbohydrate
molecule, then we tend to generate more of an IgG3-type response.
Trying to generate a very potent response, a good way of
deleting things, we generate IgG1. This is generally the isotype
we use to treat C cells. We try to use the most effective isotype
of AB to interact with Fc receptors.
For tumour destruction we want more activatory Fc signalling
and less inhibitory Fc signalling.
So, the ABs in the clinic. Now 31, as of checking yesterday,
approved ABs for C: 28 approved, 3 pending.
Rituximab, ofatumumab(?), all with -mab at the end, as monoclonal
ABs. Ibritumomab(?), tositumomab(?) are all directed to the same target.
HER2, EGFR, VEGF (the Avastin target molecule).
All the trailblazing was around the target CD20,
with 4 ABs initially approved. CD is for Cluster of Differentiation,
giving it a designation so ABs could be compared with
other people's ABs around the world, to know that one AB
bound to the same target. So they clustered all the ABs together if they
bound to the same thing, then named that thing CD20; CD20 is the 20th one
that got a name. In 1980 we were working on murine ABs, purely generated
in mice. 1F5 was about the first AB that ever went into a human.
Subsequently another AB, ibritumomab tiuxetan(?), followed by rituximab,
which is a chimeric AB having a human Fc domain rather than mouse Fc.
It was approved in 1997 and since then a whole host of other ABs.
As they have progressed into the clinic, so has the technology: from
mouse to chimera to humanised to human, to the non-canonical ABs.
They have modified Fc functions, doing things differently to normal
ABs. R was the first monoclonal approved for the treatment of C.
It has had the single most action relating to patient responses in the
30 years since being licensed for lymphoma. R binds CD20, which is
expressed on lymphoma cells, on leukaemia cells, on white blood cells.
It's a bit like the chemotherapy slide: that doesn't always work.
Chemo 1, more advanced chemo 2 and chemo 3 - the mortality
from lymphoma kept going up over time. That's not to say these drugs are
not effective, just that the incidence of lymphoma is going up and we're
not curing people with these chemotherapies. Then R was licensed
in 1997 and the mortality starts to come down, even though the
incidence is still rising. This is using R and chemo; R only works in a
transient fashion on its own, dependent on the type of lymphoma.
But they are used entirely in combination.
These days it is used for treating not just lymphoma but also a number of
auto-immune disorders, like rheumatoid arthritis, SLE, MS.
It deletes normal, auto-immune and malignant B cells,
the white blood cells that generate ABs in the first place.
We're using ABs to delete them because they cause different types
of disease. In the lab we tried to understand which of these
mechanisms was really important for R to work.
If we understood that, we could decide which mechanisms to
enhance, to make it even better. To do that we took advantage of
some mouse models: a mouse that expresses the human
CD20 gene, so it's transgenic, expressing human CD20,
meaning we can use anti-human CD20 ABs. We do
adoptive transfer experiments: taking cells from
such a mouse that expresses the human transgene, labelling them
with a fluorescent dye, we mix them at a 1:1 ratio with normal B cells
from a mouse that doesn't express the transgene. We put them into a
recipient mouse, then we allow those cells to traffic to the normal
organs of the body, the spleen predominantly. We give a therapeutic
AB one day later and read out how well the therapeutic AB deleted
the cells we put in. A complicated assay but straightforward to read.
Using flow cytometry, labelling up your cells to identify B cells,
identifying the cells with low levels of the dye or high levels of the
dye. A log-scale plot of CD20-transgenic B cells and control cells;
expressed as a histogram, there is the 1:1 ratio. What we look for is
whether an AB can then delete the transgenic B cells, which it does.
So we showed the AB deletes the target cells. The assay is
semi-quantitative, ratioing the peaks, and we can see what happens
putting in a control AB: we don't get any deletion. But put in the R AB
and we get 80% deletion. We can change the recipient mouse and assess
which effector function is important. C1q, the important
protein from the complement cascade, has the potential for doing
Membrane Attack Complex killing(?) of cells. But if we place the cells in
a mouse that lacks this protein, or the next one along the
cascade, it makes no difference. The ABs are perfectly able to
delete those target cells. Complement is not the really important
mechanism. The ability of an AB to transmit a signal that might cause
a cell to die: the protein Bcl-2(?) is an anti-apoptotic protein. It
blocks death; it stops cells dying through the signalling process. Again
the ABs can delete those cells. If we did the same experiment and gave
those cells chemotherapy, the cells would be protected from chemo,
because that's how chemo works: it causes apoptosis, the death of the
target cells. But ABs and chemo work in completely different ways.
Chemo would be made resistant by this particular gene, but the AB
doesn't care; it's able to delete the cell as if it wasn't resistant.
The take-home message is: if we go back to the gamma-chain knockout
mouse, which doesn't have any activatory Fc receptors, we get no
deletion. This is exactly the same as if we didn't treat the mouse. If we
give the mice clodronate liposomes(?), which are taken up by macrophages
- macrophages are phagocytes, they eat cells and things, clearing
debris from the body - so if we get rid of the macrophages with
clodronate, again we get no deletion. If we chop off the Fc part of
our AB and make a F(ab')2 fragment, done enzymatically, again no
deletion. The same is true with complement killing.
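The semi-quantitative readout of the adoptive-transfer assay described above is a ratio of the two labelled peaks. A sketch of that calculation (the function name and cell counts are hypothetical; the assay design, the 1:1 input ratio and the 80% figure are from the talk):

```python
def percent_deletion(target_treated, control_treated,
                     target_untreated, control_untreated):
    """Adoptive-transfer readout: cells go in at a 1:1 target:control ratio.

    The control (non-transgenic) cells are untouched by the anti-CD20 AB,
    so any drop in the target:control ratio, compared with an untreated
    recipient, measures deletion of the target cells.
    """
    ratio_treated = target_treated / control_treated
    ratio_untreated = target_untreated / control_untreated
    return 100.0 * (1.0 - ratio_treated / ratio_untreated)

# hypothetical flow-cytometry counts
print(percent_deletion(target_treated=2000, control_treated=10000,
                       target_untreated=10000, control_untreated=10000))
```

With these made-up counts the function reports 80% deletion, matching the result quoted for the R-treated mice; a control AB would leave the ratio at 1:1 and report 0%.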
B cells express the inhibitory Fc receptor, the only Fc receptor they
express. The more they express, the more quickly they internalise
the AB, if there is lots of the inhibitory receptor at the cell surface.
The Fab domain does the binding to CD20, with the Fc sticking out at the
back, and the Fc can then bind the Fc receptor on the same cell
surface, and that causes an internalisation process. It gets dragged
into an organelle called a lysosome, which degrades it, like a recycling
bin. Images via confocal microscopy, labelling each component with
a different colour: the R labelled green, the inhibitory Fc receptor in
blue, and they are coincident. Looking at another cell: the AB, the Fc
receptor and the organelle, the lysosome, all trafficked there together.
In the lysosome it gets chopped up and degraded. The AB binds at the
cell surface, the Fc binds to the Fc receptor, and it is internalised
and degraded.
We postulated that for lymphoma that could be a possible resistance
mechanism: the internalisation and removal of the target from the
surface. To check for that, we go to clinical data: 2 different clinical
trials, 2 lymphomas that are treated with R. We get the diagnostic blocks
from those patients, stain for the inhibitory receptor, and we quickly
see a difference between different individuals and lymphomas: either
completely negative or incredibly strongly stained. Then if you stratify
your patients according to that, the patients who don't have lots of
inhibitory receptor on the surface do much better in survival than those
who express high levels. Mantle cell lymphoma(?) in the context of chemo
with AB; the other (?) lymphoma is given it as monotherapy, the only
instance where we give R on its own without chemo.
Where we don't have the inhibitory receptor, we do better than
where we express medium or high levels.
The AB binds to the CD20 molecule, and if the cell does not express the
inhibitory receptor, the Fc gets bound by the macrophage, the cell gets
deleted, and all is happy: we get rid of the C cell.
If the cell expresses the inhibitory receptor and allows the
internalisation process to happen, then the AB gets trafficked into the
lysosome and degraded, and then there is the possibility of resistance.
The next question is how do we overcome resistance. In some early lab
work we recognised that though we can generate ABs to the same target,
they might have different activities - definitely the case. We
classified them as type 1 or type 2. The type 1 do clustering in the
membrane: R causes all of the AB molecules to cap together on the cell
surface. Type 2 stay all around the periphery of the cell. There are
other differences, but the key point is when you go back and do the
internalisation assay: the type 1 AB gets internalised. Tositumomab(?),
which is a type 2, doesn't do that anywhere near as much. The second AB
doesn't internalise as much, and our hypothesis was that the
internalisation was the reason for resistance. We tested whether the
type 2 ABs might be better. The first assay we did was to see whether we
could delete normal B cells better in the CD20-transgenic mouse.
Deletion on one axis and repopulation on the other; red type 1 ABs, blue
type 2 ABs. One deletes better and for longer: the type 2.
A tumour model, so a transgenic human CD20 cross to a tumour
model, and looking at the data the type 2 AB can delete better
and for longer. This is what we do in the lab, but ultimately
someone has to test this in the clinic.
So the human model
Opatuzamab a typ2 Ab , given a particular chemo versus R
and the same chemo. A difference in the survival curves.
Improvements each time, learning and understanding what the
different mechanisms are, to improve that.
The idea of pimping your ABs, based on the effector function that you
think is important. Macrophages are important, the Fc receptors are
important. So we would improve the Fc so it binds to Fc receptors
better. Some people are using ABs with other types of drugs,
small molecule drugs, targeted drugs rather than chemo.
BTK kinase ? inhibitors are one, and the data is pretty convincing.
R on its own doesn't cure these patients, but give a PI3 kinase ? inhibitor
and it's much better. But you have to understand the mechanism and which
disease to apply them in.
The new kids on the block are immuno-modulatory ABs. ABs that don't work
by targeting the tumour cells directly; they engage with cells of the IS.
They bind to an immune cell, stimulate it, and then the immune cell goes off
and kills the tumour cell.
They can be agonistic, meaning they stimulate receptors, or they can be
blockers, meaning they block an inhibitory receptor that's already been
upregulated. Both are trying to do the same thing, trying to boost an
existing anti-C immune response, predominantly by increasing
T-cell responses. There are 2 arms of the adaptive immune system:
one is the humoral system, that is ABs and B-cells, and the other arm
is T-cells. T-cells have a receptor on their surface, the T-cell receptor,
and they work like ABs. A similar sort of diversity: they can recognise
lots of different molecules and, if correctly activated, go out
and seek out a tumour cell and destroy it; they don't
need another AB molecule to do that. The Fc receptor system has 6; the
T-cell system works with an interacting partner called an antigen-presenting
cell. On one side of the diagram is the positive side, to stimulate the T-cell
to go off and kill something. On the other side are the negative
regulators, the things that shut T-cells down. The tumour cell
upregulates all the negative regulators and down-regulates all the
positive ones, because it's trying to hide from the IS, so we try
to reverse that. We try to restimulate the right receptors and not the
others. Being AB people, we use ABs. So a whole host of them
for upcoming trials, for the activators, and another host of them for the
inhibitory molecules. The rewarding thing is the kind of responses
you can generate. A response in melanoma: a patient with a big
lesion, given the AB ipilimumab, an immune-modulatory AB;
by week 16, obvious downsizing, and the patient remained
tumour free. This is the hope and the dream of ABs.
The reality is, most patients aren't successfully treated with that
particular AB ipilimumab, but some of those patients who
are successfully treated can have long-term remissions, to the point
of them being cured, which is not what you see with chemo,
where you have people coming back with relapses.
If you are in the cohort treated with ipilimumab and you are
tumour free by 2 years, you are likely to go out to 10 years.
This is partly due to the IS having memory. We can generate T-cells
to generate memory; then they are there and ready to fight off C
if it comes back. No other treatment is like that: chemo is given,
then it's out of the body, and if the tumour comes back, it's not there
any more to do anything. A T-cell has memory and can come back and
fight the same tumour. It's considered that ipilimumab is cutting the
brakes. There are lots of signalling pathways going between an antigen-presenting
cell and a T-cell, making it activated. Tumours stop this T-cell
becoming activated by upregulating the receptor called CTLA4.
In the normal part of immune regulation, you sometimes want to
stop T-cells becoming too active, as you get auto-immunity.
So we need to be careful about stimulating T-cells too much.
Tumours co-opt this system and artificially cause the upregulation
of the CTLA4 molecule. We come along and block that interaction,
so releasing the T-cells from that suppression.
Movie of the T-cells coming round to find the tumour cells; they
are dynamic in their search. Like an NK cell, they punch cytolytic
granules into them, and the tumour cells die. T-cells are serial
killers; they keep on searching and destroying tumour cells.
These drugs have only been licensed in the last 3 or 4 years.
T-cells are regulated by the tumour: we sort of knew that anecdotally,
but we now know it in a lot more detail.
If we have lots of T-cells in a person's tumour, that is a good indicator
that it will work. It's showing the T-cells can recognise something,
but they're being suppressed by the tumour. If we can intervene at that
point, the T-cells can then kill the tumour. There are lots of different things
they can recognise on the tumour. One thing they recognise is the
mutanome ?: bits of the genome, expressed as proteins, that are different
between the normal host cells and the tumour.
C is essentially a disease of mutation, so there are differences between normal
host cells and the C cells, and the T-cells can recognise those differences.
We are at the beginning of this, trying to understand how all
those receptors interact, how tumours do things differently,
and trying to work our way through to find the best treatments
for different types of C.
A cautionary tale. Switching on T-cells is good; we want to kill
tumour cells. But switching on T-cells across the body can
be dangerous. If we just switch on T-cells, without any
regulation, they can attack cells of the host.
With different types of immuno-modulatory ABs, a certain
proportion of people have adverse side-effects. If we use another one,
with a different type of blocking molecule, and then combine them,
we can get an incremental increase in response, but we also see
an incremental increase in toxicity. So it's a balance of understanding
when and where to use the different interventions. This is the path
where we are at the moment.
ABs can interact in this complicated system, with the potential to
reverse the exhaustion ?, and they have great potential, but starting to
combine one AB with another increases the complexity exponentially.
In haematology particularly it's complicated, and using them against solid
Cs, lung, breast C, it is tricky. But the only way to understand
even better is understanding the tumour-host cell interaction
in much more detail. This is what we spend a lot of time doing.
Combining direct-targeting ABs with immuno-modulatory ABs
is what we are excited about, and is one of the trials we've just got
running in Southampton. A big team here: just in our
AB and vaccine group there are 50 people, not including the patients
who give us the samples, the nurses and the clinicians who
provide us with primary material, which is critical to what we do.
The tumour often metastasises and there will be further mutations,
and then the later mutations can be resistant to a certain extent;
is that the case?
Very much the case with chemo, a type of treatment to induce apoptosis,
to kill cells. There are lots of ways a cell can modify itself so it's resistant
to that kind of thing. For chemo to work, it needs the cell to be in cycle and
proliferating. So firstly you select the cells that are rapidly proliferating,
which lots of cancers are doing. What it doesn't then do is kill the ones that
are dormant, because they're not part of that process. There are multiple
mutations that people have documented that are the reason
why some chemo doesn't work. P53 ?, the Guardian of the Genome,
a massive regulator: it recognises DNA damage, which a lot of
chemo induces. Most Cs have some form of dysregulated P53,
by mutation or by another mutation. If a tumour has escaped the
IS, it's from a reasonably large pool of tumour cells in the
first place. We've learnt in the last decade that every tumour
cell is different, in any individual, either genetically or epigenetically:
a minor mutation in one particular clone which is 0.0001%
of a tumour. If that one does not get killed, then it gradually
progresses, taking on further mutations, and it re-appears.
The clever thing about the IS is that the IS can evolve
along with a clone; that's typically what happens. In terms of the IS
process, we look at it in 3 phases. Mostly a host cell will
mutate, start on the pathway to a tumour, and the IS recognises it
and gets rid of it. But sometimes, for whatever reason,
the IS doesn't recognise it; it expands, takes on further mutations,
and then there is a dynamic balance between control by the IS
and the tumour trying to expand. That equilibrium is not a
problem for the host until it goes to the next phase, where it
escapes, largely escaping the IS. When the IS cannot regulate it,
it can expand and metastasise to another site. All that is
dictated by genetics, mutation and epigenetics.
What proportion of tumours are knocked out by the IS and we
never know about them?
It must be really high. Every time a cell divides
there is a chance of mutation. Do such a calculation and we would all
have died of C by the time we got to a million cells or so.
We have billions of cells and we change them all the time. P53 ?
is one of those helping in the process: it recognises a mutation has
happened and makes the decision either to kill that cell
or stop it proliferating. That's why there is massive pressure for
tumour cells to escape from P53. So control by the IS and by
P53 are probably the 2 most important things in tumour development.
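The back-of-envelope calculation hinted at above can be sketched as follows. The per-division mutation probability is purely an illustrative assumption, not a figure from the talk:

```python
# Illustrative only: the per-division rate below is an assumed number,
# chosen to show the shape of the argument, not a measured value.
p_bad = 1e-6           # assumed chance one division yields a dangerous mutation
divisions = 1_000_000  # roughly the divisions needed to reach a million cells

# Probability that no division produced such a mutation, and its complement.
p_clean = (1 - p_bad) ** divisions
p_at_least_one = 1 - p_clean
print(f"P(at least one dangerous mutation) = {p_at_least_one:.2f}")  # ~0.63
```

Even with a tiny per-division rate, over a million divisions the chance of at least one dangerous mutation is already around two-thirds, which is why surveillance mechanisms like P53 and the IS must be removing the vast majority of would-be tumours.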
I'm assuming, for a clinician, there's a large computer system for all the
different variables and permutations; it's getting beyond any human?
I'm not a clinician; they are clever, but not that clever. The way we analyse
these data nowadays: we've gone from a system of biopsy, a pathologist
examining that, doing a section, an incredible insightful ability to
know whether it's grade 1, 2, 3, recognising the type of tumour just from what it
looks like. Even really sophisticated machine-learning algorithms
and supercomputers can't do what pathologists can do, yet.
But they will be able to, as somewhere along the line there is a pattern.
We are good at pattern recognition, but humans have innate bias, so that's a reason
why you would like computers to do some of this.
You can teach computers some of this and they do it quite well.
That's just at the level of looking at a bit of tissue; then factor in that every tumour
cell is mutated relative to all the others, in an individual, and it's incredibly
complicated. So if you can, let the IS do your thinking for you,
because it can evolve with the tumour as it changes. We need to understand how
the tumour and IS interact on an individual level. Each individual has
different mutations, and each of those has a consequence as to how it
interacts with the IS, how much it proliferates, its ability to metastasise.
Only by careful study and understanding of tumours can people
start to learn that. At some point you have to abandon the one-gene,
one-understanding type of approach; you have to have pattern
recognition software looking for trends. We will guide them,
as clinicians, in the concept of personalised medicine,
looking at their whole mutanome ?. How it's changed: that was the
dream of 5 years ago, the mutation burden of any individual.
Now we have to do it on single cells, to understand how every
single tumour cell is different. Some of those will be resistant to
treatment, the ones we need to know about. At some point I
think you get commonality of process. Although single mutations
will be different, they are all impacting at certain nodes and certain
processes. So, this breast C patient is deregulated in pathway A
and pathway B, then we should treat them with this. If it's pathway C and D,
then with something else. So a bit empirical, but I think that will be quite
a successful approach. We don't yet know which we should try
together; there are all those receptors on that pic and lots
of combinations to try. The issue we have at the minute is the more info,
the more detail we know about how individual patients are different,
the more difficult it becomes to then design trials. We think a number of
those things will be important, but we are reducing the number of people
with a coherent set of mutations, together, to test. What used to be the
case, taking breast C: we treat it with this and see how we do.
Then a meta-analysis, seeing some survived, and try to understand why that is.
Once you've done that, you can design your next tier of treatment,
based on the 20% that did much better. For the 80%, why did they fail,
and what will we try with them next.
I've 2 members of my family who had R treatment and survived.
For me, someone interested in data, no one has really followed up
on their survival. No checking what their lifestyle might be, etc. I ask this
of many of your colleagues who have presented this sort of research:
when might we see the impact of patient data? The patients are very
keen, completely involved in the treatment. We have the internet,
and daily data could be collected from them on their lifestyle and whatever else.
Without it, you have quite a crude test of mortality rate. Did they die? If they
die in England, then you get the data. If they move to Oz, then
you never hear if they die. Why is it not possible to enter into an
agreement with a patient (not all of them may agree) that the
reciprocal nature of what you're doing to save their life is that they need
to participate in providing regular data? So salt intake, or
holidays, where they live or whatever can be fed back to you. In the
past this sort of data could not be collected, but it could now be
collected to the nth degree?
It all comes down to money. The NHS is under pressure just
for giving the drugs. R took 20 years to develop.
Say NICE puts this drug outside your limit; however, Mr Roberts,
if you're prepared to go daily via the internet and answer 100
questions, then you can participate in the trial.
But we're not collecting?
One is trial and the other is treatment. Within a trial, the data is
reasonably well collected, with money provided to take samples.
In other diseases it's done much more holistically. So the MRC
funded 100,000 people just to be monitored: blood pressure,
lots of routine blood tests, and then a population study to see which
of those people get a disease. Very expensive.
But if the people generate the data themselves, via questionnaire?
In terms of resources, I don't know who would push to
get that data collected and who would pay for it.
NICE is already making decisions not to fund drugs that will
extend people's lives, because of cost. That is something that could potentially
extend someone's life. We have discussions with drug companies about
what is the benchmark, the lowest level they can afford to sell at,
and similar discussions. I don't disagree with you; the power of data
is enormous. Some of that data gathering is exactly what we do, looking
for patterns in things we don't necessarily think to look for.
But it's a massive thing to do, just to collect that data. The 100,000
genome project is starting to do that for multiple Cs: 10,000
lymphomas in terms of mutation, but I agree they're not
collecting patient data. Technically the NHS should already have that data, but even
accessing that, as a scientist who wants to access data, is a problem.
Isn't Google there already, with R and what it does?
Yes, but not at an individual level. If I asked our clinicians for all
the data on 20,000 people treated with R, the answer is no,
because of data protection. I agree with you: if at the time of treatment
people signed things to say they were happy for us to use their data,
that would be fantastic. Currently we have ethics committees
worrying about samples and access to them. It is a real
problem, and something wider society struggles with.
What are we going to do with this data when we start sequencing people's
genomes? We are not looking at individuals, but we have people
on ethics boards that make it very difficult to access such data, on the
understanding that the general public find it a problem, but I
doubt the general public consider it a problem. When being treated, they're
very willing to engage and would like to help in subsequent research.
There were leukaemia clusters; is that still true?
There always were hotspots, and I don't know if that's been
resolved. Radon was a suspect at one time, or living near
power cables etc.; still not resolved as far as I know.
Are the legal and ethical constraints exclusive to this country,
or also the USA, France and Germany etc.?
Not so sure about the USA; it's at a higher risk of litigation.
There is a different dynamic in the US, where you're paying a
clinician in a much more direct way than in the UK. You can buy
a particular treatment, and you as a patient are more involved in that
selection. Here you have standard treatment; if that fails, you can enter
a clinical trial, if you wish to do so.
Have you heard of Avalon/Babylon ? Health, an AI
engine run by IBM? It is intended to diagnose 100,000 ailments,
collecting data from people who are suffering or asking questions, and
synthesises likely diagnoses, and it will happen?
Those kinds of providers are the sorts of people who could set these
up. For them it's demonstrating the power of their supercomputers,
so there's a vested interest there. Whether the NHS or us should be funding that,
or C charities, I'm not sure.
The general public is concerned about their data and the likes
of Facebook, and such concerns are feeding into the
workings of ethics committees. There have been cohort studies: since
the 40s or 50s there have been children followed through, looking at
their lifestyles over lengthy periods. We're in such a good position
now to get more data, as we now feed it straight in, not via
paper. There is still so much biomedical research rather than
social research, which then may support diagnostic work.
A friend of mine: his wife had something, eventually he got her
clinical records, a foot high, scanned them in and put the whole lot
on the internet for access by anyone who might read and learn.
Presumably these treatments are injected? And how often?
Yes, but some ABs are given subcutaneously now. R has been
reformulated and can be given once weekly. They looked at the
dose. A body of research suggested that rather than
internalisation, there is a second process called trogocytosis,
where when the IS macrophages get full to capacity, they stop
engulfing the tumour cell and start chopping off the receptors,
which would then leave the tumour cell alone. A second school
of thought was that that would be a problem, and a way around was
to give less AB initially so the system never got over-saturated.
That trial stopped early as it became clear that giving less AB
was not a good thing. A lot of clinical practice was based on what was
successful before. When you start into patient studies, you start to get
into millions of pounds. Doing what was done before, although the
AB is different: it's already a procedure, everyone is signed off on it,
and it seems to make sense. But it's not necessarily optimised for that
particular treatment. If a trial works, that's it. Some people measure the
half-life of ABs; they are good at hanging around for a long time.
The average half-life of an AB is 21 days, so it's not like small molecules.
Small molecules get turned over quickly, in hours, so you need to
take a tablet every day or whatever. With ABs, weekly or monthly
injections are possible. Particularly in lymphoma, with treatment by R only and no
chemo, that is given initially weekly, then maintenance shots
every 2 months. Now people are doing more modelling studies,
to get some analysis of the AB half-life, then mathematical modelling
to say when the optimal retreatment time is and its dosage, for that
particular AB. We're developing technologies to extend the half-life
even longer. Talking of canonical/non-canonical Fc parts: you can
change that Fc so it goes out to 60 days rather than 28. Then those
technology changes have to be tested in the clinic.
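The kind of half-life modelling described above can be sketched as simple exponential decay from an initial dose down to some therapeutic floor. The dose number and floor below are illustrative assumptions; only the 21-day average half-life and the extended ~60-day figure come from the talk:

```python
import math

def concentration(c0, t_days, half_life_days):
    """Remaining antibody concentration after t days of exponential decay."""
    return c0 * 0.5 ** (t_days / half_life_days)

def retreat_time(c0, c_min, half_life_days):
    """Days until concentration falls to an assumed therapeutic floor c_min."""
    return half_life_days * math.log2(c0 / c_min)

# Assumed numbers for illustration: 400 units initial level, floor of 50 units.
print(concentration(400, 21, 21))  # one half-life later -> 200.0
print(retreat_time(400, 50, 21))   # three half-lives -> 63.0 days
print(retreat_time(400, 50, 60))   # extended-half-life Fc -> 180.0 days
```

The design point is visible directly: tripling the half-life via Fc engineering triples the interval before the level crosses the same floor, which is the sort of result the modelling studies feed into retreatment schedules.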
Is immunotherapy effective against solid tumours, or just ? ?
That relates largely to how mutated the tumour is. Things like
melanoma have lots of mutations, so that gets treated quite
successfully with some of the immuno-modulatory ABs. The
lymphomas we work on have fewer ? mutations, so there we use things like
R, as immuno-modulatory ABs don't work so well there.
It's based on mutation burden; that's why for some things like lung
C, actually having smoked increases the chance that you
will be successfully treated, because you have certain types
of DNA damage from smoking that are recognisable by the IS.
It seems unfair, but that's the way it works; the correlation exists.
You mentioned MS as well, in the media recently, getting approval?
I think it's now at phase 3 after a couple of phase 2 trials.
For other diseases: lots of immunologists polarise to
certain individuals, thinking their particular cell type is more
important and the driver of all diseases. Such things go through
phases. One moment everyone decides dendritic cells are the most
important, then T-cells the most important. I work on B-cells and I
think B-cells are the most important. For a long time people thought
B-cells were a bit stupid and all they did was make ABs, that they
weren't involved in the likes of regulation. It's only when people
started deleting B-cells with R that they began to understand they were much more
involved in the regulation of the IS. It should not be surprising, as
all these cells talk to each other, in feedback loops etc.
I find it surprising with MS though. Auto-immunity made a bit of sense:
ABs might be involved somewhere in the disease, and you're getting
rid of a cell type that is part of generating that auto-immune process.
With MS it was not so obvious a jump. I don't know how the first
person gets treated with that; I'm not sure there was a huge basis,
but it all looks interesting. As people look into it in more detail
there are ectopic lymphoid organs ? and lymph nodes in some of the
disease sites, which may be affected by deleting B-cells. Or it might
be something secreted by B-cells that's now been removed,
and so less of an effect.
A B-cell effect?
That's what you're doing by deleting B-cells, but that doesn't necessarily
negate that you're also doing something to T-cells ??.
Is it not part of something that seems to have been around for the last 20
years or so, the repurposing of drugs? Thalidomide suddenly found a
completely different use. I'm assuming because of big computer systems that
can pick up such unconnected things?
One of the other AB drugs, alemtuzumab ?, was used originally
in chronic lymphocytic leukaemia and works for a certain
proportion of patients that have a particular mutation status;
that has now been repurposed for MS at a much lower dose.
This again comes back to the question of why that was done:
it was a decision made on population size. They were treating a very
small proportion of patients quite successfully, not making much money.
They've repurposed the same drug and taken the anti-C drug, no longer
available on licence, but they are using it to treat MS, which is a much
larger pool of people. Again that is outside of our control;
it's a pharmaceutical company decision. Then CRUK might come in
and treat what is a rarer disease and licence a drug for that.
Are proprietary strictures a problem in this research area?
For R, the patent is now off licence, so people are making it
in India or wherever: bio-similars. It was patented before it was
licenced, less than 20 years ago. Those ABs exist, and there's a big area
of research involving bio-similars: how ABs are made and
produced. They are not like simple small drugs, not compounds
with a couple of rings; they are 150 kilodalton ? proteins, with a carbohydrate that
is not identical in every molecule, a lot of range. Take normal human
ABs and look at the carbohydrate: there is a huge range in
how big they are, what they're composed of; it changes during
pregnancy, a lot of modulation. When you do a monoclonal
and make it in the lab, it's still an issue that 1 cell will go through
the carbohydrate bio-synthetic route slightly differently
from another, depending on what cell cycle it's in, what nutrition
it has, all sorts of things. A whole area of QC and validation
of what makes that drug, what are its parameters of use.
Aggregation of ABs is one thing they control for; carbohydrates
have to be within a certain limit, and there's a field of work around that.
But then if you want to start treating patients with a bio-similar,
then technically (though the laws are changing and evolving) you have to
do new full randomized clinical trials. Any drug company has the next drug
in waiting for the next 20 years. As long as they can demonstrate superiority,
lots of governments and health initiatives buy from the original
source. At the minute lots of ABs are coming on-line at the same time;
lots of big pharma and small bio-techs see that it's a huge market
with the potential of having a block-buster drug. They're all crowding into the
market. So 10 anti-PD1 ABs which probably all do the same thing,
but only 1 will get licenced for a particular thing. Then someone else
will get their licence for a slightly different thing. But they're probably working in
the same way, as the simplest type of AB are those that block something,
because you don't have to engage the IS or involve lots of complex
biology; you just have to stop one thing binding to something else.
If that is the action, then almost any AB will do that. As to one being better
than another, no one will ever do the multi-million clinical trial
to show A is 0.1% better than B and get it licenced.
Simon Saggers of Solent Uni: Some Current Developments In Robotics In
Manufacturing And Associated Teaching Methods
Tuesday 18 September 2018
Given recent developments, how might we best train / educate the upcoming
generation of engineers who wish to pursue a career in robotic manufacturing?
I am a senior lecturer in engineering at Solent University,
teaching a variety of topic areas within engineering, to undergraduates on our
various engineering honours degree programmes.
Primarily I teach topic areas relating to electronic engineering, and
mathematics, but also some areas relating to mechanical engineering, control
engineering, and robotic manufacturing, too.
A little background on Industrial Manufacturing Robots
Overall structure/type of manufacturing robots.
Articulated - the most commonly seen in manufacturing.
Rotary joints, typically fewer than 10, but in principle there could be more.
Each joint corresponds to a degree of freedom of movement (or axis), and
common robots of this type tend to have 4 to 6 axes, could be more.
Cartesian - commonly seen these days in 3D printers, laying down material
in a layered structure. 3 prismatic joints
providing linear x, y, z motion control.
Cylindrical - Cylindrical working envelope controlled by a combination of
rotational and linear prismatic joints. Moves up and down, rotates
around, and usually the end-effector can move in or out.
Used in small-scale pick-and-place applications, such as a series
of test tubes where an automated testing process is being carried out.
Polar - Similar principle to the above, but with a spherical working envelope.
Not truly spherical generally, but a similar combination of rotation
and translation joints.
SCARA - Selectively Compliant Arm for Robotic Assembly. Approximately
cylindrical working envelope, with parallel joints that provide compliance
in a selected plane. Rotation in one plane of motion, plus up and down
movement of the end effector, typically just 2 positions (up and down),
requiring accurate layout of the workspace so it can actually reach
everywhere required.
Delta - Jointed parallelograms, looking like a spider
often grouped together on gantries to
perform complex operations on a workbed below.
Domed work envelope with precision movements.
We are starting to see deviations from these standard categories these days.
So the bits inside a robot.
Robotic joints and their control tend to be via
actuators, usually servo motors or stepper motors; in actuality it is
usually a more complicated implementation.
For a servo-motor setup you set a control signal, essentially telling
a joint a particular angle to move to, perhaps 37 degrees from the
starting point. Usually that would be a Pulse-Width Modulated signal,
denoted by the duty cycle and mark/space ratio.
That goes through a summing junction or comparator into an operational
amplifier, driving a motor. Most crucially you'd then have,
possibly via a gearing system,
feedback transducers (e.g. tachogenerators, potentiometers, IR, etc.)
actually measuring the angle of twist of the joint in question.
Then use the comparator to determine the difference between
the angle asked for and the present actual angle. The difference then
adjusts the output to the motor: a looped, self-correcting system
designed to bring the output as close to the required angle as is
feasibly possible. This can be improved by including PID (proportional
integral derivative) loops in the system, where the proportional part is
doing much like the previous comparator. The integral part integrates the error
signal (the area under the curve) and applies an additional corrective signal,
smoothing out lumps and bumps. Move to 37 degrees and the output may
wobble about a bit before settling on 37. Most of that wobble is removed by the
integral and derivative action; the derivative part gives a signal proportional
to the rate of change of the error. Even more complex processes can come in here.
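The comparator-plus-PID behaviour above can be sketched in software. This is a minimal illustration, not real servo firmware: the toy plant model, gains, time step and 37-degree setpoint are all assumed values.

```python
# Minimal discrete PID loop driving a toy first-order joint model toward a
# 37-degree setpoint. All constants here are illustrative assumptions.

def simulate(kp, ki, kd, setpoint=37.0, dt=0.01, steps=2000):
    angle = 0.0               # joint starts at its reference position
    velocity_gain = 5.0       # toy plant: joint velocity proportional to drive
    integral = 0.0
    prev_err = setpoint       # so the derivative term starts at zero
    for _ in range(steps):
        err = setpoint - angle               # comparator: demand minus feedback
        integral += err * dt                 # I term: area under the error curve
        deriv = (err - prev_err) / dt        # D term: rate of change of error
        drive = kp * err + ki * integral + kd * deriv
        angle += velocity_gain * drive * dt  # plant responds to the drive signal
        prev_err = err
    return angle

print(round(simulate(kp=2.0, ki=0.5, kd=0.05), 2))
```

With these assumed gains the loop settles essentially on the setpoint; dropping the integral term leaves a small steady error, and raising the gains too far produces exactly the wobble described above.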
Although it is an analogue process, with appropriate PID you can get
precise control with low errors and a fast response time. Ask a joint
to move a certain amount and it will move that amount reasonably
quickly, with minimal overshoot and minimum wobble having got there,
and will contribute to the overall accuracy of the behaviour.
Repeating this on all joints gives a reasonably well
controlled robot. Another approach is stepper motors. Stepper motors
have multiple coils inside the motor; you can pulse different coils
and move in a series of small increments. They can be used open-loop,
i.e. with no feedback from knowing where a joint has got to. With older
robots you can often hear a clicking or buzzing as they move, from the
engagement of the stepper motors.
Sending out or sending back signals would be via Digital-to-Analogue
Converters (DACs) or the reverse, ADCs. Some sort of digital
control is giving a numerical value that represents a point
in the range of motion of the joint. Say you have an 8-bit binary
controller, so numerical values range from 0 to 255 (8 binary digits).
If the range of a joint were 270 degrees, then divide that into 255
equally spaced rotational points in that range of motion.
A simple DAC is a series of resistors with values weighted according
to the binary scale: starting with value R, the next would be 2xR, then
4xR, and so on. Use 5V to represent logic 1 and 0V for logic 0; the
amount of current through each resistor is then weighted by that
resistor's value according to its corresponding binary value. So all
the 1s in your binary signal, represented by 5V and weighted by the R
values, are fed into a summing amplifier and added together. What
starts as a binary signal becomes a varying analogue voltage that can
drive the motor of the joint. There are now flash converters, using
multiple versions of this in parallel, and other types as well. That is
for sending a signal to a joint.
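The binary-weighted resistor arrangement can be sketched numerically. The logic voltage, base resistance and feedback resistance below are illustrative assumptions, not values from the talk:

```python
# Sketch of an 8-bit binary-weighted DAC feeding a summing amplifier.
# All component values are illustrative assumptions.

V_LOGIC = 5.0       # volts representing logic 1
R_BASE = 1000.0     # ohms in the MSB branch; each lower bit doubles the R
R_FEEDBACK = 250.0  # feedback resistor of the summing amplifier

def dac_output(code):
    """Summing-amplifier output magnitude for an 8-bit input code."""
    total_current = 0.0
    for bit in range(8):                 # bit 7 = MSB, through R_BASE
        if code & (1 << bit):
            r = R_BASE * 2 ** (7 - bit)  # MSB sees R, next bit 2xR, then 4xR...
            total_current += V_LOGIC / r
    return total_current * R_FEEDBACK    # Vout magnitude = I_total * Rf

print(round(dac_output(128), 3))  # mid scale (only the MSB set) -> 1.25
print(round(dac_output(255), 3))  # full scale -> 2.49
```

Each set bit contributes a current inversely proportional to its resistor, so the summed current, and hence the output voltage, is proportional to the binary code.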
To get a position reading signal back from a joint you will probably start
with an analogue source, varying voltage perhaps related to a varying
resistance , convert that to binary for the control software to understand.
So ADC first , feed that wiht a binary counter counting from say 0
to 255 bits . Each time it counts up, the ADC is converting to
a voltage. We compare the voltage we want to convert via a comparator
to a successivlely increasing value . When the 2 values are the same ,
you know you've hit the right binary value. The comparator wil lswitch
off that system and you have the corresponding digital value.
A simple ADC, more advanced versions these days.
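The counter-ramp ADC described above can be sketched in a few lines. This is an illustrative model, not real hardware code; the internal DAC is modelled as the ideal divider from the previous example.

```python
# Minimal sketch of the counter-ramp ADC: a binary counter counts up, an
# internal DAC converts each count to a voltage, and a comparator stops the
# count when the DAC voltage first reaches the input voltage.

def counter_ramp_adc(v_in: float, vref: float = 5.0, n_bits: int = 8) -> int:
    full_scale = (1 << n_bits) - 1
    for count in range(full_scale + 1):
        v_dac = vref * count / full_scale    # internal DAC output for this count
        if v_dac >= v_in:                    # comparator trips: values match
            return count
    return full_scale                        # input at or above full scale

print(counter_ramp_adc(2.5))   # 128: half of full scale
```

Note the linear search is why this type is slow compared with flash converters, which test all levels in parallel.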
Connected to this system of DACs/ADCs/actuators/motors/feedback
there is some control electronics, defining how the whole
system fits together. We tend to use block diagram representations for
this: how contributory signals are processed, inputs and outputs.
There is also a more advanced mathematical version of how the comparator-loop
system behaves. There are a variety of time-domain equations
telling us how a quantity is varying in time, e.g. how a joint moves
in time, how a load is affected; a variety of signals changing with time.
They often involve ordinary differential equations, and we use Laplace
Transforms to convert those into algebraic s-domain terms. This allows us to design control
electronics to meet mathematical specs.
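As a hypothetical illustration of the time-domain behaviour such control electronics must manage (not the lecturer's example), a joint can be modelled as a simple first-order system under proportional correction; in the Laplace domain the same loop is the transfer function k/(s + k).

```python
# Hypothetical sketch: a joint modelled by the time-domain ODE
#   angle'(t) = k * (setpoint - angle(t))
# stepped with Euler integration. The gain k and timings are made up.

def simulate_joint(setpoint: float, k: float = 4.0, dt: float = 0.01,
                   t_end: float = 2.0) -> float:
    angle = 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        angle += k * (setpoint - angle) * dt   # proportional correction
    return angle

final = simulate_joint(37.0)
print(round(final, 3))   # settles very close to 37 degrees
```

The error decays exponentially, which is the well-behaved "minimal overshoot, minimal wobble" response described earlier.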
Communications Protocols (e.g. RS485, Industrial Ethernet, MODBUS, etc.).
When you ask a robot to move to a particular spot, it doesn't do it
perfectly. So you want 37 degrees, but it won't necessarily
arrive exactly at 37, and it will not arrive at that exact same point every time.
So arrange for a joint to move to a position, go back to its start position,
and repeat over and over again. There will be a small amount of
variation, which tends to follow the Gaussian distribution curve,
or bell curve, or Normal distribution.
The probability of the joint arriving where we've asked it to
be is quite high, but that's not the whole graph; there are other
probabilities around it. There is still a low probability, not zero, not impossible, of it arriving
at a point some way away from the desired point.
We can calculate this probability and use that figure in a meaningful
way to design the workcell. We'd start by calculating a mean for the joint
variation: set up the experiment described before, ask it to go point/start/point over and over again, each time
measuring the deviation, giving a table of deviations; from that we
calculate the mean deviation and the SD. Those go into
one of the probability equations, then we use integral calculus to
work out the probabilities corresponding to the area under the curve.
More realistically, we'd use tables of data. This is all part of the
definition of what we mean by the repeatability of robotic joints.
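The table-of-deviations calculation described here can be sketched directly, using the normal CDF (via `math.erf`) in place of printed tables. The mean, SD and tolerance figures below are made up for illustration.

```python
# Sketch of the repeatability probability calculation: given a measured
# mean deviation and standard deviation, what is the probability the joint
# lands within +/- tol degrees of the target?
import math

def normal_cdf(x: float, mean: float, sd: float) -> float:
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def p_within(tol: float, mean_dev: float, sd: float) -> float:
    """Probability that the deviation falls in [-tol, +tol]."""
    return normal_cdf(tol, mean_dev, sd) - normal_cdf(-tol, mean_dev, sd)

# With sd = 0.33 deg, a +/-1 degree tolerance is about 3 standard deviations:
print(round(p_within(1.0, 0.0, 0.33), 4))   # ~0.9976
```

This is why a tolerance of around three standard deviations comfortably covers the "at least 99% of the area under the curve" target mentioned below.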
A crucial figure, especially in manufacturing, i.e. repetitive tasks
where one of the most crucial factors is that the task is done the same way each time.
That is part of the reasoning for replacing a person with a robot in the
first place: you should get less variation, if all is designed and implemented
correctly. Getting that figure for one joint gives you an idea
of a cumulative figure for all the joints in a robot.
Repeatability is defined as 6s, where s = standard deviation.
So if we want to arrive at 37 +/- 1 degree, we'd hope that at least
99% of the area under the curve would be between 36 and 38.
Also Control Resolution (CR) = (joint range)/(2^n - 1), where n =
number of control bits: a measure of the fineness of movement of a joint.
Also Accuracy = (CR/2) + 3s, joining together the concepts of
repeatability and control resolution. It's all very well saying move to 0.001 of a
degree, but how accurately and reliably will it do it? What is the biggest deviation?
Also Spatial Resolution = 2 x Accuracy = CR + 6s.
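Putting those formulas together with the earlier figures (8 control bits, a 270-degree joint range, and an assumed standard deviation of 0.05 degrees — illustrative numbers, not from a real datasheet):

```python
# Worked sketch of the repeatability / control-resolution formulas above.

def control_resolution(joint_range: float, n_bits: int) -> float:
    return joint_range / ((1 << n_bits) - 1)    # CR = range / (2^n - 1)

def accuracy(cr: float, sd: float) -> float:
    return cr / 2.0 + 3.0 * sd                  # accuracy = CR/2 + 3s

sd = 0.05                                       # assumed standard deviation, deg
cr = control_resolution(270.0, 8)
print(round(cr, 4))                       # ~1.0588 degrees per step
print(round(6.0 * sd, 2))                 # repeatability = 6s = 0.3 degrees
print(round(accuracy(cr, sd), 4))         # accuracy = CR/2 + 3s
print(round(2.0 * accuracy(cr, sd), 4))   # spatial resolution = CR + 6s
```

Note how, with these numbers, the control resolution dominates: a finer controller (more bits) would improve accuracy more than a tighter SD would.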
Relatively small figures for repeatability, a mm or a fraction of a mm, for a good robot.
We need other things alongside robots to use them meaningfully.
Work cells: the combination of a robot and the surrounding peripheral
devices or assembly lines it will interact with.
So things like peripheral transducers and actuators, not just the ones in the
robots themselves. You might want the robot to sense other things in its environment and react to them, so additional sensors.
Maybe also conveyors, or other peripherals like 3D printers.
Over this whole process is the overall control software, an operating system;
in the past something proprietary, specific to a maker, but now it tends to be
more generalised. Also available are
SDKs, Software Development Kits: software that
sits between the OS and the applications or programs that get the
robot to achieve a particular function. Also an
HMI, Human Machine Interface, allowing non-programmers
to interact with the robot.
Let us now consider a few examples of robotic manufacturing technologies
that have been used in the past.
Perhaps one of first industrial robots used in a manufacturing environment
was ‘Unimate’, used by General Motors in 1961 to handle some simple
movement of die castings and welding tasks.
It was famous at the time as it was so innovative.
Invented by George Devol in 1954, it had limited functionality but was good for systematic tasks. Memory was stored on a magnetic drum, long before hard drives were around.
It appeared on 'The Tonight Show' with Johnny Carson, waving a conductor's wand to conduct an orchestra and picking up an instrument.
PUMA - Programmable Universal Machine for Assembly.
Initial concept design invented by Victor Sheinman while at Stanford
University, then developed into the PUMA series in 1977 in conjunction with
General Motors. (Model 761, is a later model circa late 1980s). Used up to the
1990s for some applications.
Robots of this era tended to incorporate integrated joints, with the actuators and sensors inside the joint.
Some could be linked to and controlled by ‘modern’ PC computer systems.
Programming of functions for this generation of robots tended to be
joint-space focused, using coordinate maps.
To program a robot to go through a sequence of movements, the
coordinates would often be in a joint-space map. It's like a spreadsheet:
each column relates to one joint and each row to one position.
So if you wanted one pose of the robot, you'd specify the angle of each
joint; that would correspond to one row of the spreadsheet. If you had
multiple rows, i.e. multiple different positions, then running through those rows,
the robot would go through the same sequence each time. Then
'Teach Pendants' became commonplace: hand-held devices with which you could
fine-tune the robot's positions.
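The joint-space "spreadsheet" idea can be sketched as a table of poses stepped through in order. This is a toy illustration: the joint names, angles and the `move_to` stand-in are invented, not any real controller's API.

```python
# A joint-space program as a table: each row is one pose (one angle per
# joint, in degrees); the robot steps through the rows in sequence.

SEQUENCE = [
    #  joint1, joint2, joint3
    (0.0,   45.0,  90.0),
    (30.0,  45.0,  60.0),
    (30.0,  10.0,  60.0),
    (0.0,    0.0,   0.0),   # back to start
]

def move_to(pose):
    # Placeholder for whatever command a real controller exposes.
    print("moving joints to", pose)

for pose in SEQUENCE:
    move_to(pose)
```

Running the loop repeatedly reproduces the same sequence, which is exactly what this generation of robots was built to do.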
Range of available peripherals and communication protocols increased.
Programming languages tended to be bespoke and proprietary, then moved to
more standardised protocols that would work with different PCs and other
technology. An RS485 port on a robot would have been fairly common
around then, able to communicate generally.
PUMA had its own particular application software.
The Baxter robot, created by Rethink Robotics in 2011,
marks a significant change in the key features and structures of industrial
robots. There was much more variation outside of manufacturing.
It has much more capability for sensing its environment than previous
models of robot: a series of sonic sensors around the head, to create a sonic
field around it and detect objects entering its proximity;
cameras on the head and wrists; also IR sensors on the wrists; and ports on the
back to connect all sorts of other types of transducers as well.
The software has interaction baked in, it is collaborative,
emphasised by adding a face. The face changes to express basic
human-type emotions, for a sense of rapport with the human workers around it.
Robots of this era often incorporate some basic AI capacities (e.g. in inverse
kinematics solvers, where AI might be used). Forward kinematics
means you specify a particular position you'd like the robot
to move to by specifying the rotation of all the joints. With inverse kinematics
you tell it where you'd like it to end up and let it work out for itself
how to get there, finding an optimal or most efficient path for the end-effector
for a task. IK can use AI techniques, not Turing Test stuff
but genetic algorithms or inference systems or fuzzy logic, or
neural networks that can mimic learning-like behaviours, as a
living organism can. Extremely useful for solving certain kinds of
task. Awareness can be incorporated by simply importing a CAD
model, of a 3D-printed object say, or a drawing of it. A standard STL
file can be fed to Baxter and it will understand the geometry.
It will then understand how to avoid the object, i.e. not try to pass through it,
but go around it. Such awareness has been innovative.
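The forward/inverse distinction can be made concrete with the textbook two-link planar arm, which has a closed-form IK solution. This is an illustrative sketch, not Baxter's solver (a 7-joint arm needs iterative or AI-based methods); the link lengths and target point are assumptions.

```python
# Forward kinematics: joint angles -> end-effector position.
# Inverse kinematics: desired position -> joint angles (analytic here).
import math

L1, L2 = 1.0, 1.0   # assumed link lengths

def forward(theta1: float, theta2: float):
    x = L1 * math.cos(theta1) + L2 * math.cos(theta1 + theta2)
    y = L1 * math.sin(theta1) + L2 * math.sin(theta1 + theta2)
    return x, y

def inverse(x: float, y: float):
    # Standard two-link geometry: law of cosines gives the elbow angle.
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    theta2 = math.acos(c2)                       # elbow-down solution
    theta1 = math.atan2(y, x) - math.atan2(L2 * math.sin(theta2),
                                           L1 + L2 * math.cos(theta2))
    return theta1, theta2

t1, t2 = inverse(1.2, 0.5)        # where do the joints need to be?
x, y = forward(t1, t2)            # check: drive them there...
print(round(x, 6), round(y, 6))   # ...and we recover (1.2, 0.5)
```

Even this tiny arm has two IK solutions (elbow up or down), which is why real solvers must choose among many candidate paths.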
Baxter can be programmed to perform movements in joint space or a variety
of other phase spaces (e.g. momentum space): how the momenta of the joints are moved, or the velocities of all the joints, in applications where speed or
momentum are crucial, or the applied forces. So, much more flexibility.
Movements can also be specified in the form of quaternion operators.
These are more efficient, requiring fewer numerical values, and avoid 'gimbal lock',
where two or more independent axes of rotation can get
synchronised together in a detrimental manner. More conventional rotation
matrices, or roll/pitch/yaw, or X,Y,Z are also available, providing flexibility in how movements are specified.
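A minimal sketch of the quaternion idea, assuming nothing about Baxter's own API: a unit quaternion is 4 numbers (versus 9 for a rotation matrix), and composing rotations this way never hits gimbal lock the way chained roll/pitch/yaw angles can.

```python
# Rotating a vector with a unit quaternion: v' = q * (0, v) * conj(q).
import math

def quat_from_axis_angle(axis, angle):
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    s = math.sin(angle / 2.0) / n
    return (math.cos(angle / 2.0), ax * s, ay * s, az * s)  # (w, x, y, z)

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate(q, v):
    w, x, y, z = q
    p = quat_mul(quat_mul(q, (0.0, *v)), (w, -x, -y, -z))  # conj(q)
    return p[1:]

# 90 degrees about the z-axis sends (1, 0, 0) to (0, 1, 0):
q = quat_from_axis_angle((0.0, 0.0, 1.0), math.pi / 2.0)
print([round(c, 6) for c in rotate(q, (1.0, 0.0, 0.0))])
```

Composing two rotations is just one `quat_mul` call, which is part of why quaternions are the preferred internal representation for joint orientation.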
Robots of this era often are directly integrated with Vision Systems, allowing
visual feedback, via cameras, from the robot’s environment, which provides a
wealth of possibilities for how the robot may be programmed to ‘recognise’
and interact with objects, e.g. pick them up, using integral vision rather than
a separate vision system; from simple 'edge detection' to determining key
geometric features of objects, and adapting interactions accordingly.
A range of other transducers, including IR, and sonic sensors also allow the
user to program Baxter (and similar competing technologies) to be more
‘aware’ of their environment.
Crucially, with the appropriate software, this combination of integrated vision
systems and other transducers, along with the innovative design of the robotic
joints themselves, allows for ‘compliant’ operation, making the device
(relatively) safe for inter-operation with humans. Instead of stopping dead, which can still cause problems, or of course ignoring the obstacle and causing even
more problems, it can slow down gently and come to rest next to the
object. The object could be a table or a human.
Prior to Baxter-type robots, robots were often caged off, because of their lack of
awareness of what is around them.
A huge leap forward for human-robot interaction in industrial environments.
Robots like Baxter are hence among the first generation of industrial
manufacturing robots that can be described as 'Cobots', i.e. co-operative robots.
This allows for a more ethically sound and sustainable approach to
incorporating robots into small-to-medium enterprise manufacturing processes.
Workers can ‘train’ robots to perform repetitive and tedious tasks, hence
elevating the status of human workers in such job roles to a more supervisory
capacity, overseeing the robot’s operations for some specific set of tasks.
Industry 4.0: the idea that industry has gone from the industrial
revolution as the first stage, to the assembly plant, to fully automated production using robots,
and now to robots that can co-operate with humans.
In addition to industry usage, robots of the Baxter era are powerful research
tools in a variety of engineering contexts, e.g. for motion studies to
corroborate results from fluid dynamics or the movement of objects.
This is due to a range of factors, additional to those already alluded to,
including the use of general programming languages, such as Python and C, rather than
bespoke proprietary ones. Baxter uses an SDK / ROS (Robot Operating System) / Linux, all open source, which can be distributed, without licensing
issues, to multiple users per robot, so that those users can develop software in
parallel, offline, by for example using a bootable USB with this environment.
A range of visual/simulation environments are also available for offline testing
(again, open source), and one can also connect to Baxter via standard
networking protocols, such as ethernet, allowing remote connection.
Otherwise each time you make a new robot, you must create a new OS:
all that work and you're just reinventing the wheel. So use an OS
that is independent, or at least partially independent, of the
physical geometry of the robot, so it is platform independent and can
be migrated across different types. ROS is open source,
and new functionality can be added by anyone; others can make use of that
and everyone benefits. Engineers working in this field have more
transferable skills, as learning on one robot transfers to others.
It all runs on Linux and you get libraries of examples.
User and manufacturer interaction through development community.
Common platform (ROS) becoming widely adopted by many manufacturers as
a de-facto standard.
Research initiatives such as the ‘Million Object Challenge’, involving
collaboration with multiple Universities, particularly in the US and UK.
A database of standard day-to-day objects that Baxter can be aware of,
so all robots can have an inbuilt awareness of a million objects to start with.
Example code in Python is provided by Active Robots UK, showing how easy it
is to get started. You can remotely see how a Baxter would respond
to a test program without having a Baxter, so many programs can be
run in parallel even if you have only one robot. Again, the
simulation environment is open source, licence-free.
Compliance comes from using not just servo or stepper motors but series-elastic
actuators now. This came out of MIT, from Matthew Williamson,
a founder member of ? Robotics, his knowledge adding to the design.
Baxter solving a Rubik's cube: a demo of a flexible and dextrous task. The cube solving is
just a standard cube-solving algorithm; the impressive factor is
it integrating its vision system with its joint
movement. There are other robots that can solve a Rubik's cube blindingly
fast, but they are more dedicated machines.
Given recent developments, how might we best train/educate the upcoming
generation of engineers who wish to pursue a career in robotic manufacturing?
Modern manufacturing techniques and robotic technologies are dynamic by
their very nature, and subject to rapid, sometimes disruptive, change and evolution.
ROS and use of general programming languages benefits manufacturers, users,
and engineers working in the industry through transferable skill sets.
Problem Based Learning has been shown to be a highly effective method for
enabling engineering students to obtain core problem solving skills, in a variety of areas of study, and, crucially, can help them to develop a robust approach to learning that is adaptable to change.
This skill set is highly desirable, and transferable.
There are many current pedagogic research papers supporting this approach.
At Solent University we use a combination of traditional approaches (where
appropriate), and also PBL approaches to teaching and learning.
In particular, undergraduate engineering honours degree students in their
2nd and 3rd years of study undertake some group projects using our Baxter
robotics lab facilities.
These projects allow students to grapple with a challenging and realistic
problem, involving creating a manufacturing work cell, consisting of a Baxter
robot, a 3D printer, and other peripheral devices and assembly stations of
their own design. A lot of design and programming goes into this.
Involving real-world problems that occur in industry.
This work cell must produce sub components, which they must design to meet
an overall product specification, that we provide, and the Baxter robot must
participate in this process, and must be programmed to sort between multiple
sub-components, to identify, and quality check them, and then to assemble
them appropriately, and to finally present them as a finished product.
Although these projects are group-based, individual roles and responsibilities,
with distinct areas of focus, are assigned to each student.
Each domain of responsibility includes an opportunity to solve some distinct
and significant part of the overall problem, so that there is room for each and
every individual student to demonstrate their skills in. problem definition,
analysis, problem solving, testing, experimentation, adaptation, programming,
, fault-finding, repair, optimisation, and reporting.
The project is a summative assessment (it counts towards their grade), but prior to commencing the project
work, students must first complete a suite of formatively assessed (not counting towards their grade) laboratory
activities, which build their skill set with programming the Baxter robot, and
general problem solving in a robotic manufacturing context, and also help
them to develop an understanding of the capabilities, principles of operation,
and limitations, of the hardware.
This all builds upon underpinning applied engineering mathematics and
physics capabilities that they have already acquired and demonstrated in their
first year studies, exams, and assessments.
The key point with this process, though, is that, although we provide
appropriate support to the students, during the labs, and, in a supervisory
capacity during the projects, the actual summatively-assessed project work
is all down to each individual student to manage and to work out for
themselves, with only minimal and appropriate guidance or support from
lecturing staff along the way, which is what all the literature on PBL
recommends. We give regular formative feedback, in a manner
consistent with the recommendations of current pedagogic research as to how
PBL should be carried out in this context.
Also, whilst students get quite a traditional set of taught materials on
programming in other units of study, and will have learned to be proficient
with C/C++/C# prior to undertaking the PBL units in question, they are
expected to take some responsibility for their own learning of Python, ROS,
and the Baxter SDK for the PBL units, again with appropriate support from lecturing staff.
Students are also encouraged to reflect on their own learning process
throughout, and given regular formative feedback, in such a way as to
encourage them, not just to simply learn python, and a particular associated
SDK/OS platform, but, rather to learn how to learn new programming
languages/SDKs/OS platforms, by their own self-directed methods, gathering
their own supporting resources as they do, and evaluating the suitability of
such resources, all within the overarching PBL context. This is something
employers regularly tell us they value, and it matches our own industry experience as lecturers. On the other hand, we don't let them flounder on their own.
The idea behind this is that no student successfully coming out of these PBL
units of study should ever be likely to have any significant difficulty in adapting to future changes in these kinds of technologies, in industry.
A desirable skill set for industry indeed.
Outside of manufacturing and robots, I was thinking of future decades, the
care industry and shortages of carers. Has anyone been developing a sort of soft
robot, in the sense of simulating muscles for arm movement? I don't know what it's
called biologically: if I close my eyes, I can place a finger on my nose because of
some sort of built-in muscle sense?
Related to that, in the recent Ebola crisis, I saw there was a company
intending to use Baxter in that capacity, for handling contaminated
material. Robotics in medicine has been around for a while
now: robots that can perform surgery. The idea behind that is they
would be guided by an imaging system such as MRI. But then the robot would be
operating on a patient near a high magnetic field, so
no metallic parts in the robot. But for care you would need a softer
interaction, more humanised.
You're not aware of anything like biological muscle, contraction
only and always in balanced pairs, as an actuator?
I'm coming from a manufacturing rather than healthcare context; I'd be
interested to know about such work.
To get someone out of bed of a morning, you don't need mm precision;
you don't really need cm precision?
There are robotic hoists, robotically controlled, but a long way from the
sort of robot you are outlining. I know in Japan there is a lot of need
for an aging population and they are doing something along those lines.
A JCB with 2 hydraulic rams simulates the human arm quite
well. If you were building a robot to do big stuff, the JCB mechanism is
perfectly good. Human muscle cells have position sensors
built in, so the brain knows where the muscles are; could you just build
something like that in?
Hydraulic systems have much more punch than electrical systems?
Technologies like that exist in the form of exoskeletons, a large area of development.
In the manufacturing industry, can you foresee a neural-based connection
coming in, rather than fingers on keyboards? And could that replace it?
I've dabbled in a bit of AI stuff, but you're describing a neural linkage
to biological material. Neural links for very basic or niche tasks do exist.
Last week I read of some development to help people with ADHD:
a game designed to increase their focus for longer periods
of time, with a trans-cranial neural link. So, no probes going into
brain tissue; something along the lines you describe. A few years ago,
perhaps at Harvard, there was research to develop an artificial hippocampus,
again to replace damaged brain tissue. Not my area, but fascinating
to see such a collaborative approach. There is certainly a trend to make
robots more collaborative in the manufacturing environment.
It would be the next logical step, but I'd imagine it would raise some
ethical and security issues.
For the likes of industrial robots like those used in Ford's of Eastleigh when they
were working, doing very repetitive jobs: I imagine the first
task of the day is to go to a reference point X,Y,Z. How often would they
have to return to that point?
Again, this ties in with repeatability and the Gaussian plot. For X,Y,Z systems
they would often return to a reference, but that is often due to tool
wear. A robot with an end-effector that is some sort of machine tool
will often have to return to a station point. There would be calculations
built in relating to expected tool wear, but it can drift.
So they can routinely accommodate changes in room temperature and stuff
like swarf and dust getting into joints?
Yes. But as far as dealing with repeatability issues, it's less about checking
something against a datum, more about designing the systems the robot
has to interact with to cope with the maximum geometric
variation that you can in principle get, from variation and repeatability.
Say a robot had a repeatability of +/-1mm, which actually would be
quite poor, by the way. You can envision the end-effector as having a little sphere
at its tip of 1mm in radius. When designing the systems that robot will
interact with in a workcell, e.g. a sorting station where it is picking up
sub-components from containers or hoppers, you might
make a small flanged area around the rim of each hopper that is 1mm in radius,
so the end-effector is guided into the right area to pick up regardless.
Related to that, but you made no mention of it, I assume, is haptic
feedback. Can they have haptic sensors to somehow give a fuzzy feel to a contact?
One thing that has emerged around robots like Baxter is a research
community around end-effector design. Precisely to introduce things like
haptics into the systems. And also to emulate more normal
human hand movement, for more dextrous tasks. As well as more way-out
end-effectors that are nothing like human hands.
At the end you referred to PBL; is that what I would have thought is
called heuristic teaching? Learning by doing, or learning by...?
In a sense yes, but slightly different. It has to be deployed by people
not only with enough academic expertise in the field, but also practical
real-world expertise as well, in order to know when to steer
students in one particular direction or another. At the end of the day, all
things being equal, it would be nice to let them take as long as they need,
but we do have to fit it into the standard academic time scheduling.
We do provide some guidance, but within that they have to be able to
experience things for themselves. We want them not just to learn the
subject material but also how to learn material. So if the material
changes, as it is wont to do in this sort of field, they know
how to cope with that, rather than floundering.
PBL was developed for hypothetical reasoning in a medical context
but has since branched out into other areas including engineering.
There is a whole slew of papers on PBL for engineering:
what works and what doesn't work, including social science aspects.
You mentioned collaborative: between, what, 3 students together
on a project?
Yes, 3 would be typical. Each would be responsible for a different
part of the system, but we would choose a system whereby everyone
would have an equal amount, whether programming, problem
solving, maths/physics, or experimental testing. We will give
quite structured guidelines on what they need to have included
in that project. That way we ensure each student has an equal
but different opportunity: equal in terms of what it provides them
as a learning experience, but different bits of the overall problem.
It also gets them used to working in inter-disciplinary teams,
important in an industrial context.
Is that quite rare in the educational sphere, not just Solent Uni?
It is relatively new, but the slew of papers demonstrates it
is not rare. By no means is it the norm, let's say. Conventional
learning still has its place, but a blend with PBL can be
successful. We tend to do some pedagogic research around PBL
ourselves, having found it successful in implementation.
Tuesday 16 October 2018: 3 speakers from the West Solent Solar Co-operative,
the large solar photo-voltaic power station at Pennington near
Lymington (2.4 MW, 11 kV), covering the financial foundations for the
project, the construction, and then operation of the site.
Anthony Woolhouse, chairman of West Solent Solar Co-operative,
with 2 of the other directors, to share our experience of doing this and how it happened. The red patch to the south on the UK map of solar
irradiance is where our solar farm is. The coastal strip of our part
of the country is one of the best areas in the country to have a solar farm;
I think ours produces more than most. The field next to ours has lots
of geese on there, and that was the biggest problem we had during the preparation
of the project: English Nature read our environmental report, which said there
was the potential for over-wintering geese to land on your field.
We said, not so, because they land next door, where there is a pond,
which is also a filled-in gravel pit. We unblocked this impasse by inviting them
down to our site.
The point of connection to the existing electricity supply: we saw this
connection being made, a man on a cherry-picker clipped on our
lines from the farm. This 11kV line was in the field already, which
helped a lot. And it had been upgraded the year before we found the
field. Even more helpful, as you don't want to have to upgrade
a long supply line.
An aerial shot of the farm, showing the IoW. It's about 12.5 acres,
sitting quietly in the landscape. We said to neighbours it will
make no noise, and it doesn't. There is a gap in the arrays
of panels to accommodate an underlying sewer line. We asked Southern
Water if we could build panels over their sewer line and they
said, well, you can, but you'd have to move them pretty quickly
if we want to work on the sewers. So we left it blank.
There is also a straight gap, a sight line to the nearby house.
The house owner is a member of our co-op, but he did not want to
see uniform rows right across his view, so could he have a gap there.
How do you find a suitable site? We asked an estate agent in
Lymington. He took me to a site, but it was down-wind
of a recycling plant with lots of dust in the air, not good for
coating solar panels. There was another field, but not on the market.
So, standing on this field for the first time, thinking: it's flat,
it's south facing, it's not particularly overlooked, and no-one knows
it's here. You can't do these things by yourself, so we formed a
board. They're all local people: 2 engineers, a corporate lawyer,
a sustainability expert who was sustainability manager for the
Olympics, and someone who was head of the New Forest transition
group. I have planning and small-business start-up skills.
So we had most of the required skills on the board. It's about creating a team
that can do these things. The board is essentially the same now,
after 4 years.
The site belonged to a family trust who had made money from its
previous use as a gravel pit. Then it was filled with inert
construction waste; we do have some gas monitoring points on our
field though. They thought we wanted to build houses on the
site. It took a while to acquire the site, but better that way than
having an external landlord.
We went to all the neighbours before submitting a planning
application. We met the planner on site, which was useful. We did the
planning application ourselves. The planners said we had to improve the
graphics, as it was a bit Heath Robinson. We were also required to
get an ecological survey. In the end, no-one objected to
the planning application, nobody. For a renewable energy project that
is remarkable. This was because we did a lot of consultation, and
many people in the nearest area are now members of the co-op.
We sent out tenders to build the site and we chose a company,
Solar Century, one of the most established constructors of solar
farms. We needed 2.6 million, and we benefited from the Seed
Enterprise Investment Scheme and the Enterprise Investment Scheme,
which give tax relief on investments. The government has now
taken that away from community energy projects. We had open
days in Lymington and on the IoW. We said we were a local project
and the electricity is used locally. But our back-office is
a company called Energy For All, in Barrow-in-Furness.
We were their first solar project; they had previously
been doing wind farms up to then. BBC South was a great supporter of ours.
They filmed us 3 times: once when we were just a vision, once when the first solar panels
went up, and then at the first open day with our members. Solent Radio
interviewed us and BBC Inside Out came over later on.
We used all the networks we had, so bee-keepers, Quakers, New Forest
Transition, FoE: networks that each of us were in.
We needed 2.6 million and we raised 2.9 million in 6 weeks, and had to
give 300,000 back. 55% of our members live within 30 miles of the
site. We also did a bond, which pays 5% for 5 years with 1/5th of the capital back in each of the
5 years, so only the final year of that is left. We are in a retirement area, and
saying to someone it's a long-term project, a 5-year bond did play
well. All the bond holders are in Hants. A Quaker group in Southampton were unable
to put panels on their roof, so they invested in us instead.
It's a community project, and the first thing we did was to plant a hedge:
hard work, as it's not natural soil. We got the whips, planted with
protective tubes, from the Woodland Trust. We are about to remove
all those tubes now. Our hedge was not to hide the farm; we planted it as a
wildlife corridor, trying to improve the biodiversity. Our
ecological survey said you could only improve the site, as it had
been a gravel pit. We are working with the Hampshire and IoW
Wildlife Trust. We planted wild-flower seeds over the whole
site, and they described that as maritime wildlife, coastal plants.
Cathy Cook: I'm an engineer on the board and one of the
founder directors. I'd put solar panels on my own roof at home.
We are in the New Forest, and so a conservation area, and you find
it's difficult to put solar panels on roofs because of the
planning regs. So perhaps a field outside Lyndhurst where we
could put a solar array, but not very likely. Then Anthony
came to me saying he'd found a field suitable for a solar farm.
I wanted to proceed, as it brings benefits in all sorts of
ways. So, the technical stuff. With the existing power line across the
field, we were limited. SSE said the network had been reinforced in the
last year, meaning we could have a connection. If the line had not been
upgraded, we would not have been able to make any connection at all.
They said we could have only 2 megawatts maximum output.
So we had to keep it below 2MW: this was the first parameter.
We had 12.5 acrs and we could gt more than 2MW
from that area. We wanted a farm with capability more than this
limit. Graphing out power output vertically and time of day
on the x-axis. 2 curves, the inside one is for 2MW of pannels
so a total maximum of 2MW. The upper curve is for 2.4MW
of pannels , although we could physically fit 2.6MW.
There is a break-even point between cost and benefit of the extra
.2MW. Wintertim ouput and mid summer output. At 2MW
, mid summer May, June and July part of th curve is shaved off
the top. That is the only penalty for having more than 2MW
capability on the whole sit. It benefits us , by the shoulders of the
curves, mid winter and the othr months you ar gaining more
output. This ends up a big gain, although apparently illogical.
Its is these gains for the rest of the year and the rest of each day
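The trade-off described here can be sketched numerically. This is a toy model, not the farm's real data: output is assumed to be a half-sine over a 12-hour day, scaled by a seasonal peak fraction, and clipped at the 2MW export limit. The seasonal fractions are illustrative assumptions.

```python
import math

GRID_LIMIT_MW = 2.0  # SSE export cap from the talk

def daily_energy(capacity_mw, peak_fraction, limit_mw=GRID_LIMIT_MW, steps=1000):
    """Energy (MWh) over a 12-hour generating day, modelling output as a
    half-sine scaled by array capacity and a seasonal peak fraction,
    clipped at the grid export limit."""
    total = 0.0
    for i in range(steps):
        t = math.pi * i / steps                       # sun angle over the day
        power = capacity_mw * peak_fraction * math.sin(t)
        total += min(power, limit_mw) * (12.0 / steps)
    return total

# Midsummer (panels near rated output) vs a shoulder month (assumed 60%):
for season, frac in [("midsummer", 1.0), ("shoulder", 0.6)]:
    e20 = daily_energy(2.0, frac)
    e24 = daily_energy(2.4, frac)
    print(f"{season}: 2.0 MW -> {e20:.1f} MWh, 2.4 MW -> {e24:.1f} MWh, "
          f"gain {100 * (e24 / e20 - 1):.0f}%")
```

Even with the midsummer peak shaved off, the larger array still gains; in the unclipped shoulder months it gains the full 20%, which is the "apparently illogical" net benefit.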
Q: How do you spill/expel the excess?
It's all to do with the inverters; coming up.
We have 255W panels, that being the peak output at midday in high summer.
9372 of them. This was the first solar farm that Solar Century did with the
panels not in portrait orientation but in landscape.
Again, big benefits. With portrait, when the sun is low
in the sky, any kind of shadow on an adjacent panel means
that whole panel, not just the shaded patch, cannot generate very much
at all. So if there is a bit of shading just at the very bottom of a portrait
panel then the whole of that panel is out. Put them in landscape
and you've only lost one panel out of 2, if a 2x1 aspect.
It's possible to increase output by 25% in low sun by doing that.
The inclination, the slope of the panels, for this latitude of the
planet is 22 degrees, the best angle for that, and we optimised the
separation between rows to 8m. So there is some shading in the middle of
December and very early morning throughout the year.
That is the optimum; beyond that you are spreading out the panels so far
that you are wasting space where more panels could be placed, gaining
power at other times of the day.
The neighbours: one house is quite close to the field. They'd seen the last of the
gravel pit, with no more huge lorries dumping construction waste,
the land relieved by topsoil and grass seeding. Peace and quiet is an important element
of all this. So putting a solar farm there meant no houses could
be built there, no vehicles driven round, no industry on that site
for the next 25 years. As long as we keep it quiet and make it green,
the owners should be as happy as they could be. As a family they
were interested, including a son interested in matters ecological.
Among the different designs from the different tenders, some
had one big central inverter, the kit that turns your DC from the panels
into AC for the grid; others had lots of small inverters dotted around the
farm. When I was reviewing the specs given by the different
contractors, I was looking at the air flow for the central big single inverter
room. It was an enormous flow rate that would make an awful
racket from all the fans. Going to the decibel ratings, it was something
like 65dB at 30m; any nearby householders would hear this all
night long. The board discussed this and a big central inverter was out.
And B, all your eggs are in 1 basket if anything goes wrong.
So we have 64 inverters, all rated 30kW maximum.
That's the point: though we have 2.4MW of generation,
it's the inverters that control the output, so there is never more than
2MW being exported.
The inverter controls the conditions that the panels are operating in.
There is a maximum power point; every couple of minutes or
so it switches off for a short interval and swaps the conditions around.
In normal use you want to make as much power as possible,
and the maximum power point tracker is always looking to get
maximum power. But when you are at maximum power and it's
too much, then the tracker will move off the maximum power
point until it reaches the amount of power exportable.
It basically desensitises the panels so the energy is never
generated in the first place, so it does not then need to be thrown away.
So no heating or dumping of power is required; it's not developed
in the first place. All done by software.
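The curtailment logic can be illustrated with a toy I-V curve. The `panel_power` model below is not a real cell equation, just a shape with the right qualitative behaviour; the voltages, currents and the 200W limit are made-up illustration values. It shows the tracker finding the maximum power point, then deliberately backing off towards open-circuit voltage so the surplus energy is simply never generated.

```python
def panel_power(v, v_oc=40.0, i_sc=9.0):
    """Toy string I-V model: current falls off sharply near the
    open-circuit voltage v_oc; power = V * I. Illustrative only."""
    if v <= 0 or v >= v_oc:
        return 0.0
    current = i_sc * (1 - (v / v_oc) ** 8)
    return v * current

def operating_point(limit_w=None, v_oc=40.0, steps=4000):
    """Scan the curve: track the maximum power point, but if that exceeds
    the export limit, move off the MPP towards v_oc until power drops
    under the limit (nothing is dumped; it is just not generated)."""
    points = [(v_oc * k / steps, panel_power(v_oc * k / steps))
              for k in range(steps)]
    v_mpp, p_mpp = max(points, key=lambda vp: vp[1])
    if limit_w is None or p_mpp <= limit_w:
        return v_mpp, p_mpp
    # pick the lowest voltage above the MPP whose power is under the limit
    candidates = [(v, p) for v, p in points if v > v_mpp and p <= limit_w]
    return min(candidates, key=lambda vp: vp[0])

v_full, p_full = operating_point()            # unconstrained MPP
v_cap, p_cap = operating_point(limit_w=200)   # curtailed to 200 W
```

In real inverters the tracker does this continuously in firmware rather than by scanning, but the principle is the same.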
Q: How does it alter the effectiveness of the panels?
It's down to fundamental physics and the energy gap of semiconductor
materials. A photon hits a solar cell, an electron is ejected, and being a
semiconductor it does not go back to where it was. There is an electric
field and the electron is pulled away, making the circuit. By changing the
voltage, that electron is more likely to return rather than go round
the circuit. When we had decided on the configuration, SolarCentury did a
simulation on software PV6(?) to optimise the separation between the
rows, panel inclination, sun angles, and hedge height around the field.
They came up with an output of 2.5GW-hours per year.
Enough for about 650 homes. I think the biggest solar farm is 15MW,
in Leicestershire. There are 9372 panels in all on our farm.
A flow chart of how the farm was put together. There is a lot of
wiring connecting all the panels together: laid end to end there would be
10 miles of panels, so about 40 miles of string cable.
There are strings of 22 panels, and each string is connected to an inverter.
With only 22 per string, it's like fairy lights, all DC in a single
circuit. So if one goes out then 22 go out; you wouldn't want,
say, 150 panels to fail in one go. Then 6 or 7 strings go to
one inverter. It's a gathering process. About 10
of those will lead into a distribution board, a green box with
400V output and very heavy cable, so about 1500 panels'
worth. There are 7 of those; they go to the step-up
transformer to go from 400V to the grid voltage of
11kV. Red cable now instead of black, as it's 11kV, 3-phase.
This one is aluminium, the others were copper, so a lot lighter.
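The figures in this gathering chain hang together, which a few lines of arithmetic confirm. All the numbers are from the talk; the derived quantities (426 strings, roughly 6.7 strings per inverter, 1.92MW AC ceiling under the 2MW cap) follow from them.

```python
# Sanity-check the gathering chain described above (all inputs from the talk).
panels = 9372
panel_watts = 255
panels_per_string = 22
inverters = 64
inverter_kw = 30
boards = 7                                     # distribution boards

strings = panels // panels_per_string          # 426 strings, exactly
strings_per_inverter = strings / inverters     # ~6.7, the "6 or 7 strings"
inverters_per_board = inverters / boards       # ~9, the "about 10"
array_mw = panels * panel_watts / 1e6          # DC capability of the field
export_mw = inverters * inverter_kw / 1000     # AC ceiling set by inverters

print(strings, round(strings_per_inverter, 1),
      round(array_mw, 2), export_mw)
```

Note the inverter ceiling of 1.92MW sits just under the 2MW connection limit, consistent with "never more than 2MW being exported".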
It passes through the substation, which is like a safety switch, before it
joins the grid. That is the mandatory piece of equipment to feed into the
SSE network, about 90,000 GBP. This was the first item on the site, and it
was so heavy the lorry sank into the ground. We had the sand and gravel
neighbours send along a tractor to haul it out. It was the very wet winter of
2013-2014. If something goes wrong on the grid or on the solar farm,
this switch will trip out, requiring someone to come in and reset it.
Beside the substation is the metering cabinet, the onsite meter, with also a
wireless link to the people who pay us for producing the electricity.
The output of the substation is fed via 11kV cable up to the overhead
grid line over the field.
Q: Can the field flood?
No; it can get very squishy, but it's not a safety issue. The kit is at least 3 feet
off the ground.
Q: Is the wiring above ground?
No, it's all buried, of a rating suitable for siting underground.
The field was just green grass growing there on top of the inert waste.
The first work was pile-driving, to take the framework to mount all
the panels on: 1065 piles to be driven 6 feet into the ground,
so as much beneath as above the ground. Done in just 3 days.
We wrote to the neighbours saying you may wish to go away at this time,
it would be 3 days. A well-oiled machine: the crew raising the
framework took about 1 week. Fitting the panels took about
10 days. Then the electricians arrived; I don't know how they
manipulated the thick cables into place. Then 2 days before
start-up SSE came in and connected up, with the grid switched
off, about a 2-hour outage in a morning to do that. Then
go-live, 27 June 2014. All that construction took only
6 weeks. Everyone knew what the routine was, no panic, and it
all worked extremely well. They finished 3 days before the feed-in
tariff dropped; we just made it in time. There is a video on
our website, West Solent Solar, under the construction movie.
Our generation is now above budget. Simulation showed we
should have about 2.5GW-hours per year and we're actually making
2.78 on average over the 4 years of running, 11% more than
expected. That is due to us being in a very sunny spot, and because we are
right on the coast we always have some wind blowing, what with
sea breezes and a funnel effect from the IoW. Even on the hottest days
this summer you could always feel a breeze. These breezes cool
the panels, countering the heating from the sun. That cooling reduces the
resistance and increases the current flow.
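The cooling effect can be put into rough numbers. Silicon panels lose output as cell temperature rises; the -0.4%/degC coefficient and the example temperatures below are typical datasheet-style assumptions, not the farm's actual figures.

```python
def panel_power_at(cell_temp_c, rated_w=255.0, coeff_per_c=-0.004, ref_c=25.0):
    """Output of one 255W module at a given cell temperature. The
    -0.4 %/degC coefficient is a typical crystalline-silicon value,
    assumed here for illustration."""
    return rated_w * (1 + coeff_per_c * (cell_temp_c - ref_c))

# A sea breeze holding cells at, say, 45 degC instead of 60 degC on a
# hot, still day recovers several watts per panel:
cooled = panel_power_at(45)
hot = panel_power_at(60)
print(round(cooled, 1), round(hot, 1))
```

Scaled across 9372 panels, a few percent per panel is consistent with output running ahead of the simulation.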
Tuesday a week ago, our power output was perfect for a full
day, with only a few dips from wispy cirrus cloud. Even in autumn
we had a peak of 1700kW. In contrast, the Sunday after was
extremely wet. It just shows how it would not be worth trying to
generate at a very wet site.
This September we were 20% above budget; only 5 days in the month were low,
with 25 days of good generation.
How about when things are going wrong, how do we know?
Also maintenance, when things are not quite right.
In the early life of the farm someone noticed we were there:
a post-grad student needed a test-bed solar farm for a drone with a
thermal imaging camera that would spot hot-spots.
Some photos of the drone and its monitor, and what we got back
later: some images of orange stripes, but a black circle around one spot,
a very slight whiteness to the yellow, meaning something hotter than normal.
I have a FLIR thermal imaging static camera that is high resolution, and
we could go on the ground to that panel. Immediately we could see what the
problem was. A big woody weed had been growing to the south of the
panel and it cast a shadow permanently across the panel, producing an
area of high resistance, a hot-spot. Diagonally opposite that hotspot
the same panel was also misbehaving, the reason for that
extra spot unknown at this time.
There were about 3 other instances of hotspots, those down to
bird poo, big splodges. Even if it rains it does not wash off
easily. So we make sure we cut the weeds down in front
of the panels; about twice a year keeps that under control.
We wash all the panels to make sure there is no big build-up of detritus.
Such a wash once a year is adequate for this. We clean with de-mineralised
water, doing it about June, because the birds are active April/May,
and June means it's hot enough for the panels to dry quickly.
So we spotted 4 panels out of 9372 that were faulty; we have some
spares, about 18, as part of the package of the setup.
We unplugged them and replaced them. The old panels were still
working, but not fully.
In terms of maintenance we have a 25-year warranty on the
actual solar panels. On the framework also 25 years, galvanised
steel called MagiZinc. The inverters have a 10-year warranty.
We had a number of issues in the first year, where pretty much
each inverter had a failure of some sort. The manufacturer could not
slide on that and they had to fix them. In the end they did a complete overhaul
of all 64 inverters, and there have been no major problems since, just the odd
minor problem we could sort ourselves.
We know if an inverter has gone wrong because we have
remote monitoring: a satellite dish and comms to our monitoring
centre, and they tell us on our PCs at home. We also have an
operational maintenance contractor who monitors.
There is a project with Soton Uni who are going to also use a
thermal imager on a drone and go down to string-level monitoring,
which would give us an enormous amount of detail.
We have enough info as it stands to make sure we don't
have any major operational glitches.
Q: Who made the panels?
A company called Qcells, originally German, with manufacturing moving to
Poland and ownership transferring to S Korea, to Hanwha.
Q: What would you say was the minimum amount of land for a
solar farm? About 0.5MW, so 2 to 3 acres. It doesn't have to be
on the ground, it could be on a warehouse roof. Many of the big logistics
warehouses for supermarkets have panels on their roofs, so are in effect solar farms.
Q: Railways have lots of embankments, and it occurs to me you
could put loads of panels along them?
There are 6 test sites being developed at the moment to
supply power to the third rail of South West Trains. It's DC
to DC, although the first ones will be DC via inverter to
AC for a cable alongside the tracks, and then inverters back
to DC at certain places. One is at the oil depot north of
Soton. That railway project is being done with Imperial College.
We are not only operating the site; there are some other activities.
I'm heavily involved with school visits: 35 school groups have attended,
800 kids over 4 years. You do need a toilet, but the first toilet
got blown over in the gales. We put in a lottery grant application for a
composting toilet, a little wooden building, all self-contained.
It doesn't smell, doesn't use power or water, just hand sanitiser.
A visual aid for the kids: a single panel driving a train set round a track, until you
place a kid in the sun's path. An open day for the local dignitaries.
2015 was the big Tory hit on solar panels and renewable energy.
We had BBC South along, and with them were 2 people from
Cambodia, so they could go back to Cambodia and create a
solar farm themselves.
Biodiversity management. We had the Hampshire & IoW Wildlife Trust
come and assist us with the early planning and provide the
seed for seeding the whole field after construction was over, and they
continually monitor its progression. It's a field in recovery:
it's been dug up, filled with rubbish, covered with grass.
We lost 3% of the grass to the solar farm, just 3%.
When they dug out the roadway, they produced soil heaps.
They were going to spread that soil around the field.
When the constructors were there they had doubled-up
portacabins for themselves, which gave an excellent view over the site.
So we got them to move all the soil heaps into one spot
for a viewing platform. So for next to nothing we got a viewing
platform, with a ramp for wheelchair access.
An important group of unpaid volunteers was involved in transforming
the site. Twice a year a flock of sheep is brought onto the site
to eat the growth, and their droppings fertilise the soil as well. There are rich patches of
flora where the droppings have been. It's been pretty fast generating good
growth. These are the mowers: twice a year they do most of the mowing.
We no longer bring in mechanical mowers, just a strimmer to take out
tall weeds in front of the panels. The panels are high enough for a
sheep to get under.
Q: Do the sheep or other animals interfere with the cables?
Around the inverters we put in sheep fencing so they can't get to the
cables. We had a detailed review of where sheep could get to around the site.
Also via the Wildlife Trust there is a Malaise trap(?) concerning insects
and how they are coming back.
When we first came to the field in 2013 it was springtime, but silent:
no insects or birds heard. We didn't know it then, but no
insects in the soil either. When we dug holes for the trees, no
insects. There were some insects on the patches of original soil, but
where the grass seeds had been strewn on the new soil, nothing; very odd. We are now getting
good levels of insects back on the solar farm. We have reptile mats
around the site, and the first snake, a slow worm. We're delighted with how it's recovering.
Q: Any issues with being near the sea, from salt encrusting etc.?
We do. Everything has been specified in materials that cope with a marine
environment, so the extra salinity. The metalwork of the frames
is made for a marine environment.
Q: Sea breezes covering the glass with salt, or pitting of the glass?
Not a big issue.
What could happen in the future? Things have
changed even since it was built. There is a lot of tech around
for remotely monitoring and making the grid more efficient.
What was the National Grid switching power stations on and
off is now being devolved locally. The distribution network
operator, SSE for us, is now doing National Grid-type
stuff, handling smaller assets like solar farms and batteries etc.
Small operators like ourselves can now be involved with what was
traditionally for big-time operators only. Hopefully this sort
of tech will be of more use to the solar farm. People often think batteries will
be important, so solar power can be used at night.
That will not be the use of batteries though. A graph of
generation and consumption, which have to be equal as there is no
storage in between: whatever is generated is being used.
The grid does the magic of balancing the two.
Night-time consumption in the graph is very low; it goes from 25
to 40GW over about 2 hours in the morning, and the grid balances that
by switching on a lot of generators. The red line is fossil fuels,
mainly gas, the blue line is nuclear, green is renewables, and the
other line is import from France and Holland, soon to be expanded
to include Germany, Iceland and Denmark.
The renewables are a small part of the graph on that particular day.
The chance of having surplus renewables is a long way off, so
batteries used for storage wouldn't be of use for a long time ahead.
The gas usage very closely tracks the overall generation.
The grid uses gas as the main control, as gas plants are easy to
start and stop. Nuclear is a matter of keeping it running all
the time, and renewables always have priority as they are carbon-free.
The same sort of plot, but over the last 5 years. The demand has been
going down, and had been going down for many years. The green is going
up, and hopefully that will go higher. The gas generation
follows the consumption almost precisely, because it's used as a control.
The electricity market is horrendously complex. The price of electricity
is always changing through the day. So 7am and the evening
are when most power is required, and that is when the prices are highest.
At night the prices are very low. The Grid would prefer you to
be using electricity at night, as it's cheapest then. They have all that
generation capacity and it's not used at night.
So gas stations are only switched on for a few hours each day.
The grid would like to levelise the graph. This large difference in
demand means they have to have enough generation sets
for the peaks, and all the cables in the distribution network
have to be thick as they take all that power, but only for a
few hours a day. If they could levelise the plot, then
fewer generator sets would be required, kept running longer, and they would not
need such thick cables. Britain seems to be the only country where it's
not standard to have domestic night-time/daytime dual tariffs. All my continental
friends are in the habit of switching on washing machines overnight.
But in UK industry, dual tariffs are common.
Q: If you manage to flatten that curve, would that mean the
price of electricity would come down for customers and the grid?
It would be more efficient and should be cheaper: fewer generators
need building, and less metal infrastructure. That is the basis of why the smart
grid is coming about. If renewables were not on the scene
this exercise would still be coming through; batteries would still
be important even if there were no wind or solar power.
A perfect-day solar output plot for the farm shows what happens with the
64 inverters. They individually cap their output when they reach their
personal limit, so energy is lost in that it is not generated at the peak time,
even though we have a 2MW connection, with 64 inverters at 30kW
max each and some efficiency loss in the transformer.
We have that connection for 2MW and we only use that max capacity for a couple
of hours a day. That connection is one of the farm's biggest assets.
With such a connection, a battery would be an obvious extension.
Q: Could the grid give you extra capacity to go over 2MW?
Yes, but it would require extra cabling; the extra generation
would not cover the additional cabling cost.
In our geographical area, to take out all the possible power of
2.6MW if we filled the whole field with panels, it would cost something like
2 million GBP of reinforcement just to take 2.6MW.
It is a physical limitation: we cannot put more power up the cables that
are already there. We are lucky they had locally expanded some of the
grid, not knowing we were coming, but it did allow us to put
2MW onto the grid. With 2-million-type figures it would
have to be in a very central place to benefit lots of people.
We could take that surplus power and put it into a battery.
But that possibility would be so infrequent that it would
probably be too expensive; not realistic.
If you overlay prices with our sort of generation, our key generation
is when prices are low. Another reason we could use a battery is
to charge it with the power that occurs at midday,
then release it in the evening when the prices are high. That is when the grid would like
you to be releasing it; everything is driven by price. The environmental
benefit of storing the power and delaying it is quite high, allowing the
grid to flatten out their curve.
Q: Have you done a feasibility study of this?
Only roughly, and the figures don't really add up, but they probably will
do in the near future. What would work is if we buy in
electricity at the low price. We can probably predict how much solar power
we will produce the next day, buy in power cheap, top up with our
solar and then sell it on at the higher price. All very business-orientated.
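The buy-cheap, sell-dear arithmetic can be sketched in a couple of lines. All the prices, the capacity and the round-trip efficiency here are made-up illustration values, not figures from any actual study.

```python
def cycle_margin(capacity_mwh, night_price, peak_price, efficiency=0.85):
    """Margin in GBP for one charge/discharge cycle: buy capacity_mwh at
    the night price, sell capacity_mwh * efficiency back at the peak
    price. All inputs are illustrative assumptions."""
    cost = capacity_mwh * night_price
    revenue = capacity_mwh * efficiency * peak_price
    return revenue - cost

# e.g. a 2 MWh battery, buying at 40 GBP/MWh overnight, selling at
# 90 GBP/MWh at the evening peak:
margin = cycle_margin(2.0, 40.0, 90.0)
```

Whether such a margin ever covers the capital cost of the battery is exactly the feasibility question raised above.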
It has the side benefit of helping the grid with their aim of
straight-line balancing. If that sort of operation occurred, we would
probably not be running it ourselves but operating as part of a group.
For relatively small operations such as ours, it's starting to
become feasible that we could expand into these sorts of projects and
be included in the local grid, assisting the national grid.
We will keep our ears to the ground; it's something that is likely to happen.
The big Tesla battery in Oz. It's a bit of a cheat: it's connected to the Oz grid
as power back-up for a big wind farm. On one occasion a
coal-fired power station dropped out over a matter of a second or
so. On the Oz grid they have coal-fired generators on standby,
turning and burning energy but not actually doing anything, until
required. They should provide power within 6 seconds. But the
Tesla battery, 1000 miles away, detected that the power was off
and started injecting power within milliseconds. This stopped
the grid frequency from going low. Then the coal-fired back-up
could take over, bringing the frequency back. (Were Tesla
contracted to do this, or not?) This is the sort of service that
batteries are used for in the UK, stopgap services for a few seconds.
That is their main use as it stands at the moment.
Peer-to-peer trading. We're not one of the Big Six, but some pilot projects are
going ahead, to sell tiny amounts of electricity on the market.
We'll pretend the Co-op are running an energy market,
with an individual buying power. Instead of going to 1 supplier like SSE,
they would have the option of buying from me. I have 16 solar panels
on my roof, and in the summer quite a lot of surplus, in the winter almost
none. But I could have a deal with a friend of mine and say that any surplus he
can have for 10p. That would not be enough to supply him, but he
could have another small supplier and buy their power for 11p.
Keep going through all your friends and organisations that provide small
amounts of power, so they have a list of friend-suppliers and prices.
Beyond that there would be a large supplier like Ecotricity or SSE
as your back-up, as there is relatively unlimited power from them.
This is likely to happen within a few years.
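The matching the market operator would do, as described, is essentially a greedy allocation down a price-ordered list of friend-suppliers, with a large supplier catching the remainder. The names, quantities and prices below are invented for illustration.

```python
def match_demand(demand_kwh, offers, backup_price):
    """Greedy peer-to-peer matching as described in the talk: take the
    cheapest friend-supplier first, then the next cheapest, and top up
    any remainder from a large back-up supplier (effectively unlimited).
    offers: list of (name, available_kwh, price_per_kwh)."""
    plan, remaining = [], demand_kwh
    for name, available, price in sorted(offers, key=lambda o: o[2]):
        take = min(available, remaining)
        if take > 0:
            plan.append((name, take, price))
            remaining -= take
    if remaining > 0:
        plan.append(("big-supplier", remaining, backup_price))
    return plan

# A buyer needing 10 kWh, friends offering at 10p and 11p, back-up at 15p:
plan = match_demand(10, [("friend-A", 4, 0.10), ("friend-B", 3, 0.11)], 0.15)
```

The real pilots layer billing and settlement on top, but the allocation logic is no more complicated than this.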
So the likes of West Solent power could deliver power directly to its
members, organised by someone like, say, the Co-op.
The Co-op would process all the data: find the power I was generating,
how much my friend was buying off me, and go through the list
of participants in the system. Feed-in tariffs are going next March,
but such a system as this would make it more feasible for people to
continue installing solar. Most domestic generators use only about
20% of the power they generate. So it would be possible to have
viable non-subsidised domestic solar.
How much of the grid do you think we use to supply our electricity?
How far does it go; how near is the consumption to the generation?
It's about a mile and a half. It goes down the local distribution
networks and does not touch the National Grid, so we do not incur
the National Grid costs.
The SSE network control at Portsmouth covers this part of Britain;
the Scottish bit is controlled in Scotland. They could show me
where our solar farm was, and it's right at the end of the line,
geographically at the end of the lane in Pennington,
and the sea shore is about 1/4 mile from us. Our name appears in small letters on their
control board. So we know all
the power is going back up, into Lymington.
So in the middle of June, no heating on, people not wanting a hot dinner,
how far would our power get back into the SSE network? They
said it would probably go all the way back up the west side of
Lymington, as it's split into 2 halves, and probably get as far as the
back of Lymington Hospital. So we could confidently say that most
of Pennington was cooking lunch on our solar in June, even if only cups of tea.
What is the return on investment?
We've been paying around about 5% a year since we started.
On balancing the grid and trying to smooth out peaks and troughs:
with balancing by batteries and more renewables, will we
reduce our carbon emissions overall?
There is a lot of power lost with the gas turbines starting and stopping.
So at least there are savings there, with the big generators running longer.
So good for everyone all round.
When is your next open day?
Something like 05 July.
Is your site dead flat, or is it slightly tipped to the south?
Seeing it from the IoW it did not look flat,
but on site you cannot tell by eye.
There is a slight rise to the north. We had a topographical survey and I
think 30ft is the maximum height difference from south to north,
more than I'd thought by looking at it.
We also did some test drilling to see what was under the surface,
to see what had been dumped there, and it was just construction waste.
And there has been no subsidence to disturb the platform alignment;
it must have been well compacted on delivery.
How many years before paying back the initial funding?
About 25 years is the expected lifespan of the farm; payback is probably
about 12 years. We don't know what the technological
developments in panels will be. In 20 years they could have
astounding output. We have permanent planning permission,
so we are not limited to 25 years and could carry on beyond that.
I'm aware that solar PV panels degrade over time; have you picked up
any of that over the 4 years?
It's meant to be about 0.4% per year, but
we've not spotted it yet.
You do have an insolation meter, so you know how much sunlight
is hitting the site?
The pyranometers. The old system was a glass globe, a magnifying glass
burning a hole or a trace in a piece of card. Now it's done electronically:
a metal plate under a glass dome. The metal plate is very thin,
of absolutely even thickness, and the temperature of the plate correlates to the
amount of sunshine. I think it was about 2,000 GBP each;
we have 2 of them in case 1 is on the blink.
I was wondering if that degradation over time could be factored in
as part of the 2 and 2.4MW difference?
I think it's early days. I think they guaranteed 90% over 10 years;
we will see.
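The quoted 0.4%/year and the 90%-over-10-years guarantee are easy to reconcile with compound arithmetic; the figures come from the talk and the calculation is just a check.

```python
def retained_fraction(years, degradation_per_year=0.004):
    """Fraction of original output left after compound degradation at the
    ~0.4 %/year figure quoted in the talk."""
    return (1 - degradation_per_year) ** years

ten_year = retained_fraction(10)      # about 96%, above the 90% guarantee
twenty_five_year = retained_fraction(25)
```

At that rate the panels would still hold around 90% of original output over the farm's whole 25-year planned life, which is why the loss is hard to spot against year-to-year weather variation.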
Do you have any security problems from thieves or vandals?
We had vandalism on the composting toilet. It looks like a dog got in,
burrowing underneath, and could not get out; then a human
battered their way in to get the dog and caused a lot of damage.
Very bizarre, and it happened 3 times over a year.
There is 24-hour CCTV monitoring, loads of gates to get through, and a house
right next to it.
Tuesday 20 November 2018, Prof Douglas Connelly (NOC): CO2 sequestration.
I'll be talking of 2 projects mainly: STEMM-CCS, and the Sensation(?) project
completed last year, which developed various sensors to allow the environment to be
sensed at greater temporal and spatial resolution.
The curve of ever-increasing concentrations of CO2 in the
atmosphere, the full Keeling curve, is tied to a temperature increase.
As far as I'm concerned the data is irrefutable. For national emissions, it's
dominated by China. China has now made its own commitments
under the Paris Agreement. The focus should shift to per capita
emissions, which would show where the greatest gains are for
change of behaviour. However, China is building about 1 coal-fired
power station every 3 weeks at the moment. CCS is a way of mitigating that.
It can allow negative emissions; everything else is usually a zero-sum game.
Growing crops or trees takes up CO2, but it will eventually
return to the system. Most of our CO2 comes from energy.
We could of course switch to other energy sources that don't
release CO2, or release less CO2. There is an economic plasticity here:
we could switch to nuclear, with its own potential nasty emissions.
Other sources are transport, agriculture, and things like cement
and steel manufacture. There are ways of creating steel without
creating CO2, but making cement always creates CO2, via calcining the
limestone, which drives off CO2; there is no way of avoiding that. Cement use is higher in
Europe than most places. We're away from 1960s brutalist
architecture and the love of cement, but building in Spain and Italy
still mainly involves cement.
The latest figures from DEFRA for the Test Valley show the
emissions per person. Domestic, commercial and industry are about the
same. Transport is the big one in terms of emissions per person.
They've been dropping nicely. You can see the effect of the
recent financial crisis, 2008-2012; CO2 emissions dropped globally then
as well. Hampshire is one of the better counties in the UK.
So for CCS, how to capture your CO2? It's easy at a power plant:
all the CO2 sources are in 1 place. One demo project was to be at the
Don Valley, where there were cement works, steel works
and power plants. They were all going to pipe their CO2 to a central
point to be liquefied and shipped offshore. You can then
dump it into mine shafts, or use it for fracking, which has arguments
against as well. You can put it into old coal seams. Or there is the offshore
submarine environment, which is where the NOC comes in.
The largest onshore project was the Capsin(?) project
in Germany. But about 20 years ago there was a lake in Cameroon
which overturned, releasing CO2 and killing everything and everyone in the
valley below. The German press managed to turn the whole
community against doing onshore CCS. So for Europe it's all gone offshore.
After capture there is transport, storage, leakage(?), compaction
and implementation. We know the science of capture; it's well
understood, generally using amines, capturing CO2 in a chemical
form. It uses energy to do this, so a mass-balance energy equation is required,
energy-in versus energy-out. Similar to producing photovoltaics, other
systems are involved, and resource-use analysis.
In the UK we're pretty good at transporting gases, as we have
a national grid system for gas. A lot of other countries rely on
transport of cylinders of gas as a supply. We also have a lot
of experience of dealing with the North Sea and gas lines.
We are looking at piping CO2 out and injecting it under the
sea-bed. We have 2 types of storage areas. One is oil-type reservoirs
with a cap-rock or salt dome that is impermeable.
There are loads of old depleted gas and oil reservoirs under the North Sea.
The oil companies are very aware of the volume and the extent of
their fields, for when anyone starts injecting in there.
After 30 years of extracting oil and gas, they are under-pressured.
Active oil reserves can be over-pressured to only 2% over
background, as small as that. Go above that and they might
break the cap-seal and create a new channel for it to leak out.
For the likes of Goldeneye, we are aware of existing
fractures, normally abandoned test wells. There was oil there
for millions of years; if there were any natural cracks or fissures, the oil
would have gone. So the fact that we could come along and extract it means it's a
reasonably contained system. There are about 200 oil fields and roughly 250
sq km of sea-bed to look at.
The other storage type is saline aquifers, for example Equinor, a
Norwegian company 51% state-owned. They draw gas from a deep
gas well, separate the CO2 from the gas and inject it back
into a saline aquifer, a lake of brine in effect. That reservoir is at
2300m and the saline lake about 800m below the sea floor.
They've been doing that for 15 years at 1 million tons of CO2
a year; it is proven technology. Economically it works in
Norway as they have a carbon tax, the only place in Europe that
does. For every ton of CO2 a company stores they get 200,000
Euros in tax credits back. Developing the Sleipner field cost them
50 million Euros and it paid for itself in 6 years. The down-side is
you can't sell natural gas if it has more than 4% CO2 in it, as it
won't burn. They would have had to build some sort of facility to
separate out the CO2 anyway. Normally they would ship it ashore,
separate it out and release it to the atmosphere.
So something very positive there. The other environment
you see is in the USA. There, for a long time, they've done
Enhanced Oil Recovery, using waste CO2 and pumping
it back into the oil and gas field to squeeze out more oil and gas.
A sort of piggy-back technology, because they automatically
get a profit from the process. Their problem now is they have no
CO2 plants and it costs them a fortune to buy in CO2 to put in the
ground. They only have about 2% loss for the CO2 going through
the system, because it costs them a fortune and they don't want to lose
any of the CO2. For the North Sea, there is the question of how oil condensate
responds to using CO2 for a displacement gradient (?).
There is the problem of knowing the storage capacity you would
have. Globally we think there is 2 trillion tons of storage
capacity. For the gas and oil potential storage in the UK,
there is 7.3 Gt, and 70 Gt in aquifers. The problem with aquifers is
you're not so sure where they go. If you pump in CO2
there is a massive surface of sea-bed to monitor. For Sleipner, which uses the
Utsira formation of sand filled with brine, there is 1400 square km
of sea-bed. So for containment assurance you have to
demonstrate you are securing over that whole area. As for the law at the
moment, we are under UNCLOS and the London Dumping Convention.
If the UK was to operate carbon capture and storage off-shore,
an operator such as Shell at GoldenEye, with 1 million tons of CO2
a year planned for the next 30 years, would have to monitor
it for 50 years after they stop storing
CO2, or sooner if the regulatory body, together with the Crown Estate,
were happy for responsibility to go back to the country.
This was to make it more viable business-wise, considering the
time-scale of having to return for 50 years with nothing changing
after 20 years. They also had to change the London Dumping Laws.
At the moment you can't put CO2 under the seabed in the UK sector
or international sectors, because it counts as dumping and you can't
dump waste at sea. You also can't ship it across borders.
Scotland sees this as a massive industry, having most of the UK
storage capacity. They would store other people's CO2, but as soon
as you start shipping it across borders, you're shipping waste.
This arose out of nuclear waste transportation, the trains
that went across Europe containing nuclear waste, going into limbo
forever. Both those pieces of legislation are being addressed as I speak,
to allow some CCS. It is one of the few negative emissions technologies that
works, though not economically at the moment, as we have no
carbon tax system. The STEMM-CCS project was funded by the EU;
we got 50.9 million, co-ordinated at the NOC
with Shell and other partners, mainly academic and research organisations.
Of the partners, the UK contingent gets 10.5 million, so a big
project for the UK science community.
In a previous project with Equinor they were not covered by
full legal protection, as they were not a partner. Shell is now a full legal partner
and they can share data with us. They've given us just over 6 million GBP
of raw data, because they are covered by this consortium agreement.
So no partner in the project will go and sell it to someone else.
Full seismic surveys of the North Sea are worth a lot of money.
It was the first such project where a major company got involved
as a full partner. At one of the meetings I asked what
Shell would actually pay for. The response with regard to CCS was: cheaper
baselines, monitoring (they have large areas of seabed, how would they monitor
it), and quantification. So if there is a leak, and it is proven CO2
is leaking, how would you quantify it? At the moment, any leakage
negates the tax credits. The new legislation going through now would
be pro-rata: if you contain 90% of the CO2 then you get 90% of the tax
credit. A problem with the North Sea is that it is a "layer cake":
deep gas reservoirs, shallow oil reservoirs, owned by
different companies. If they all start operating CO2 stores
and there is a leak at point A, what if there are 4 or 5 reservoirs in
that area? So we are looking at tracing the CO2, and what you could
add to give it a signature for each reservoir.
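The proposed pro-rata credit rule is simple to state in code; the credit rate here is a placeholder, not the real figure:

```python
# Sketch of the current all-or-nothing rule versus the proposed
# pro-rata tax-credit rule for CCS leakage.
# The rate per tonne is a placeholder assumption.

def tax_credit(stored_t, leaked_t, rate_per_t, pro_rata=True):
    """Credit earned for CO2 stored, given measured leakage."""
    contained_t = stored_t - leaked_t
    if pro_rata:
        # proposed rule: contain 90% of the CO2, get 90% of the credit
        return contained_t * rate_per_t
    # current rule: any proven leakage negates the credit entirely
    return stored_t * rate_per_t if leaked_t == 0 else 0.0
```

The difference matters for monitoring: under the pro-rata rule, quantifying a leak accurately is worth money, whereas under the all-or-nothing rule a proven leak of any size wipes out the credit.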
We're working at the GoldenEye platform. Shell proposed the
5 million CCS demonstration project. It is 140 sq km, where the original
oil/water interface was. Most of its economically recoverable condensate has been
removed and it is underpressured by about 3%, so plenty of
capacity, and it has a pipeline. The pipe runs to the Peterhead terminal. The
plan was to build a power station with Scottish & Southern Energy
along with Shell. They would build a 25km spur pipe to join the
main pipe to ship CO2 back out to the platform. Some engineering
needed sorting: CO2 is an acidic liquid, quite corrosive. Oil
and gas pipelines are soft steel and would be eaten through, so a liner
would be required for the full 140km of pipe. All the injection
points on the platform would also have to be changed. For demonstration
purposes that would all be worth it. It is at about 120m depth in the
North Sea. They have a large amount of info on the environment
there. Most of it looks like a ploughed field due to trawler activity.
Trawlers should not go within 1km of the platform, so our
test site is between 0.5 and 1km from the platform.
The platform is unmanned, which causes technical problems.
There are issues around the monitoring: the scale of it is huge, and what
do we measure? Measuring CO2 is doable, but lots of other things in the
environment produce CO2. Everything that dies at sea drops down to the
seabed and rots down, producing CO2. Where do you monitor, all
of it, taking a ship out at 25,000 GBP a day to do 2 or 3 sites a day?
Then you miss the temporal scale: how often? How to fund it in a
sustainable way is one of the big issues. There is no point in storing CO2
if we're using more in resources and fuel. Fishermen are interested
in what we would be doing to the sea-bed, and in knowing we were not
killing off fish. The legislators would have guidelines for measuring
CO2 and pH, things that would or could indicate leakage.
Then, can we measure it? Looking at the planning for off-shore
wind-farms, they say they must measure everything.
Some of the things they dictate we can't measure, certainly not
in a cost-effective way. Then, how much would it cost?
So we've come down to a set of essential ocean variables,
things that would change with the introduction of CO2:
CO2, pH, alkalinity, nutrients, oxygen, conductivity,
temperature and depth. Oxygen indicates what sort of
biological impact may be occurring, and similarly nutrients for the
biology. If you start storing things under the sea-bed
and it comes to the surface, as it comes up it will
push out other things. The seabed of the North Sea is quite
anoxic: black, slimy, full of sulphur, nitrites, ammonia
and the reduced nutrient species. So looking at nutrients will
give an idea of precursors.
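A minimal sketch of how those essential ocean variables might be screened against a baseline; the variable names, baseline values and threshold are assumptions for illustration, not the project's actual method:

```python
# Screen a sensor reading of the essential ocean variables against a
# baseline (mean and standard deviation per variable). Values more than
# k standard deviations from baseline are flagged as possible leakage
# precursors. Names and thresholds are illustrative assumptions.

EOVS = ["pCO2", "pH", "alkalinity", "nutrients",
        "oxygen", "conductivity", "temperature"]

def flag_anomalies(reading, baseline_mean, baseline_sd, k=3.0):
    """Return the variables deviating more than k sigma from baseline."""
    return [v for v in EOVS
            if abs(reading[v] - baseline_mean[v]) > k * baseline_sd[v]]
```

The baseline is the crux: as the Tomakomai episode later in the talk shows, without a long enough baseline a natural event can trip the same flags as a leak.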
To test all this we will deliberately release about 3 tons of CO2
about 2-3m under the seabed of the North Sea, from cylinders on the seafloor,
then throw in every tool in the box we have: AUV systems,
lander systems, active and passive acoustics, ship-board
systems, an ROV. Test everything and see what does work.
We want to produce a combination of bubbles and dissolved
species. We know we can detect bubbles very easily.
But if your reservoir is 2.5km down, like the GoldenEye,
the chances are that if it comes up it will dissolve, as it is under
pressure. The first species we see will probably be
dissolved. Relying totally on bubbles would be easy,
it is how fish finders work, but we may miss the real killer stuff.
CO2 dissolved in water is dense and sits on the sea-bed, which is where
most of the life is. It could create a very thin layer of
suffocating CO2-saturated fluid there. So we need to be close to the
seabed to look at things as well, hence a series of landers placed there.
We leave Southampton on 27 April, starting in May with the James Clark
Ross and the MarieAnn, a German ship, taking an AUV and an ROV.
There is a lander out there at the moment on the North Sea seabed,
been there for a year now, collecting data for the baseline.
I was stunned that we have no temperature data, or even data on currents,
for the bottom of the North Sea. Plenty of surface temperature data,
but nothing for the deep North Sea, and it is only 120m deep.
The lander has a suite of sensors that includes chemistry as well as currents
and temperatures, to find out what it is now. The big fear for CCS operators,
the same with the Japanese partners at Tomakomai, is a false positive.
Tomakomai is a coastal area of Japan, storing CO2 in a depleted
oil reservoir. They were very good at getting the Japanese fishermen,
who are very powerful in Japan, on board. They injected for a year and
a half, after a year measuring the pH of the water at depth so they had the
range over the seasons. They started injecting, and 3 months later
the pH went too low. The government stepped in and closed it down
at vast cost. It turned out, comparing to the reference bay
next door, it was purely a natural thing: a strong bloom
of phytoplankton that sank to the bottom, rotted and changed
the pH. In their previous 1 year of data, there had not been a megabloom.
That Japanese site is back operating again, but at relatively small scale.
In Europe 2 projects are in gestation: the Statoil Sleipner field has been
operating a long time, then there is Rhode?, a Norwegian one, then hopefully the
Peterhead / GoldenEye one eventually.
On our monitoring site will be acoustic systems, video and photography.
With an AUV at 1.2m per second taking pictures at 60 Hz, that is a
huge amount of data, with blurry bits of nature. So we are trying
machine learning; it is called Beagle. From the photos, some are placed
on-line for the biological community to identify, crowd-sourcing.
Once known, that picture is marked, and eventually all the videos
could be processed automatically through the system, highlighting
named creatures, with the ones it cannot identify being put out to
the community. We're running out of PhD students to look at
20 million pictures. Similar could be used for forest coverage,
land-use change etc. We'll be using sea gliders, AUVs, drifters and
floats. The Argo system of 4,200 floats is out there, monitoring the
top 2,000m of water, rising and falling every day through the
water column, with global coverage.
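The triage between automatic labelling and crowd-sourcing described above might look like this; the confidence threshold and the record format are assumptions, not details of the Beagle system:

```python
# Triage classifier output: confident detections are labelled
# automatically, uncertain ones are queued for the citizen-science
# community to identify. Threshold and record format are assumptions.

def triage(predictions, threshold=0.9):
    """Split (image_id, label, confidence) records into an auto-labelled
    list and a crowd-source queue."""
    auto, crowd = [], []
    for image_id, label, confidence in predictions:
        if confidence >= threshold:
            auto.append((image_id, label))
        else:
            crowd.append((image_id, label))
    return auto, crowd
```

The crowd-sourced answers then become new training labels, so the auto-labelled share grows over time instead of needing PhD students for 20 million pictures.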
The recent question concerning global warming having stopped: where
did the heat go? The Argo system showed the heat went into the
deeper ocean. It was an odd year; the heat is still there,
no hiatus in global warming, it just went elsewhere.
New sensor technologies: the drivers behind these are that they must
be light and reliable, with well-constrained metrology so you don't have to
do traditional water sampling alongside for the next decade
until you believe them. They have got to be capable of long-term deployment,
sensitive and cheap. The SenseOcean project, a 6 million Euro
project we also hosted, was aimed at creating these.
A lot of it was done by the Ocean Technology Group at the NOC.
A top-down approach: we wanted to make it cheap by using the
same comms, same energy, same data management,
while using different technologies, so colorimetric(?), optical, acoustic,
UV light. Not a beauty contest; we wanted the best tech for each
individual environment. We measured nitrates in 2 different ways, and measured
CO2 in 4 different ways. We ended up with a set of kit that would
work in different environments: opto-sensors, lab-on-chip sensors,
and a multi-pH sensor system, a world first, into Kiel Fjord in the Baltic.
The electro-chemical system went awry relative to the others.
For the nutrient sensors, to bomb-proof test them we set one on a
glider and sent it to sea, to see if it worked. We were also taking
bottle samples; the nitrates track the chlorophylls as the plants
grow, drawing down the nitrates, and it looks coherent with the
real world. We entered our sensors into the Nutrient Challenge
in the US and we came second, facing some massive companies in this
field. We didn't come first because the component that held our
batteries leaked.
3 weeks ago was the world's first deployment of our alkalinity sensor:
40 hours at 120m of measurements against reference materials and samples,
to 2 micromole; do it manually and you are lucky to get 2 micromole.
People want this sort of tech. The aim is to show we can look at the
environment in a way that people can believe, and say CCS can
be done. There is still some political animosity towards this.
Greenpeace hate CCS because it allows business as usual
for the hydrocarbon companies. At the moment the oil companies
are releasing 100% of the CO2 we produce. So even if we lock it away
for 1,000 years, not the 10,000 years of our original model,
we're still giving ourselves a bit of a stopgap.
It is one of very few negative emission processes available.
It is the only way at the moment to make cement and steel
without producing CO2.
Donald Trump doesn't believe in global warming and also
believes in the coal industry. The US has just announced a
600 million USD project on CCS and coal(?), concerning
capturing CO2 from coal power stations and putting it into an
off-shore environment. It is not about the economics
of storing CO2, but about allowing continued CO2(?). In a way Greenpeace
have a point. So a lot of US colleagues are writing
"coal" instead of "oil", or "coal" instead of "cement", to get funding.
I was surprised when the Trump government said they'd support CCS.
Whether or not it is just to appease the coal-mining industry, it will be useful for
everyone. Norway has a new project, collecting CO2 from steel
and cement plants near Sweden and taking it by ship to
the North Sea, where they have a floating platform to inject via
a floating pipeline. Remember Deepwater Horizon: floating pipelines
can be challenging when anchored to the seabed with a structure at the
surface that is moving a lot. They're doing it to demonstrate it can be done.
We are moving towards Carbon Credits. The UK is talking more about
CCS after the 2017 budget, when a demonstration project got
axed. A 6-7 million project from that is keeping it alive in the UK.
The Scottish government are the biggest proponents of this, as they can see
it will be a big industry there.
We have great technical know-how in using off-shore facilities and resources,
Scotland having the best off-shore engineering on the planet.
So it is an ideal place to demonstrate it, even if on a small scale.
The GoldenEye platform is sitting there idle at the moment. Shell
is saying they will decommission it unless the UK government
comes back to the table to discuss potential future CCS.
It would be the best demo structure, as all the infrastructure is already
there. Once it is decommissioned, another would not be built.
So use it or lose it. The SenseOcean project was one of the NOC's most
successful projects ever: in the 4 years, 6 patents
and 3 successful products on the market. One of them, with a partner,
was a nitrous oxide (N2O) detector; N2O is a greenhouse gas, but there's not
much of it in the oceans. A Danish company developed it; Denmark has adopted the
new Water Framework Directive, and under Danish law they have to
monitor N2O at all their sewage plants. They sold 2,000 units in 1 month
for that, great timing. You cannot put one of those units in the
deep sea, or place it off-shore for a long time, but on-land sewage plants are fine;
a bit of serendipity there.
Video animation of the testbed: loads of people will drill you a hole if
it is 2000m, but they won't do it small and shallow. We went to Cellula Robotics,
now in Aberdeen as well as Canada. They came up with a solution:
a set of roller guides on a curve to push a curved pipe into the sea floor,
mounted on a robotic frame. The problem at the moment is what to do
with the 20m-long circular injection pipe on the ship.
It is all conducted via an optical and hydraulic cable from the ship.
With 240 kg of CO2 a day to be released, we have to know roughly
where it should emerge, as all our kit will have to be deployed on the
sea-bed in an area of about 1m in diameter. They've looked at every
connector in the book to see how to do this sort of operation
under water. The diffuser on the end is permeable steel.
They loved the challenge. The Australians want to buy this system
when it is done, for their off-shore CCS.
What can you do to mitigate the effects of seismic activity?
At the Sleipner field, Equinor have been monitoring seismicity.
Statoil could not do what they did previously, because they did
very little monitoring apart from micro-seismic sensors
and 4D seismic surveys from ships. They could model the reservoir, and they
could see that when the CO2 dissolves in the brine it changes the seismic resonance
in the reservoir, and so could see where the plume has spread.
It is behaving reasonably well according to the models. They also
monitored the injection pressure. Their argument was that if the
injection pressure had changed, that would mean it had blown out
somewhere else. 17 million tons of CO2 is now stored in
there, reasonably conforming to what the model showed.
No seismic changes, but they did get a slight change in
gravity on their gravimeter. I don't know the cause of that.
CCS is different to fracking, in that the reservoirs start
under-pressured. If you create any type of fracturing then you
have a real problem for CCS. GoldenEye has 2 caprocks,
one of the reasons it was chosen for demoing: one caprock just over the
top of the original oil and gas field and another 800m above.
There are 7 abandoned wells, however, so those sites will be
monitored as part of the baseline work. The idea of having 2
ships is so we can split into 2 separate teams.
Any leakage will not be from cracking or fractured rocks, but will most likely
come up one of the abandoned wells.
There are features in the North Sea called seismic chimneys, with a lot
of debate about them. Some think they are seismic artefacts from the
surveying; some of them are. There are a lot of old river beds at the bottom
which can cause reflections. These pre-glacial features have filled up
with tilth, and the Miocene then got laid down again. Some get inverted
as a reflection in the surveying process. Some are old river beds that look
like chimneys; some aren't. We've just done a drilling in the North
Sea into a pock-marked area about the size of this room on the
sea bed. You can see them in the seismic results, but no one
knows what they're made of. The surface expression is literally
pock-marks, and they leak methane. We think it is leakage from
an old natural gas reservoir; as it leaks it produces calcium carbonate,
and the chimneys would be calcium carbonate.
With the BGS we did a 25m drill into one of these: it is lots of
carbonate, chunks of it, and sediment. The concern for CCS
is that calcium carbonate dissolves easily in CO2, so with any
such chimneys near a reservoir, it will fizz away to the surface.
We're now dating the collected material; if it is tied to the
original Miocene period then ?
Is there any way of capturing CO2 in solid form?
At the shallow depths of the North Sea you don't get the gas hydrates.
If you have high pressure and/or low temperature you can have a clathrate,
ice basically, methane trapped in water molecules; CO2 does the same thing, but
much deeper. So you have to be in deep ocean or cold ocean.
It can't be in deep sediments, as they are warm. Most reservoirs are 45 to 50
deg C; GoldenEye is at 45 deg. For our test bed, we have a big
cylinder. Originally we were to lower down pallets of CO2 cylinders,
but no one would sell them to us: the waste connotation, and no one
wanted to be associated with anything that went wrong.
So we had to commission a container-sized cylinder, and have to
heat the valve port. As you let CO2 out of the liquid
there is adiabatic cooling and freezing, so a lot of energy in
batteries on the rig to keep it warm. The background
seawater is 7 to 10 deg. Once it is out it should be ok.
We lower the container onto the seabed and leave it for 24 hours while doing the
drilling; then the pipe to the ROV will be 100m away.
One reason for the distance depends on where the current is coming
from: the kit on the seabed could create a shadow, and we want to
see how the gas behaves normally. We have to detect dissolved CO2 and
bubbles, and quantify them. So a nice acoustic system: with multiple
channels it is possible to identify the gas from the bubble noises, a side project.
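One standard way to relate bubble noise to bubble size is the Minnaert resonance; a minimal sketch, with seawater density and the depth value as assumptions (this is illustrative physics, not the project's actual acoustic processing):

```python
import math

# Minnaert resonance: a gas bubble in water "rings" at a frequency set
# by its radius and the ambient pressure, which is how bubble size can
# be estimated acoustically. Density and depth values are assumptions.

def minnaert_frequency(radius_m, depth_m=0.0, gamma=1.4,
                       rho=1025.0, p_atm=101325.0, g=9.81):
    """Resonant frequency (Hz) of a bubble of given radius at depth."""
    pressure = p_atm + rho * g * depth_m  # atmospheric + hydrostatic
    return math.sqrt(3.0 * gamma * pressure / rho) / (2.0 * math.pi * radius_m)
```

A 1 mm-radius bubble near the surface rings at roughly 3 kHz; the same bubble at 120 m rings considerably higher, so ambient pressure has to be accounted for when interpreting the noise.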
With the sensor and analysis technology and autonomous gliders, has
anyone managed to emulate the shark and its stereo sense of
smell, that could direct a rover to a source?
We have a whole project at the university on that, a development
of AI. There is a system in the USA called Sentry, with an iron(?)
sensor measuring dissolved iron. The data went back to the
brain on the AUV. It starts like mowing the lawn, but if it found something
it would spin around and slowly try to focus in.
If it sniffed something above normal, it would continue until the signal
got weaker, then turn and operate like ants do.
It was watching how ants find things, with set angles, that
suggested this approach.
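A toy version of that ant-like casting behaviour, run on a hypothetical smooth plume; the concentration field, step size and turn angle are all illustrative assumptions, not the Sentry algorithm:

```python
import math

# Toy ant-like "casting" search: keep moving while the scent gets
# stronger; when it weakens, turn by a set angle and try again.
# The plume field, step size and turn angle are assumptions.

def concentration(x, y):
    """Hypothetical smooth plume centred at the origin (the source)."""
    return math.exp(-(x * x + y * y))

def cast_search(x, y, heading, steps=400, step=0.1, turn=math.radians(40)):
    """Greedy casting search; returns the final position."""
    best = concentration(x, y)
    for _ in range(steps):
        nx = x + step * math.cos(heading)
        ny = y + step * math.sin(heading)
        c = concentration(nx, ny)
        if c >= best:          # scent stronger: keep going this way
            x, y, best = nx, ny, c
        else:                  # scent weaker: turn like an ant and retry
            heading += turn
    return x, y
```

Starting a couple of metres from the source, this homes in to within roughly a step length of it; with a fixed turn angle the vehicle can only ever try a fixed set of headings, which is the "set angles" idea taken from the ants.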
So at the first anomalous sniff, how far would it have to be?
For 2 micromole (?), the amount of CO2 we will be injecting
would probably create a 0.001 change in pH over 3 sq m, and beyond that
spread it is dissolved. We are trying to emulate a very slow release from
a reservoir, because that is what the regulators will want to see.
If there's a big blowout, everyone knows it. There was a big methane
blowout in the North Sea when BP hit a reservoir, creating bubbles on the
sea surface you could see from a helicopter, and it almost sank the
ship due to loss of buoyancy. It is the small stuff and the dissolving
issue that is the problem. We did a similar experiment injecting 30 tons of CO2 at a
Scottish loch, from the shore: bubbles, but most dissolved in only
10m of water. The regulators have scientists working for them,
so are fully aware of the problems. We want to know that SeaSense
works, and to be sure that when used it is not actually doing
damage. Just because it would do damage remotely does not make that alright;
you can't off-shore your problems. The density of CO2 is a real
problem: if it is leaking, and leaking in dissolved
form, it will hug the seabed. Most AUVs won't go below 5m
above the floor. We have a project to use a slow crawler on the
sea-bed. Every 3m it will inject a sensor and sniff, and that
will do away with the boundary-layer problem.
The North Sea is very active; anything we put in there will
dissolve and spread away quickly. There are sensitive biological
communities out there, like fish-breeding sites.
We don't know, with the lander that is currently deployed,
whether it is still there; it could have been trawled away by a fisherman.
We built in some trawl resistance, so trawl rollers should roll
over it, but a trawl board weighs 5 tons and it would not survive a direct hit.
Our lander sends off a datapod every 3 months. We've had 2
so far; one washed up on a Norwegian beach off Tromso in October.
The March release appeared on a beach in Wick and transmitted
its signal, but by the time I could get the local coastguard it
had been washed out to sea again. There was a problem there, as the beacon
required a glass dome on top to allow Iridium communication;
they did a 3D-printed version in plastic, which is porous to
seawater. The plastic got saturated with seawater and did not
reliably transmit. It worked for a while on the beach after it dried out,
then waterlogged again.
There is a backup of a central datastore on the lander itself,
eventually with 1.5 years of data.
The engineers love trying to work out all this remote stuff.
It is a great learning curve for what we can use again, like the drill rig
Oz already wants. That is Project Gorgon off Perth: onshore,
but onshore of an island, with the reservoir under the sea bed.
You capture the carbon; how long do you think the carbon will
stay underground, 100,000 years?
It is designed for 10,000 years.
So after 10,000 years it leaks out again?
Yes, well, the minimum containment is 10,000 years, based on the
models. If you don't disturb it seismically or anything, it will
only disturb if over-pressured by more than 2%. Unless you drill
back into it, or there is an earthquake, the CO2 should be contained for ever.
Something that will help contain it is the formation of minerals
with CO2. In Iceland there is a project where they're injecting CO2
into basalt, forming what is basically new limestone. They can do that there as they
have fresh basalt, from a volcano next door, which is very reactive.
In the North Sea you do get slow mineralisation. In 10,000 years there
will be a combination of liquid CO2, and at the rock there would be
minerals that would also act as a secondary cap.
So from the models, a minimum of 10,000 years. Beyond 10,000 years it is
not possible to model with any certainty. Those reservoirs held gas for
millions of years, so unless perturbed or broken in any way,
they should hold it in.
So in 10,000 years everyone around then will know where they all are
and won't disturb them?
Therein lies the issue. We may have chilled the Earth by that point and they
may start leaking again. We hope to find ways to reduce CO2 emissions; this
is just a stopgap measure. No one accepts this as being a solution.
But as the only chance to halt at 1.5 deg C, this is the only
game in town at the moment, and we'd need to start rolling it out
quite quickly. The UK is scheduled to have a full-scale
operation in the North Sea by 2028. The demonstration is aimed at storing 10 million
tons by 2028, to do the full proof of concept.
For your sample sites, are there areas that are naturally higher in CO2?
Yes, the variability is astounding.
I see you are putting the photos to the biological citizen science; have you
looked at the microbiological community? Microbiologists did a quick
experiment in Scotland ...?
You have naturally higher CO2 levels?
... centres, slightly low-lying; plants rotting tend to coagulate and are
exposed to ... CO2 ... when it rots down.
You have a measure of that biological community, because that could be
a bio-measure: your control, which is now, and if you introduce your
CO2, compare that to the naturally high areas and see if the
composition of each community changes?
We'd love to. We did another experiment where the biologists were
exposing animals like sea urchins to CO2 at increasing concentrations; one dissolved at 20,000
micro-atmospheres. I would not care about sea urchins if there were 20,000
micro-atmospheres of CO2 in the sea, as humans would not be around to worry;
everywhere would be too toxic.
There are some interesting changes with pH and climate change, because the annual range
in the surface of the North Sea is about 0.6 pH units, which is a lot
considering we are looking at 0.02 changes over time. But of course we are
lifting the baseline over time, so the 0.6 is over an increasing baseline,
and you will start affecting the biological community. Some biology will
thrive in that, opportunistic species filling the niches. We might create
problems for ourselves, maybe kill a keystone algal species that ...?
I was at a talk in Norway where fisheries people were there, and
with the warming climate the
cod are invading the Arctic, and they are killing the local
keystone species called saithe by out-competing it.
So they want fishermen to go to the Arctic to kill the cod
to bring the local balance back. Unintended consequences, like the
school of 90 tuna found off Greenland last year.
If you have calcium and magnesium silicates and a bit of water
around, there is a fair chance that the CO2 will turn them into
calcium magnesium carbonate, locking it up?
This is what excites some of the pure geologists, because it becomes a self-sealing
system, and even if it fractures then it would migrate through more of that.
Any sort of silicate salts will eventually convert themselves into carbonates,
as long as there is a bit of water around.
Which is partly what we think is happening in those chimneys.
We'll know more when we do the full CT tomography; it is a natural
seepage system rather than ours. It gives a lot of promise to > 10,000 years.
Are there any thoughts on carbon capture for cars or home boilers?
For cars, use electric. Data out today shows we will make more CO2
by using electric cars, because of the generation and the inefficiencies of exchange
and storage. There is big hope in hydrogen. A big Scandinavian scheme is
built around carbon capture and utilisation, using it as a
chemical reagent in pharmaceuticals, from pure CO2. The Dutch use CO2 a lot in their
greenhouses, a good viable product in the Netherlands.
All transport of it is based on having 99.9% CO2; if there is anything else in
there, then there are other issues with that, like sulphur compounds. CCS with
current technologies only works when carbon is actually 90 dollars a ton.
The idea is that it gets cheaper as time goes on;
they think it will come down to 50 once the first 2 projects are off the
ground. Shell is doing a project, Quest, in Canada, tied
to tar sands. Tons of CO2 are produced processing tar sands.
That will work economically there, as licences would otherwise be required
for dealing with the CO2. We don't pay the true cost of CO2 at the
moment; we pay nothing, effectively. Factor in climate change and it would
be a lot more than 90 a ton. Twice the EU has tried getting a carbon
trading scheme off the ground. With the system involving Russia, it got loads
of credits and then sold them to the polluting industries, who carried on doing what they
were doing, and Russia made loads of money. One of those laws
of unintended consequences of economics: someone will always make a profit.
If you're monitoring these capped areas over 20 years, and they are
in areas with the furrows, will there be a conflict of interest
with the fishing industry, who don't want objects littering the sea bottom,
or will you have to have controls in place?
Yes, it requires organisation. We do at the moment: our lander that is
currently down there is listed on fishing bulletins. Being close to a rig,
fishermen don't go near them, as they don't want to
snag their nets, if nothing else. As far as NIB(?), we declare a flight
path. In a 24-hour period we cover 4 to 8 km. The fishermen
are used to moving out of the way of the seismic people,
with long streamers behind them, as they are not for turning.
We got buzzed by a helicopter once; they were coming through and
had not informed us. We've had one question back via
Marine Scotland from a company laying a cable between Peterhead
and Norway. We'd have to confirm it is not in the 3 weeks we're
doing our project; but if it is, we'll move somewhere else.
We have a good working relationship with the coastguard and similar;
they will help us in that respect. There is a line in the anti-dumping measures
that states you can do sustainable resource exploitation in the
North Sea, which can cover CCS research.
In Japan it was very necessary to get the fishermen on board at the
outset, because they are the biggest stakeholder. Fisheries also have a nice
existing set of data, probably 50 years of landing data. If something were to
leak, followed by a fisheries collapse, that would be a red flag. Pulling the
research pipe out at the end (a large diameter circle) will be a challenge at
the end of the project; the ROV people think they could drag it.
When we've finished, then a few more core samples, to check we've
not perturbed anything. We may have downward movement from our pipe.
We will use an arc of pipe, because we know if we use a straight pipe
the gas will come straight back along the pipe and out at the seabed entry
site. We'll introduce 2 pipes, one at much higher pressure
to definitely produce bubbles for bubble acoustic purposes, if the
low pressure one fails acoustically.
Tuesday 18 December 2018, Dr Clare Eglin, of the Extreme Environment
Laboratory at Portsmouth University.
"Cold water - friend or foe?" - is cold water immersion a dangerous
threat that should be avoided, or is it beneficial to health and something
that should be encouraged?
In Portsmouth we are surrounded by water. Anywhere on the south coast is not
far from water, and the responses to cold water immersion
are important to us. Historically there has been a lot of swimming and
outdoor activity in and on the water, from the Victorian era on.
At the moment the sea temp is about 12 degrees. The problem emerges mainly
in the spring, when we can have high air temps, but even on 1 May the sea
temp is still very cold; in the North Sea it would still be 5 or 6 degrees.
So a severe challenge if you go into the water then.
You could end up in the water in an unexpected scenario: at Ramsgate in
April, someone was swept off the pier into the water in a rough sea.
At a Venice canal, an overcrowded jetty, and many ended up in the water.
The Herald of Free Enterprise, when 193 people died. On the other
hand, in certain sports, particularly raft racing, you are more likely to
be in the water than out. Someone rescuing a duck in icy water: not a
good idea. This time of year you get people putting on Santa
costumes, running into the water and thinking it will do them some good.
2017 drowning statistics: 277 people drowned; fortunately this
has been decreasing in recent years. The figures are for open water, so
excluding baths and tubs, but could involve rivers or lakes as well as sea.
42% are accidental. Most of those are men, most are young,
and about 44% never intended to be on the water at all.
They were running or walking alongside a canal or a shore
and fell or were swept in. There are several stages of immersion associated
with risk of drowning in cold water, 4 stages. The initial stage, in the
first 3 minutes, due to skin cooling, is the cold shock response. Short term
responses after that, for about 30 minutes, where there is superficial
neuromuscular cooling, cooling of the muscles and nerves. Only after 30
minutes is hypothermia the problem. Historically for a lot of
drowning victims the cause of death has been ascribed to hypothermia,
but unless you have someone who is very small, they will not be
hypothermic until after 30 minutes of immersion. We think it's fairly
rare that hypothermia kills. Then there is circum-rescue collapse,
when someone collapses immediately before, during or soon after rescue.
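The four timed stages just described can be sketched as a small lookup.
This is an illustrative sketch only, using the approximate time windows
given in the talk, not clinical guidance; circum-rescue collapse is
omitted because it is tied to the moment of rescue, not to elapsed time.

```python
def immersion_stage(minutes: float) -> str:
    """Rough stage of cold-water immersion by elapsed time (sketch only,
    time windows as quoted in the talk)."""
    if minutes < 3:
        return "cold shock response"    # skin cooling, gasp, hyperventilation
    if minutes < 30:
        return "neuromuscular cooling"  # muscles and nerves cool, swim failure
    return "hypothermia"                # only a risk after ~30 minutes
```

For example, `immersion_stage(10)` falls in the short-term neuromuscular
cooling window, while `immersion_stage(45)` is in hypothermia territory.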
The ice-bucket challenge: there are 2 versions of it, with different
responses to the cold water. The cold shock response is due to rapid cooling
of the skin and that is mediated by the brain. When we immerse people in the
lab we see a lot of difference in their responses.
Q: Will you talk about the style that the water is delivered in as that
makes a difference?
Yes. Quite a lot of the point of the ice-bucket challenge is that if you
just put a load of ice over you, it's not as bad as if it was icy water,
as the contact is minimal; icy water gets far more contact. The responses
from students dunked in cold water: a heart-rate/ECG trace, arrowed where
he went into 15 deg water, so warmer than present coastal water. He has a
high heart rate before he enters. Also shown are breathing pulses, a
pneumotachograph of rate and volume that resets every 10 litres, so the
steeper the curve, the greater the volume; also the CO2 level at the end
of each breath. On entering, the heart rate increases a bit, but most
noticeable is the huge increase in respiration, and a large gasp occurring.
It's the same as taking a shower when someone else turns on a tap, disturbs
the mixer and the shower goes cold: you give a large gasp.
That hyperventilation is also blowing off CO2, and if it goes on long enough
you get pins and needles in fingers and toes and problems with blood flow
to the brain, so it could affect your decision making. With the increased
breathing, it will be difficult to hold your breath at that point.
So for accidental immersions, not expecting to be in the water,
there is shock and panic on top of this.
Rapid cooling of the skin activates the sympathetic
nervous system and gives the gasp and hyperventilation, and for
young people that is the main problem: the uncontrolled hyperventilation
and inability to hold the breath, with water splashing over you.
You are likely to aspirate the water and you might drown.
The other problem is with the hyperventilation. When swimming
normally you coordinate breathing with your stroke, but if your
breathing rate is now 60 breaths per minute, you can't swim like that.
Then comes swim failure: you can no longer keep your head above water
and therefore drown. For older people and people with
underlying heart conditions the problem is the sudden increase in heart
rate and increase in blood pressure. We have vasoconstriction of blood
vessels in the skin, returning blood to the core, putting a lot of
strain on the heart, so any problems with the heart will be exposed then,
and so possibly a heart attack, resulting in drowning.
For young people it's probably the respiratory responses that are the main
problem. But if you know about this response and you know just to
hold on to the side or onto anything floating until that response has
died down, after 3 minutes you will get your breathing back
and can then swim or whatever.
So how much water does it take to drown? The average person's lung volume
is 4.6 litres, and the average gasp on cold water immersion
in 15 deg water takes in 2.4 litres. But it only takes 1.5 litres of sea
water to cause drowning. Just 1 gasp under water, or with a wave going over
you, will be enough; a small amount can cause drowning.
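As a quick sanity check on those volumes (the figures are the approximate
population averages quoted in the talk):

```python
# Volumes quoted in the talk (approximate population averages).
lung_capacity_l = 4.6      # average adult lung volume, litres
gasp_volume_l = 2.4        # average gasp on immersion in 15 deg water
lethal_sea_water_l = 1.5   # aspirated sea water sufficient to drown

# A single underwater gasp exceeds the lethal aspiration volume...
print(gasp_volume_l > lethal_sea_water_l)              # True
# ...and the lethal volume is only about a third of lung capacity.
print(round(lethal_sea_water_l / lung_capacity_l, 2))  # 0.33
```

So one gasp taken with the airway under water more than covers the volume
needed to drown.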
The mammalian diving response is very developed in diving mammals.
It is formed of apnoea (breath hold), bradycardia (heart rate slowing)
and vasoconstriction; the animal becomes like a heart-lung
machine. This enables oxygen conservation, so diving mammals can go
down for 2 hours with no problem. At the surface a seal's heart rate will
be 140 bpm; let it dive and the rate drops to 40 bpm. Before heart-rate
telemetry monitors were developed, a wire was attached to them, so they
could only do restrained diving. Unrestrained, it could go lower, to
10 bpm, presumably a defensive mechanism: as they would not
know when you would let them up again, they evoked the
maximum diving response. Looking at blood flow: brain OK,
eyes have some for underwater foraging, some to the lungs,
but otherwise little blood flow, so huge oxygen conservation,
enabling the dives. In humans it's not so developed. A study we did
looked at babies to see if they still had a diving response;
it was suggested they do and that we lose it as adults.
2 stimuli cause that diving response: apnoea and face immersion.
Face immersion stimulates nerves in our face, the trigeminal nerves,
activating the parasympathetic nervous system and slowing our
heart rate down. The heart rate gradually slows.
It takes a while to develop, so if you can't hold your breath for very
long, you tend not to get much slowing of the heart. People who can
hold their breath for 2 or 3 minutes get a very slow heart rate.
When we tested babies, they only hold their breath for about 1 second,
so we saw little bradycardia, and they were wriggling too much to
see anything on the ECG as it was all over the place.
The only crying from the babies we got was when they had to
come out, so I don't think it was distressing to them.
What if we put together the cold shock response that we see with head-out
immersion and the diving response of whole body immersion?
Strange things then happen to the ECG. At the end of the breath hold there
is a bit of bradycardia, after holding their breath under water,
and then they broke their breath hold, but still under water.
Strange complexes appear in the ECG, and for some individuals
that may then precipitate a heart attack. If you break your
breath hold under water, you may think you will drown, but there are
situations where you will do that, e.g. in underwater helicopter escape.
Anyone who flies in a helicopter over the North Sea
or similar large seas will have to undergo helicopter underwater
escape training. Helicopters are top heavy, tending to invert,
so you may have to escape from a helicopter under water. So a helicopter
mock-up is released into a pool and you have to exit through a window.
As you can hold your breath for a maximum of about 20 seconds
in cold water, even with specialised immersion suits, and you
have to allow time for the rotors to stop spinning before any escape
is possible, escaping probably takes about 60 seconds, so they have to
provide you with some form of emergency breathing apparatus.
That may be a little SCUBA set or just an empty plastic bag
that you can then breathe into.
So they break their breath hold and then breathe either
from an emergency device or an air pocket or a STASS? system.
For some of the deaths we think it's an autonomic conflict.
Cold water immersion activates both the diving response
and the cold shock response, and both are having opposite effects
on the heart: the parasympathetic stimulation will slow the heart and the
sympathetic will increase the heart rate. The heart doesn't know
whether to speed up or slow down and starts kicking out ectopics.
For the healthy or fit, these are usually asymptomatic.
All the people we've tested have been asymptomatic, because we screen
them before entering cold water. That's not the case for someone who
accidentally falls into cold water; someone with a predisposing condition
such as ischaemic heart disease would be more susceptible.
Not all people respond the same. If someone has undertaken
several immersions, their responses are reduced.
An example is the ventilatory response to being in cold water,
which decreases with repeated immersions. Heather Massey,
a colleague at the university, does ice swimming, has swum the Channel
there and back in a relay and will be swimming from Ireland to the
UK. She is well adapted to all this. We compared the responses of Heather
and myself going into 15 degree water while trying to read some text.
I found it rather uncomfortable but Heather did not. It was the first time
we'd tried speaking during such immersions; usually we are collecting
expired air.
Q: Did you find that speaking, because you were concentrating
on something else, improved your response?
It probably did, and also I'd done some immersions
before, and I've immersed hundreds of people, so I know what
responses to expect. I have to say my memory of it was worse
than the reality; I remembered it as being a lot worse
than shown in that video. If I showed you the heart rate response,
for the second immersion you usually see a greater response.
Because when you first immerse someone in cold water they
have no idea what will happen; the second time they know what's
happened and remember it as something nasty, and so you get a greater
heart rate response, a distress response, the second time.
Q: You had some histograms showing the pattern going down.
Will that pattern change if there was, say, a 3-month gap?
Does the body get used to it over shorter periods of repeat?
We've done daily immersions and that's enough to reduce the
response; also 2 immersions a day, and again it's reduced. We have crammed
all that into 1 day and got a reduction. Once we've got that reduction,
we then tested them 3, 7 and 14 months later and there was still a
reduction. We don't know if the intervening immersions
were enough to top them up; we didn't do a bunch of immersions
and then wait 14 months. I'm not sure how long you need between
each immersion to not get the adaptation. I would suggest that if
you did it once a week, you might not get a reduction.
You can give someone psychological skills training and that
is equivalent to some of the adaptation. There is quite a psychological
component to this.
Q: Would the marines be advised to do this?
They might be advised to take cold showers, as this will show a
reduction. It's as much that knowing what the response is like
will help you out as well: knowing that after 3 minutes you will
be able to cope with it. That is part of the cold-water/winter training.
After the first 3 minutes we get peripheral cooling.
Muscles and nerves start to decline in function. Look at someone
swimming: they go from having a flat posture in the water
to becoming more upright, to a point where they can no longer
keep their head above water. Stroke rate increases, stroke length
shortens, increasing uprightness increases the sinking forces,
and it becomes a vicious circle, no longer being able to keep their
head above water. On top of that, as they cool, they will be shivering,
which interferes with swimming, and you get the sensation that
you can't feel where your hands and feet are, so you are no longer doing
a smooth swimming action, it's more jerky, and you are losing
proprioception as well. A gradual decline in your body's responses.
We looked at different water temps with people swimming for up to
90 minutes in a flume; 25 deg is meant to be thermo-neutral for people
swimming. For 4 of them core temp decreased, whereas for 6 it increased.
Drop the temp to 18 deg and some people had to be removed. At 10 deg we
managed to have 4 people who completed 1.5 hours of swimming.
Those people managed to swim faster, produce more heat and had more
subcutaneous fat; fat on the arms in particular seems able to protect
you on such swims.
So there is a huge variation in the responses of people; some people
had their core temp still increasing. Thin individuals will cool
faster if they swim. Larger people, if they can swim, cool more slowly
because of that insulation from the subcutaneous fat. If you're sitting
still, then your muscles also provide insulation; as soon as you start
exercising you lose that insulation, as well as stirring the water
around you.
Does clothing help? At 12 deg we had thin individuals, with clothing or
a swimming costume. In a swimming costume they could swim for longer
and further, but we had to pull them out as their core temp fell
too quickly. With clothing their core temp did not fall as quickly,
but they got exhausted from the drag of the clothing and stopped swimming
due to this fatigue. So there is a trade-off between being protected from
hypothermia but then fatigued too quickly. Going on to the third stage,
hypothermia: if there is no buoyancy aid, this is where we get to before
drowning.
At 5 deg and hypothermia, most people will be fairly incapacitated.
Some people will function at lower core temps, but with no life jacket
it is probably difficult to keep the head above water. If they could hold
onto something then they could probably survive longer. A core temp of
about 35 deg is about as far as they will survive, because below that they
can't keep their airway clear. With a decent life jacket, the core temp
can go down to about 25 deg. There are a few people who can go well
beyond that; Anna Bågenholm is one. She was resuscitated
after accidental hypothermia and a core temp of 13.7 deg;
she was submerged for about 45 minutes in ice-cold
water, and she survived.
An example: a boy fell into ice water and was found after
30 to 40 minutes. Pulled out, he had no heart beat
and no brain activity, but he recovered. There are a few such
cases each year. Prof Tipton at Portsmouth University looked into
why that might be, looking at the data of those who
survived immersion. The majority occur in very cold water,
mostly below 6 deg, so there is something about very
cold water that seems to protect individuals. Is it the
diving response that diving mammals use? It's unlikely, because it's
fairly weak in humans. It's more developed in the likes of free-divers,
but probably there will be a cold shock response occurring as well,
so autonomic conflict. Could it be selective brain cooling?
This is what we think, but it's not something you can do an
experiment on. Looking at the literature on drowning:
people still make breathing movements under water, and we think,
for very small individuals, water is flushed in and out of the
lungs, selectively cooling the heart and brain.
It seems to occur only with very small adults or children.
We know surface cooling would not be enough to cool
people down: from anaesthesia, where they try to cool people
quickly, we know cooling through the skin will not cool the brain
quickly enough. You would need a 2 deg decrease in core temp in
10 minutes, cooling down to about 30 deg. Flushing the lungs with cold
water would enable that to happen.
There are guidelines for the RNLI and for firefighters on finding someone
submerged. First a risk assessment. At 30 minutes of looking for the
individual, reassess the situation: is the person small, is the water cold,
how long have they been under? The guidelines are that in cold water under
6 deg, for small children, keep looking for up to 90 minutes
of submersion, because they might survive.
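The guideline above can be expressed as a simple decision rule. This is
an illustrative sketch only, based on the figures given in the talk; real
search guidance also weighs the risk to the rescuers.

```python
def keep_searching(water_temp_c: float, small_or_child: bool,
                   minutes_submerged: float) -> bool:
    """Sketch of the submersion search guideline described in the talk:
    always search for the first 30 minutes, then continue up to 90
    minutes only if the water is below 6 deg C and the casualty is a
    small person or child (who might benefit from cold protection)."""
    if minutes_submerged <= 30:
        return True  # search and reassess at the 30-minute mark
    if water_temp_c < 6 and small_or_child:
        return minutes_submerged <= 90  # cold-protective effect possible
    return False
```

For example, a small child submerged for 60 minutes in 4 deg water would
still justify searching, whereas an adult in 10 deg water at 45 minutes
would not, on this rule.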
Looking at USA data on over 1000 victims and the water temps
involved, most were in very cold water; cold they considered between 6 and
12 deg, and very cold below 6 deg. For the vast majority the outcome was
death by drowning. Of those who had a good recovery, most were young,
female and had been submerged for only 6 minutes. Survival at 45 minutes
is extremely rare, and they didn't find any protective effect from cold.
So this cold-protective effect is very rare, but it's probably worth
looking for somebody out to 90 minutes, as long as that is balanced
against the risk to the rescuers.
If you have survived hypothermia and the rescue boat is coming towards
you, you can still collapse: this is what we call circum-rescue
collapse. Pre-rescue, they see someone coming to rescue them and they
stop trying to fight; adrenalin levels drop, and that had been
supporting their blood pressure. Or it could be they try to help the
rescuers, up a net say, and with a cold heart that is then too
much stress. In the water there is a hydrostatic pressure against
you, pushing blood towards the heart; if they then do a vertical lift,
the blood pressure will suddenly drop. They will not be able to
counter that, as their baroreceptors are cold and don't respond.
Normally, doing that to someone who is not cold, their
baroreceptors will be activated, their heart rate will increase
and blood will return to the heart and brain.
After rescue it could be after-drop, which is where the core temp
continues to cool, or it could be secondary drowning.
So much for the bad: cold water can also be used for protection.
It is used a lot in anaesthesia and operations. Cooling the body
reduces metabolic demands, so if they have to reduce the blood flow to a
particular tissue, as for open-heart surgery, then you will get a better
outcome. For people who are too hot, like at the London Marathon or the
Bramley? run, having under-estimated conditions, we know placing such
people in very cold water is a very effective way of quickly cooling
them down. Even dogs know that plunging into cold water will cool them
down. Athletes are often seen jumping into ice baths after doing their
exercise, and it does seem to reduce inflammation, aiding recovery,
but it's not much better than massage or doing stretching.
It does improve function, but only in the sense of comparing to
doing nothing, and most athletes will not do nothing after an exercise
regime. However, if you stay in there too long you may end
up with a non-freezing cold injury, like trench foot.
Internet searching on the benefits of cold water immersion or showering
turns up a lot of claims. There is a bit of evidence of strengthening the
immune system. Cleanse your circulation system: sounds great,
but nothing proven. Improve your blood circulation:
laser Doppler shows it decreases, because of vasoconstriction.
Detoxify your organs and provide a fresh supply of blood for them:
fake news. Reduce blood pressure: acutely it will increase
blood pressure; with repeated immersions it might be reduced.
Contracts your muscles: yes, it's called shivering. Eliminates
toxins: except for any skin surface contamination being washed off,
no. Strengthens the nervous system: it will stimulate the sympathetic
nervous system. Mostly a load of rubbish.
But returning to immune function: if we do repeated cold water
immersions over several months, we see slight changes in some immune
functions, increasing T-cells and lymphocytes but not most of the
immunoglobulins. There is a slight increase of some, but any biological
importance is questionable. People have taken repeated showers, hot
and then cold, and they found a self-reported reduction in sick leave
from work but no change in illness.
There is a problem with self-reporting, but there were a lot of people
in that study. Another study looked at cold-water swimmers and their rate
of upper respiratory tract infections, compared with co-habiting partners
who were exposed to much the same germs but did no swimming.
Cold swimmers had fewer colds, but that was no different to
normal swimmers in normal swimming pools.
There are a lot of recent anecdotes relating cold water swimming to
benefits for mental health, but currently it is just anecdotal evidence.
Heather Massey is involved with a couple of studies
exploring those claims. When we look at outdoor swimming,
it's not just the cold, there is the exercise component,
so we are trying to tease out which bits are the most important.
On the other hand, someone with mental health issues probably
doesn't care which bit is relevant, as long as overall it works.
On the dangerous side of things, that is where most of the research
has been done and we have good evidence there, on the negative
side of cold-water immersion. More work is required on any
beneficial positive effects.
Q: Go along a promenade or over a bridge and every couple of
hundred metres there will be a lifebelt with a long length of
polypropylene rope. Polypropylene rope floats on water, but even if
untangled it would not be long enough to be anchored at one end and
reach through the buoy to a person. Is there design in that, in that a
long length of such rope gives a person a greater chance to hang onto
something, as you would not want to throw the buoy directly at someone?
They also deploy throw-lines, which are little more than
floating bits of rope, so I'm guessing it's by design, for more
chance of having something to grab. It also helps a rescuer
to grab hold of something other than just a buoy.
Q: There was an Icelandic fisherman who survived a very long time,
14 hours or so, in 5 deg water; was much learned about his metabolism?
If you look at survival times in cold water, if you have a life jacket,
even down to 5 deg then 24 hours of survival is possible; then you are
starting to see non-thermal problems having an effect.
Q: He had ordinary clothes, but no shoes, and he swam 2 or 3 miles
to shore?
This is where subcutaneous fat comes in. Looking at people
who have done cross-Channel swimming, they were all fairly
rotund and short in stature. The problem now with cold-water
swimming is that it's becoming very popular, and so lean swimmers, ideal
for warm water swimming, are now going into cold water
and they don't have the protection. The ideal cold water swimmer
has plenty of subcutaneous fat and plenty of muscle, so they can
generate heat and swim, but also the fat to keep that heat in.
For the plots of core temp falling, the person who
fell fastest was a very fit firefighter; he could swim incredibly fast,
but he just radiated out heat, heating the water very quickly.
Q: Is there a difference between men and women?
Yes, but not enough research has been done. In particular, in the area of
health benefits there is just one study, and that showed a significant
difference in immune function. For cold-water immersion, I'm
researching historical data and getting undergrads to look at
differences between men and women in the cold-shock response.
The problem is we have such large variation in responses anyway
that it would mask perhaps relatively small differences between men and
women, so a large database is required. For cooling, that is mainly
differences in body composition: the larger surface area to body mass
ratio of women, greater body fat, and the amount of heat they can produce,
which is usually related to their fitness levels; men tend to be fitter.
So there are lots of factors; it depends what question you're asking.
"Is the average woman likely to cool faster than the average man?"
is a very different question from "are there any sex differences?".
Then you have to match for body fat or match for fitness,
so a complicated question. There is also an age effect.
Older people cannot produce as much heat, usually due to
fitness levels, but also with age you cannot vasoconstrict as well.
A lot of these effects can be offset by physical activity, and so if
matching for fitness levels, those differences tend to disappear.
Q: Have you looked at the inverse of cooling, the speed of rewarming?
Like immersing hands in tanks and measuring recovery with...
We do it as a measure when looking at non-freezing cold injury.
The chronic symptom of non-freezing cold injury is very
poor circulation, so sufferers are susceptible to cold; their hands and
feet will be extremely cold in a cold environment, like Raynaud's. Our
test is conducted at 30 deg, so most people have nicely
warm hands and feet; sufferers otherwise have very cold hands and
feet, and given a cold challenge we look at how they re-warm.
After every cold water immersion we tend to put people in a warm
bath to rewarm them, as that's the most effective way. The people who
cooled slowly will rewarm very slowly; the quick coolers will rewarm
quickly.
Q: Have you looked at differences between individuals of different
ethnicities?
Yes, we've looked at different ethnicities and we find African
and Caribbean individuals vasoconstrict at the peripheries,
hands in particular, at a warmer temp. So in a cooling
profile they vasoconstrict at a warmer temp, and when you
start rewarming them they won't open up until a higher temp.
We think they are more susceptible to cold injuries, because
they are shutting off the blood supply earlier.
We've looked at foot cooling in men and women and the main
difference is the size of the foot: small feet will cool sooner than
large feet. When we match females with big feet
or men with small feet, we find the same rate.